readtimearray on duplicate timestamps behaviour #451
Hi @klangner,
well, in this case, I think you can load the CSV into a DataFrame first, then remove the duplicated rows, then …

I guess it's …

I currently implemented this with a very dirty hack, namely passing in …
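The DataFrame round trip suggested above might look roughly like this. This is only a sketch: the file name `prices.csv` and the column names `:timestamp` / `:value` are hypothetical placeholders, not part of this issue.

```julia
using CSV, DataFrames, TimeSeries

# Hypothetical input file and column names -- adjust to your data.
df = CSV.read("prices.csv", DataFrame)

# Keep only the first row for each duplicated timestamp.
unique!(df, :timestamp)

# Sort by time so the index is increasing, then build the TimeArray.
sort!(df, :timestamp)
ta = TimeArray(df.timestamp, df.value, [:value])
```

Depending on the data, you may instead want to aggregate the duplicated rows (e.g. average them) before converting, rather than keeping only the first one.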
Hi @imbrem,
About the out-of-order cases: I'm also curious about that. Is there an algorithm that can determine which entries are out of order?

Hi @iblis17, …
Oh, so in this case the data is still in proper order; only the time index is not ideal.
But for this case, I do not think the methods provided by TimeSeries.jl can be applied to these data.

Ah, and I just recalled that we have an option …
Anyway, I made a PR for accepting a duplicated but sorted time index.
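Under the behaviour that PR describes (duplicates allowed, order required), a quick check for whether a timestamp vector qualifies could be sketched like this:

```julia
using Dates

ts = [Date(2020, 1, 1), Date(2020, 1, 2), Date(2020, 1, 2), Date(2020, 1, 3)]

issorted(ts)                    # non-strict: sorted, duplicates allowed -> true
issorted(ts) && allunique(ts)   # strict: this index has duplicates -> false
```

`issorted` uses non-strict comparison, so a sorted index with repeated timestamps passes; combining it with `allunique` distinguishes the strictly increasing case.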
That works fine, but could it also be possible to add an option to actually remove out-of-order or duplicate timestamps, and/or go back and update their values in the result array? If desired, I can write the PR for this.
@imbrem yeah, PRs are welcome.
The updating behaviour still needs more discussion, and I need some time to think about it.
Currently, when trying to read data from a CSV file with duplicate timestamps, the function crashes.
Maybe it would be better to add a parameter to this function so that it reads as many rows as possible and then returns a partial result without crashing?
Or maybe just skip duplicate or out-of-order items?
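The "skip" behaviour could be sketched as a filter that drops any row whose timestamp is not strictly greater than the last accepted one, which removes both duplicates and out-of-order entries in one pass. The function name and argument shapes here are hypothetical, not TimeSeries.jl API:

```julia
using Dates

# Keep only rows whose timestamp strictly increases over the last kept row.
function keep_strictly_increasing(timestamps::AbstractVector, values::AbstractVector)
    keep = falses(length(timestamps))
    last_kept = nothing
    for (i, t) in enumerate(timestamps)
        if last_kept === nothing || t > last_kept
            keep[i] = true
            last_kept = t
        end
    end
    return timestamps[keep], values[keep]
end

ts = [Date(2020, 1, 1), Date(2020, 1, 3), Date(2020, 1, 2), Date(2020, 1, 3)]
vs = [1.0, 2.0, 3.0, 4.0]
keep_strictly_increasing(ts, vs)
# drops the out-of-order Jan 2 row and the duplicated Jan 3 row
```

A reader option like this trades data loss for robustness, so it would presumably be opt-in rather than the default.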
BTW, is there some kind of optional type in Julia, like Haskell's `Maybe`? Maybe then at least return this type instead of crashing the program?
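Julia's closest analogue to Haskell's `Maybe` is `Union{T, Nothing}` (with `nothing` playing the role of `Nothing`/`None`). A hypothetical non-throwing wrapper around `readtimearray`, written only as an illustration of the suggestion above, could look like:

```julia
using TimeSeries

# Hypothetical wrapper: return `nothing` instead of throwing on malformed input.
function try_readtimearray(path::AbstractString)::Union{TimeArray, Nothing}
    try
        return readtimearray(path)
    catch
        return nothing
    end
end

ta = try_readtimearray("data.csv")  # "data.csv" is a placeholder path
if ta === nothing
    @warn "could not parse time series"
end
```

Catching the exception at the call site like this is idiomatic Julia; a richer design might return the partial result read so far, as suggested above, rather than discarding everything.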