
ENH: improve support for datetime columns #486

Open
wants to merge 28 commits into base: main

Conversation

@theroggy (Member) commented Oct 17, 2024

This PR improves support for datetime columns, mainly in read_dataframe and write_dataframe:

  • Fix: when a GPKG was read with use_arrow, naive datetimes (no timezone) were interpreted as being UTC. So a naive time of 05:00 h was interpreted as 05:00 UTC.
  • Fix: when a .fgb was read with use_arrow, for datetime columns with a timezone the timezone was dropped, so 05:00+5:00 was read as 05:00.
  • Fix: when a file was written with use_arrow, for datetime columns with any timezone but UTC, the timezone was dropped, so 05:00+5:00 was written as 05:00 (a naive datetime), for all file types.
  • When reading datetimes with use_arrow, don't convert/represent them as being in UTC time if they have another timezone offset in the dataset.
  • Add support to write columns with mixed timezones. Such a column typically needs to be of object dtype containing pandas.Timestamp or datetime objects, as "standard" pandas datetime64 columns don't support mixed timezone offsets within a column.
  • Add support to read mixed timezone datetimes. These are returned in an object column with Timestamps (see the sketch after this list).
  • For the cases with use_arrow, the fixes typically require GDAL >= 3.11 (OGRLayer::GetArrowStream(): add a DATETIME_AS_STRING=YES/NO option OSGeo/gdal#11213).
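
A minimal sketch of the round trip this PR is aiming for (the file name, column name and values are illustrative; the use_arrow code paths typically need GDAL >= 3.11):

import geopandas as gpd
import pandas as pd
from shapely.geometry import Point

from pyogrio import read_dataframe, write_dataframe

# Mixed offsets cannot live in a single datetime64 dtype, so the column is
# object dtype holding pandas.Timestamp values.
gdf = gpd.GeoDataFrame(
    {
        "datetime_col": pd.Series(
            [
                pd.Timestamp("2024-10-17 05:00:00+05:00"),
                pd.Timestamp("2024-10-17 05:00:00+01:00"),
            ],
            dtype="object",
        ),
        "geometry": [Point(0, 0), Point(1, 1)],
    },
    crs="EPSG:4326",
)

write_dataframe(gdf, "mixed_tz.gpkg")
result = read_dataframe("mixed_tz.gpkg")
# With this PR the offsets survive the round trip: the column comes back as an
# object column of Timestamps instead of being converted to UTC or made naive.
print(result["datetime_col"])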

Resolves #487

@theroggy changed the title from "ENH: deal properly with naive datetimes with arrow" to "TST: add tests exposing some issues with datetimes with arrow?" on Oct 18, 2024
@jorisvandenbossche (Member) left a comment

Thanks for diving into this and improving the test coverage!

pyogrio/tests/test_geopandas_io.py — 4 review threads (outdated, resolved)
@theroggy changed the title from "TST: add tests exposing some issues with datetimes with arrow?" to "ENH: improve datetime support with arrow for GDAL >= 3.11" on Jan 16, 2025
@theroggy changed the title from "ENH: improve datetime support with arrow for GDAL >= 3.11" to "ENH: improve read support for naive and mixed datetimes with arrow for GDAL >= 3.11" on Jan 16, 2025
@theroggy changed the title from "ENH: improve read support for naive and mixed datetimes with arrow for GDAL >= 3.11" to "ENH: improve read support for datetimes with arrow for GDAL >= 3.11" on Jan 16, 2025
@theroggy changed the title from "ENH: improve read support for datetimes with arrow for GDAL >= 3.11" to "ENH: improve read support for datetime columns with arrow for GDAL >= 3.11" on Jan 16, 2025
@theroggy changed the title from "ENH: improve read support for datetime columns with arrow for GDAL >= 3.11" to "ENH: improve support for datetime columns with mixed or naive times" on Jan 17, 2025
@theroggy marked this pull request as ready for review January 18, 2025 08:43
@jorisvandenbossche (Member) left a comment

@theroggy thanks for further looking into this!

I do have some doubts about how much effort we should put into covering corner cases and what the desired default behaviour should be; see my comments below.

Comment on lines -54 to -59
# if object dtype, try parse as utc instead
if res.dtype == "object":
    try:
        res = pd.to_datetime(ser, utc=True, **datetime_kwargs)
    except Exception:
        pass
Member commented

From your top post explanation:

Add support to read mixed timezone datetimes. These are returned in an object column with Timestamps.

First, I don't think this will work with upcoming pandas 3.x (we are suppressing the warning above that parsing mixed timezones is going to raise unless utc=True is passed, and that you have to use apply with datetime.datetime.strptime instead to get mixed-offset objects)
(but the tests are also passing, so maybe I am missing something)

Second, a column of mixed-offset objects is in general not particularly useful... So changing this behaviour feels like a regression to me. I understand that we might want to provide the user the option to get this, but by default, I am not sure.
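
A rough sketch of the pandas behaviour referred to here (the strings and format are illustrative):

import datetime

import pandas as pd

ser = pd.Series(["2024-10-17T05:00:00+05:00", "2024-10-17T05:00:00+01:00"])

# What the warning steers towards: convert everything to UTC, which gives a
# proper datetime64 column but loses the original offsets.
utc_col = pd.to_datetime(ser, utc=True)

# The element-wise alternative it suggests to keep the offsets, which yields
# an object column of datetime objects.
obj_col = ser.apply(lambda s: datetime.datetime.strptime(s, "%Y-%m-%dT%H:%M:%S%z"))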

@theroggy (Member, Author) commented Jan 19, 2025

From your top post explanation:

Add support to read mixed timezone datetimes. These are returned in an object column with Timestamps.

First, I don't think this will work with upcoming pandas 3.x (we are suppressing the warning above that parsing mixed timezones is going to raise unless utc=True is passed, and that you have to use apply with datetime.datetime.strptime instead to get mixed-offset objects) (but the tests are also passing, so maybe I am missing something)

Yes, I saw. Do you know what the rationale is for forcing people in pandas 3 to use a more inefficient way (apply) to get to their data?

Second, a column of mixed-offset objects is in general not particularly useful... So changing this behaviour feels like a regression to me. I understand that we might want to provide the user the option to get this, but by default, I am not sure.

For starters, to be clear, this is only relevant for mixed timezone data. Data saved as naive or UTC timestamps will just stay "regular" pandas datetime columns.

For mixed timezone data, it depends on what you want to do with the datetimes. If you just want to look at them, show them, or keep them because they are part of the table data, the Timestamps look just fine to me. If you really want to do "fancy stuff" with the datetimes, it will indeed be more convenient in pandas to transform them to e.g. UTC for some operations, to get a datetime column instead of an object column.

Regarding default behaviour, it feels quite odd to me to transform data by default to a form where information (the original timezone) is lost, also because when you save the data again it will then be saved as UTC, so the timezone information is lost in the file as well.

To me, the other way around is more logical: by default you don't lose data. If you want to do "fancy stuff" with a datetime column that contains mixed timezone data, you convert it to e.g. UTC, typically in an extra column (as sketched below), because most likely you will want to keep the timezone information again when saving.
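
A small sketch of that workflow (column names and values are illustrative):

import pandas as pd

df = pd.DataFrame(
    {
        "datetime_col": pd.Series(
            [
                pd.Timestamp("2024-10-17 05:00:00+05:00"),
                pd.Timestamp("2024-10-17 06:00:00+01:00"),
            ],
            dtype="object",
        )
    }
)

# Derived UTC column for datetime computations; the original object column
# keeps the offsets so they can still be written back to the file later.
df["datetime_utc"] = pd.to_datetime(df["datetime_col"], utc=True)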

Comment on lines 504 to 507
elif col.dtype == "object":
    # Column of Timestamp objects, also split in naive datetime and tz offset
    col_na = df[col.notna()][name]
    if len(col_na) and all(isinstance(x, pd.Timestamp) for x in col_na):
Member commented

I am a bit hesitant to add custom support for this, exactly because it is not really supported by pandas itself; do we then need to add special support for it?

Right now, if you have an object dtype column with timestamp objects, they already get written as strings, which in the end preserves the offset information (in the string representation).
It might read back as strings (depending on the file format), but at that point the user can handle this column as they see fit.

Member (Author) commented

If they are written as strings to the output file without the proper metadata, it varies from format to format whether they will be recognized as datetimes when read. For text-based files (e.g. GeoJSON) they will typically be recognized as datetimes because the data types are "guessed" when the file is read; for files like .fgb and .gpkg they won't be, because the file metadata will be wrong.

That's not very clean, and as it is very easy to solve, I don't quite see the point of not supporting it properly?

Comment on lines 662 to 666
elif isinstance(dtype, pd.DatetimeTZDtype):
    # Also for regular datetime columns with a timezone, mixed timezone offsets
    # are possible when there is a difference between summer and winter time.
    df[name] = col.apply(lambda x: None if pd.isna(x) else x.isoformat())
    datetime_cols.append(name)
Member commented

Is this needed for properly typed datetime64 columns?
What does GDAL do with those values? Write as the UTC value? And with this change it will write it, still as a datetime (because of adding the metadata?), but with offset?

FWIW, if we need to do this, you can do df[name].astype(str) to avoid the apply

@theroggy (Member, Author) commented Jan 20, 2025

Is this needed for properly typed datetime64 columns? What does GDAL do with those values? Write as the UTC value?

It is actually a bit weird. Based on what I tested, when the column is in the UTC timezone, the data is written correctly to the file. If the column has another timezone, the timezone is simply dropped and the naive times are written. This is the case for both GPKG and e.g. GeoJSON.

Hence, naive and UTC times can be written via a native arrow datetime column, but in the other cases they need to be written via a sidestep through a string column.
There is a TIMEZONE option that can be specified in the GDAL arrow code path... but it only works at the layer level, not per column, so that's not super useful either. Based on a quick test it also didn't seem to work for timezones like "CET".

And with this change it will write it, still as a datetime (because of adding the metadata?), but with offset?

With this change offsets will be correctly written as timestamps, indeed because of the custom arrow metadata being added and interpreted by GDAL from GDAL 3.11 onwards (OSGeo/gdal#11213).

FWIW, if we need to do this, you can do df[name].astype(str) to avoid the apply

Interesting! Note however that df[name].astype(str) outputs "... ..." instead of "...T...", so not strictly valid ISO strings. But astype gives significantly better performance (2 sec instead of 12 sec for the nz buildings data), and it is probably not that important, so I changed it. If we'd rather have "...T...", we can add .str.replace(" ", "T") for GDAL < 3.11; that only adds 0.5 sec.
I used df[name].astype("string"), otherwise None/NaT is also cast to a string.
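
A small sketch of the difference being discussed (the value is illustrative):

import pandas as pd

ser = pd.Series([pd.Timestamp("2024-10-17 05:00:00+01:00"), pd.NaT])

ser.apply(lambda x: None if pd.isna(x) else x.isoformat())
# -> "2024-10-17T05:00:00+01:00" (ISO "T" separator), but relatively slow

ser.astype(str)
# -> "2024-10-17 05:00:00+01:00", and NaT becomes the literal string "NaT"

ser.astype("string")
# -> same values, but missing values stay missing (<NA>) instead of "NaT"

ser.astype("string").str.replace(" ", "T")
# -> "2024-10-17T05:00:00+01:00", if the "T" separator is needed for GDAL < 3.11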

pyogrio/tests/test_geopandas_io.py — review thread (outdated, resolved)
Comment on lines 382 to 383
if use_arrow and ext == ".gpkg" and __gdal_version__ < (3, 11, 0):
    pytest.skip("Arrow datetime handling improved in GDAL >= 3.11")
Member commented

What is not yet working for the case with no tz for GPKG?

Member (Author) commented

With use_arrow and GDAL < 3.11, datetimes in a GPKG without a timezone are interpreted as being UTC. So a naive time of 05:00 h is interpreted as 05:00 UTC.

This is one of the issues listed in #487 (comment)
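
An illustrative sketch of that symptom (file and column names are hypothetical):

from pyogrio import read_dataframe

# With use_arrow=True and GDAL < 3.11, a value stored in a GPKG as the naive
# "2024-10-17 05:00:00" comes back tz-aware as "2024-10-17 05:00:00+00:00",
# i.e. interpreted as 05:00 UTC instead of staying naive.
df = read_dataframe("naive.gpkg", use_arrow=True)
print(df["datetime_col"])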

- Test result < GDAL 3.11 instead of skipping
- Add UTC test
- ...
@theroggy marked this pull request as draft January 20, 2025 16:31
@theroggy marked this pull request as ready for review January 20, 2025 22:11
@theroggy changed the title from "ENH: improve support for datetime columns with mixed or naive times" to "ENH: improve support for datetime columns" on Jan 22, 2025
Successfully merging this pull request may close these issues.

Differences in how datetime columns are treated with arrow=True