partition_data_proc and run_maintenance with infinity timestamps #519
The default partition feature is built into PostgreSQL, and the constraint on the default is always the anti-constraint of all the other child tables combined. So trying to assign that boundary to the built-in default will not work. I'm honestly not sure whether PostgreSQL itself can support a partition boundary of infinity; I would have to test.
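The default-partition behavior described here can be illustrated with plain PostgreSQL declarative partitioning. This is a minimal sketch; the table and column names are borrowed from the example later in the thread, and the partition names are hypothetical:

```sql
-- Sketch: a range-partitioned table with one explicit child and a default.
CREATE TABLE public.foo_bar (
    event_cause text NOT NULL,
    event_ts    timestamptz NOT NULL
) PARTITION BY RANGE (event_ts);

CREATE TABLE public.foo_bar_p2024 PARTITION OF public.foo_bar
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- The default's implicit constraint is "NOT (within any sibling's bounds)";
-- it has no explicit boundary of its own, so no boundary (infinite or
-- otherwise) can be assigned to it.
CREATE TABLE public.foo_bar_default PARTITION OF public.foo_bar DEFAULT;
```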
Please see the documentation for what
Agreed, but it could avoid trying to create child tables out to infinity.
```sql
postgres=# insert into public.foo_bar (event_cause, event_ts) VALUES ('bar', 'infinity');
```
Yes, the value itself is supported in PostgreSQL. But does that table have any constraint boundaries on the
Interesting:
So, looking into this more, I see how you're expecting this to work. However, as you saw, an infinite value cannot be partitioned out: by definition, no finite range can contain it.
So, technically, it can live in the default, since the default is unbounded at the lower/upper end, but it can never be partitioned out because there is no other possible table for it to exist in. I'm honestly not sure I want to support this, though. The default table should not have values left in it. The reason is that every time a new child table is created, every value contained in the default has to be compared against the new child table's bounds. And if I tried to support this, the partition_data function itself would also have to check every single value in the default to see whether it is infinite, every time it tries to create a new child table. Either way, it's not very efficient in the long run.
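As a rough sketch of why this is costly (assuming a range-partitioned `foo_bar` table with rows retained in its default partition; the query is hypothetical and not pg_partman code):

```sql
-- Every new child creation makes PostgreSQL scan the default partition to
-- verify that no retained row falls inside the new bounds:
CREATE TABLE public.foo_bar_p2025 PARTITION OF public.foo_bar
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');

-- And a partition_data-style routine that tolerated infinity would have to
-- re-filter the default on every pass, along the lines of:
SELECT * FROM public.foo_bar_default WHERE event_ts <> 'infinity';
```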
Perhaps I can make it a flag to the partition_data procedure in 5.x. But just to be sure: infinite values are considered valid in your case, and this wasn't errant data that someone put in? Because if it's errant data, then having it throw an error would be preferred, and you should just remove it from the default.
Yes, they are valid.
OK. I'll see if I can make this an option for the partition_data functions in the future. I'd recommend working with NULL if that's a possibility, though.
Apologies, NULL isn't going to work either: I currently have a requirement that the partitioning column be NOT NULL. It would also run into the same issue — which table would the NULL value go into? The only one it fits in is the default.
This is where people start to use 9999-12-31 as a workaround...
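A sketch of that sentinel workaround (an assumption about how users typically apply it, not something pg_partman does itself; table and column names follow the earlier `foo_bar` example):

```sql
-- Replace 'infinity' with a far-future sentinel that a real partition
-- boundary can actually contain:
UPDATE public.foo_bar SET event_ts = '9999-12-31' WHERE event_ts = 'infinity';

-- A normal child table can then hold the sentinel rows:
CREATE TABLE public.foo_bar_sentinel PARTITION OF public.foo_bar
    FOR VALUES FROM ('9999-12-31') TO ('10000-01-01');
```

The trade-off is that the sentinel is a lie the application must remember; queries for "open-ended" rows now have to match the magic date instead of `event_ts = 'infinity'`.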
Yes, I can see that. I'll see about adding that option, with a caveat in the docs that it can affect maintenance performance.
partition_data_proc and run_maintenance do not appear to handle infinity timestamps.
While
```sql
bre=# call partman.partition_data_proc('foo');
ERROR:  Attempted partition time interval is outside PostgreSQL's supported time range.
Unable to create partition with interval before timestamp 294277-01-01 00:00:00+01
```
Setting infinite_time_partitions = false doesn't help.
Expectation: infinity timestamps should remain in the default partition, and new partitions should be created up to max(timestamp) where timestamp != 'infinity'.
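The expected stopping point could be computed with a query along these lines (a hypothetical sketch, not pg_partman's actual logic; it assumes the `foo_bar` table from earlier in the thread):

```sql
-- Find the greatest finite timestamp; premake partitions only up to this
-- point, leaving 'infinity' rows behind in the default partition.
SELECT max(event_ts) FROM public.foo_bar WHERE event_ts <> 'infinity';
```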