Description
When dumping and reimporting, Postgres rebuilds the indexes, and btree index entries are size-limited. At some point someone sent an event with ASCII art in its aggregation_key, which landed in my public.event_relations table; the value is 3497 characters long.
As a result this happens on any dump + import:
```
ERROR: index row size 2920 exceeds btree version 4 maximum 2704 for index "event_relations_relates"
DETAIL: Index row references tuple (9835,12) in relation "event_relations".
TIP: Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing.
```
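The hint in the error points at indexing a hash instead of the raw value. A minimal sketch of what that could look like, assuming the index covers the columns named in the error message (the actual definition of `event_relations_relates` in Synapse's schema may differ):

```sql
-- Sketch only: index an MD5 hash of the oversized column instead of the
-- raw value, so the index entry stays small regardless of key length.
-- Table/column names are taken from the error output; the concrete index
-- definition used by Synapse is an assumption here.
CREATE INDEX event_relations_relates_md5
    ON event_relations (relates_to_id, relation_type, md5(aggregation_key));
```

Note this changes the index semantics (equality lookups must also hash), so it is a workaround idea rather than a drop-in replacement.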
Steps to reproduce
Have a row with a very long aggregation_key in your database, then dump and reimport it.
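The reproduction can be sketched as inserting an artificially long key (column names and values here are made up for illustration; 2704 bytes is the btree limit with the default 8 kB page size):

```sql
-- Sketch: create a row whose aggregation_key exceeds the btree index
-- row limit. The event/relation IDs are placeholders, not real events.
INSERT INTO event_relations (event_id, relates_to_id, relation_type, aggregation_key)
VALUES ('$fake_event_id', '$fake_target_id', 'm.annotation', repeat('x', 3497));
```

A subsequent `pg_dump` followed by a restore onto a fresh server should then fail while recreating `event_relations_relates`.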
Homeserver
matrix.midnightthoughts.space
Synapse Version
v1.106.0
Installation Method
Docker (matrixdotorg/synapse)
Database
Postgres 15
Workers
Single process
Platform
Kubernetes with a pg cluster
Configuration
No response
Relevant log output
This happened on import of a pg_dump:
```
ERROR: index row size 2920 exceeds btree version 4 maximum 2704 for index "event_relations_relates"
DETAIL: Index row references tuple (9835,12) in relation "event_relations".
TIP: Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing.
```
### Anything else that would be useful to know?
It would be a good idea to have a migration that removes faulty long rows, since they break indexes: in the best case this just lowers performance, and in the worst case it breaks backups in a way where the dump has to be manually edited, reimported, the row manually deleted, and the index recreated before anything works.
The query I used to find the offending row: `select event_id, relates_to_id, relation_type, aggregation_key, length(aggregation_key) from public.event_relations where length(aggregation_key) >= 2000 ORDER BY length(aggregation_key) DESC LIMIT 10;`
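The manual fix described above could be sketched like this (the size threshold is the btree limit for the default page size; note `octet_length` measures bytes, which is what the index limit actually counts, whereas `length` counts characters):

```sql
-- Sketch of the manual cleanup: delete rows whose aggregation_key can
-- never fit in a btree index entry, then rebuild the affected index.
DELETE FROM event_relations WHERE octet_length(aggregation_key) >= 2704;
REINDEX INDEX event_relations_relates;
```

On a live server this should be done with care (and ideally inside a transaction up to the REINDEX), since it permanently removes relation rows.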
I think it might have just been silently broken the whole time, as the index was probably created before the event was received. A REINDEX on the database before the dump, with the same page-size settings, actually works too. I assume Postgres just skips broken rows at runtime to prevent downtime? It only became an issue when dumping and reimporting on a fresh server.
Oh, and since I forgot to link to it: matrix-org/synapse#12101 already limits this key. So this issue is only about historic data people may have received before that PR; it can't happen again on versions including that PR, AFAIK.