Pathfinder currently iterates over the range of blocks in the filter and checks each block's Bloom filter for potential matches. Unfortunately, loading and checking the per-block Bloom filters is fairly slow: for millions of blocks this takes at least a few seconds even on fast hardware.

We could implement a more elaborate scheme similar to what Geth does: store an aggregate (rotated) Bloom filter for a range of blocks and use that single filter to check for potential matches across the whole range (~32k blocks). This would let us filter on keys in a block range at a fraction of the cost of loading every per-block Bloom filter in the range.
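As a rough illustration, here is a minimal sketch of the rotated layout in Rust. All names (`AggregateBloom`, `insert`, `check`) and parameters (`M`, `K`, `BLOCKS_PER_FILTER`) are hypothetical choices for this sketch, not Pathfinder's or Geth's actual API. The key idea is transposing the per-block filters: one bitmap row per Bloom bit position, so checking a key against ~32k blocks touches only `K` rows instead of ~32k individual filters.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const M: usize = 2048; // Bloom filter width (bit positions); assumed value
const K: u64 = 3; // hash functions per key; assumed value
const BLOCKS_PER_FILTER: usize = 32_768; // blocks covered by one aggregate

/// Rotated layout: one bitmap row per Bloom bit position. Bit `j` of row
/// `i` is set iff block `j` (relative to `from_block`) set bit `i` in its
/// own per-block filter.
struct AggregateBloom {
    from_block: u64,
    rows: Vec<Vec<u64>>, // M rows, each BLOCKS_PER_FILTER bits wide
}

impl AggregateBloom {
    fn new(from_block: u64) -> Self {
        let words = BLOCKS_PER_FILTER / 64;
        Self {
            from_block,
            rows: vec![vec![0u64; words]; M],
        }
    }

    /// Derive K bit positions for a key via double hashing.
    fn bit_positions<T: Hash>(key: &T) -> impl Iterator<Item = usize> {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        let h1 = h.finish();
        let h2 = h1.rotate_left(32) | 1; // force odd so positions spread
        (0..K).map(move |i| (h1.wrapping_add(i.wrapping_mul(h2)) % M as u64) as usize)
    }

    /// Record that `key` appears in `block`.
    fn insert<T: Hash>(&mut self, key: &T, block: u64) {
        let j = (block - self.from_block) as usize;
        assert!(j < BLOCKS_PER_FILTER);
        for row in Self::bit_positions(key) {
            self.rows[row][j / 64] |= 1 << (j % 64);
        }
    }

    /// Return candidate blocks that may contain `key`: AND the K rows
    /// together, so the whole range costs K row reads instead of one
    /// filter load per block.
    fn check<T: Hash>(&self, key: &T) -> Vec<u64> {
        let mut acc = vec![u64::MAX; BLOCKS_PER_FILTER / 64];
        for row in Self::bit_positions(key) {
            for (a, w) in acc.iter_mut().zip(&self.rows[row]) {
                *a &= *w;
            }
        }
        // Any bit still set marks a candidate block.
        let mut blocks = Vec::new();
        for (w, word) in acc.iter().enumerate() {
            let mut bits = *word;
            while bits != 0 {
                let b = bits.trailing_zeros() as usize;
                blocks.push(self.from_block + (w * 64 + b) as u64);
                bits &= bits - 1; // clear lowest set bit
            }
        }
        blocks
    }
}

fn main() {
    let mut agg = AggregateBloom::new(0);
    agg.insert(&"transfer_event_key", 7);
    agg.insert(&"transfer_event_key", 1234);
    println!("candidates: {:?}", agg.check(&"transfer_event_key"));
}
```

As with any Bloom filter, `check` can return false positives but never false negatives, so the candidate blocks would still be verified against the actual per-block data.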
For an overview (and some extra ideas), see the Geth implementation or this write-up.