With the various cool ways Lucene can now quantize KNN vectors (per-dimension scalar quantization, the upcoming RaBitQ, and maybe other cool algos with time...), the "hot RAM" required for efficient searching is much lower than the index size, because Lucene always keeps the original (float32 or byte) input vectors so KNN data structures can be recomputed accurately during segment merging.
Let's fix our KNN tooling to separately report the "hot RAM" required, by subtracting the index storage needed for the original vectors.
[Spinoff from https://github.com/apache/lucene/pull/13651/]
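A back-of-the-envelope sketch of the proposed accounting (a hypothetical helper, not part of Lucene's or luceneutil's actual API): hot RAM is roughly the total KNN index size minus the bytes consumed by the original vectors, since those are kept on disk only for accurate re-quantization during segment merges and need not be resident for searching.

```java
// Hypothetical sketch of the "hot RAM" computation described above;
// names and signatures are illustrative, not Lucene APIs.
public class HotRamEstimator {

    // Bytes needed to store the original (un-quantized) vectors:
    // bytesPerElement is 4 for float32 input vectors, 1 for byte vectors.
    static long rawVectorBytes(long numVectors, int dims, int bytesPerElement) {
        return numVectors * dims * (long) bytesPerElement;
    }

    // "Hot RAM" = total KNN index size minus the raw-vector storage,
    // since the originals are only read back during merging.
    static long hotRamBytes(long totalIndexBytes, long numVectors, int dims,
                            int bytesPerElement) {
        return totalIndexBytes - rawVectorBytes(numVectors, dims, bytesPerElement);
    }

    public static void main(String[] args) {
        // Example: 1M float32 vectors of 768 dims in a 4 GiB index.
        long total = 4L * 1024 * 1024 * 1024;
        System.out.println("raw vectors: "
            + rawVectorBytes(1_000_000, 768, 4) + " bytes");
        System.out.println("hot RAM:     "
            + hotRamBytes(total, 1_000_000, 768, 4) + " bytes");
    }
}
```

With per-dimension scalar quantization to a single byte, for example, the quantized copy is roughly a quarter the size of the float32 originals, so the hot-RAM number reported this way can be several times smaller than the on-disk index size.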