Memory requirements for generating a full block proof #1768
@ed255 do you know which FFT algorithm we are using? With the Cooley–Tukey FFT algorithm, we pad the evaluations to the closest power of 2 before doing iFFTs (FFTs on power-of-2 sizes are much cheaper). We should be using iFFTs not only for the commitments but also for polynomial multiplications, so the final polynomial degree can be much higher (unless you already accounted for that in the stats PR). So memory-wise, we may have more to gain than what you said.
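For reference, a minimal sketch of the padding cost in question (illustrative only, not the actual prover code):

```rust
// Hypothetical sketch: radix-2 Cooley–Tukey transforms operate on
// power-of-two sizes, so evaluations are zero-padded up to the next
// power of two before the (i)FFT.
fn padded_len(n_evals: usize) -> usize {
    n_evals.next_power_of_two()
}

fn main() {
    // A product of two polynomials with 2^26 evaluations each has degree
    // just above 2^26, so it must be padded to 2^27 points, doubling the
    // memory of that intermediate.
    let n = (1usize << 26) + 1;
    assert_eq!(padded_len(n), 1 << 27);
}
```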
Notice the benchmarks are taken at k = 26 with 32 chunks. What do you think about another straw-man idea: trading a smaller k for a larger number of chunks?
In halo2 all polynomials are stored in vectors, and these vectors are always preallocated with power-of-2 sizes, so I would say the padding happens implicitly (the unassigned elements in the vector simply stay zero by default).
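A minimal sketch of that implicit padding, with a placeholder type standing in for the real field element:

```rust
// Sketch: a column's backing vector is preallocated at the full 2^k size,
// so rows that are never assigned simply remain at the zero default.
// `F = u64` below is a stand-in for the actual field element type.
fn empty_column<F: Default + Clone>(k: u32) -> Vec<F> {
    vec![F::default(); 1usize << k]
}

fn main() {
    let col: Vec<u64> = empty_column(26);
    assert_eq!(col.len(), 1 << 26); // full power-of-two allocation, used or not
}
```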
The stats PR already considers the polynomials in the extended domain (whose size depends on the max expression degree). Is this what you mean? I believe the biggest source of memory consumption is the polynomials in the extended domain.
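As a hedged illustration of why the extended domain dominates (the loop mirrors the size computation in halo2's EvaluationDomain; the max degree of 9 is an assumed value):

```rust
// With n = 2^k rows and max expression degree d, the quotient polynomial
// has degree roughly n * (d - 1), so the extended domain is the next
// power of two above that. d = 9 below is an illustrative assumption.
fn extended_k(k: u32, max_degree: u64) -> u32 {
    let mut ext_k = k;
    while (1u64 << ext_k) < (1u64 << k) * (max_degree - 1) {
        ext_k += 1;
    }
    ext_k
}

fn main() {
    let ek = extended_k(26, 9); // 26 + ceil(log2(8)) = 29
    let gib_per_poly = ((1u64 << ek) * 32) >> 30; // 32-byte field elements
    println!("extended_k = {ek}, ~{gib_per_poly} GiB per extended polynomial");
}
```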
On a related note, the numbers from the stats utility are theoretical. In practice the memory usage of the process may be higher due to:
Yes! I think that's something we could easily do now. On one hand, we have two dimensions, memory and compute, and by changing the chunk size (k) we can trade one for the other. On the other hand, I think it would be great to find the sweet spot of the aggregation configuration:
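As a rough, hypothetical illustration of those two dimensions (all constants in the sketch are assumptions, not measurements):

```rust
// Assumed model: per-chunk memory scales with the extended domain
// (k + 3 here, i.e. the degree-9 assumption above), and halving the rows
// halves a chunk's capacity, so the chunk count doubles for the same block.
fn main() {
    for (k, chunks) in [(26u32, 32u32), (25, 64), (24, 128)] {
        let gib = ((1u64 << (k + 3)) * 32) >> 30; // per extended polynomial
        println!("k = {k}: {chunks} chunks, ~{gib} GiB per extended poly each");
    }
    // Peak memory per prover falls with k, total proving work stays roughly
    // flat (same total rows), and aggregation cost grows with the chunk count.
}
```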
I was not sure it was the extended domain, but you clarified this. I agree with you!
Also, one straightforward thing that we can do for the time being (before thinking of merging columns and so on) is to check whether the FFT algorithm we use is sparse-friendly.
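To make "sparse-friendly" concrete, a hedged cost comparison (an asymptotic model only; real constants and memory traffic differ): a dense radix-2 FFT costs O(n log n) regardless of sparsity, while direct evaluation of a polynomial with s nonzero coefficients costs O(s · n), so direct evaluation only wins when s < log2(n):

```rust
// Illustrative cost model only.
fn direct_cost(s: u64, n: u64) -> u64 { s * n }         // s nonzero coefficients
fn fft_cost(n: u64) -> u64 { n * u64::from(n.ilog2()) } // dense radix-2 FFT

fn main() {
    let n = 1u64 << 26; // 2^26 evaluations, so log2(n) = 26
    for s in [4u64, 26, 1 << 10] {
        let winner = if direct_cost(s, n) < fft_cost(n) { "direct" } else { "FFT" };
        println!("s = {s}: {winner} is cheaper");
    }
}
```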
I am worried about the current architecture and the memory consumption it entails. I did a lower-bound estimate and found that the SuperCircuit with k=26 (that's 32 chunks for 30M gas) needs 21 TB of memory. See the results in #1763
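A hedged back-of-envelope of how an estimate of that shape composes, roughly total ≈ num_polys × 2^extended_k × 32 bytes; the polynomial count below is a hypothetical figure chosen for illustration, not the actual number from #1763:

```rust
// All inputs here are assumptions for illustration, not measured values.
fn main() {
    let extended_k = 29u32;                         // k = 26 plus degree headroom
    let bytes_per_poly = (1u64 << extended_k) * 32; // 16 GiB per extended poly
    let num_polys = 1_344u64;                       // hypothetical poly count
    let total_tib = (num_polys * bytes_per_poly) >> 40;
    println!("~{total_tib} TiB for a single SuperCircuit chunk proof");
}
```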
I think we need to discuss ways to reduce the memory consumption. Here are a few ideas: