strictly use bounded types (for weight V2) #302
Comments
general questions:

overview of where we might need to take action:
- balances:
- balances-tx-payment:
- bazaar:
- ceremonies:
- communities:
- scheduler:

references:
- PR "Bound uses of Call":
- BoundedVec documentation:
- Example of a storage migration to BoundedVec:
- Storage migration documentation:

(see the sketch after this list for what a bounded storage item and its migration can look like)
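For orientation, here is a minimal sketch, not taken from the encointer pallets, of what bounding a storage item can look like in FRAME: an unbounded `Vec` value is replaced by a `BoundedVec` whose capacity comes from a runtime constant, together with a one-shot migration that re-encodes any old unbounded value. All names (`Attendees`, `MaxAttendees`, `migrate_attendees_to_bounded`) are hypothetical, and `BoundedVec::truncate_from` assumes a reasonably recent `frame_support`.

```rust
use frame_support::{pallet_prelude::*, weights::Weight, BoundedVec};
use sp_std::vec::Vec;

#[frame_support::pallet]
pub mod pallet {
    use super::*;

    #[pallet::config]
    pub trait Config: frame_system::Config {
        /// Hard upper bound on how many attendees we will ever store.
        #[pallet::constant]
        type MaxAttendees: Get<u32>;
    }

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    /// Bounded replacement for a former `Vec<T::AccountId>` storage value.
    #[pallet::storage]
    pub type Attendees<T: Config> =
        StorageValue<_, BoundedVec<T::AccountId, T::MaxAttendees>, ValueQuery>;
}

/// One-shot migration: decode the old unbounded `Vec` under the same storage key
/// and truncate it into the new `BoundedVec`, dropping any excess entries.
pub fn migrate_attendees_to_bounded<T: pallet::Config>() -> Weight {
    let _ = pallet::Attendees::<T>::translate::<Vec<T::AccountId>, _>(|maybe_old| {
        maybe_old.map(BoundedVec::truncate_from)
    });
    T::DbWeight::get().reads_writes(1, 1)
}
```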
weight v2 is really just about storage on parachains. The root problem is that the parachain PoV needs to include all state changes for a block. The bigger the PoV, the higher the effective weight for the relay chain validator to process it. Storage thus translates to computation time, and that computation may take longer than the weight suggests, which can make parachains stall. The current weight model does not reflect this, hence the change to weight v2. There should be a talk by Shawn Tabrizi somewhere that explains all this.
maybe this helps: https://forum.polkadot.network/t/weight-v2-discussion-and-updates/227
on the StorageMap topic: I couldn't find docs, but the idea is that you limit the depth of the hashmap (the number of levels) to a number that will suffice for its purpose.
ok, my current understanding is:

now my question remains: what is the practical use of this? Because even with those constraints, a storage proof can still get infeasibly large. Also, what is the unit of

@brenzi I do not necessarily expect answers to these questions; I think it is good to have these thoughts written down for future reference. (Of course, if you have the answers, they are appreciated :) )
I disagree: bounding the key and value types of a map is not sufficient; you also need to limit how many entries the map can have. For a map, this translates to how many levels of a binary tree the map has. So for each map we will need to define an upper bound on the number of entries we will EVER need. We should estimate the storage needed if every human uses encointer (the world's population might grow further too), check whether the resulting weight exceeds the current weight limit, and tune down from there to a pragmatic level.
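To make that concrete, here is a minimal sketch (hypothetical names, not encointer code) of one way to cap the number of entries a map can ever hold: a `CountedStorageMap` tracks how many entries exist, and inserts are refused once a configured maximum is reached. `Entries`, `MaxEntries` and `try_insert` are made up for illustration; choosing `MaxEntries` is exactly the estimation exercise described above.

```rust
use frame_support::pallet_prelude::*;

#[frame_support::pallet]
pub mod pallet {
    use super::*;

    #[pallet::config]
    pub trait Config: frame_system::Config {
        /// The largest number of entries this map may ever hold, sized so that
        /// the resulting storage proof stays within the block weight limit.
        #[pallet::constant]
        type MaxEntries: Get<u32>;
    }

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    /// A map whose entry count is tracked automatically by the runtime.
    #[pallet::storage]
    pub type Entries<T: Config> =
        CountedStorageMap<_, Blake2_128Concat, T::AccountId, u128, OptionQuery>;

    #[pallet::error]
    pub enum Error<T> {
        /// The configured upper bound on entries has been reached.
        TooManyEntries,
    }

    impl<T: Config> Pallet<T> {
        /// Insert only while the map is below its configured bound.
        pub fn try_insert(who: T::AccountId, value: u128) -> Result<(), Error<T>> {
            ensure!(
                Entries::<T>::count() < T::MaxEntries::get(),
                Error::<T>::TooManyEntries
            );
            Entries::<T>::insert(&who, value);
            Ok(())
        }
    }
}
```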
ok, that somehow also answers my question. Have you seen an example of such a limitation of the size of a storage map in the substrate repo? So far I have only seen examples where they migrate.

Also, what do you mean by the weight limit? Where is it defined?
I suggest you discuss both questions with Shawn Tabrizi (or better: ask on StackExchange). I don't know how far they are with examples and migrating their own stuff.
posted here
I guess we can close this issue, or is something missing? @pifragile
well, one thing is still open, namely limiting the depth of storage maps in general. |