
strictly use bounded types (for weight V2) #302

Open
brenzi opened this issue Mar 7, 2023 · 11 comments

@brenzi
Member

brenzi commented Mar 7, 2023

No description provided.

@pifragile
Contributor

general questions:

  • what exactly is the rationale behind using only bounded types? how is it connected to Weights v2?
  • is this basically about using BoundedVec and BoundedSlice, or is there anything else to do?
  • do we only need bounded types in storage? or do we also need to make sure that all operations in dispatchables have bounded runtime, i.e. constrain looping over storage maps?
  • what about storage double maps, or even plain maps? do they need to have bounded entries? e.g. iter_prefix can be unbounded...?
    --> we often have count variables, e.g. BootStrapperCount for maps. can we just bound those?
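For reference, the semantics of a bounded container can be sketched in plain Rust (this is a standalone illustration, not the actual frame_support::BoundedVec, which additionally enforces the bound at SCALE decode time):

```rust
// Minimal sketch of a capacity-bounded vector. Because the capacity MAX is a
// compile-time constant, the worst-case encoded size of a storage item holding
// this type is known statically, which is what weight v2's proof_size needs.
#[derive(Debug)]
struct BoundedVec<T, const MAX: usize> {
    inner: Vec<T>,
}

impl<T, const MAX: usize> BoundedVec<T, MAX> {
    fn new() -> Self {
        Self { inner: Vec::new() }
    }

    // try_push returns the rejected item instead of growing past the bound,
    // so callers are forced to handle the "full" case explicitly.
    fn try_push(&mut self, item: T) -> Result<(), T> {
        if self.inner.len() < MAX {
            self.inner.push(item);
            Ok(())
        } else {
            Err(item)
        }
    }

    fn len(&self) -> usize {
        self.inner.len()
    }
}

fn main() {
    let mut v: BoundedVec<u32, 2> = BoundedVec::new();
    assert!(v.try_push(1).is_ok());
    assert!(v.try_push(2).is_ok());
    assert!(v.try_push(3).is_err()); // bound enforced at the third push
    println!("len = {}", v.len());
}
```

The fallible push is the key difference to a plain Vec: every call site that grows the container has to state what happens when the bound is hit.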

overview of where we might need to take action:

balances: -

balances-tx-payment: -

bazaar:

  • is BusinessRegistry::<T>::iter_prefix(cid).collect() an issue? how can we bound the number of entries of a StorageDoubleMap?

ceremonies:

  • claim_rewards -> get_meetup_participants returns a Vec
  • attest_attendees takes a Vec
  • AttestationRegistry has a Vec type
  • update_inactivity_counters takes a Vec of cids; this will be fixed if pallet_communities::community_identifiers is bounded

communities:

  • locations storage has Vec<Location>
  • community_identifiers has Vec<CommunityIdentifier>

scheduler: -

references:

PR "Bound uses of Call":
paritytech/substrate#11649

BoundedVec documentation:
https://paritytech.github.io/substrate/master/frame_support/storage/bounded_vec/struct.BoundedVec.html

Example of a storage migration to BoundedVec:
https://github.com/paritytech/substrate/blob/9b43d8f891b1e53be2aa55687632ac6b17800cf8/frame/democracy/src/migrations.rs#L56

Storage migration documentation:
https://substrate-developer-hub.github.io/substrate-how-to-guides/docs/storage-migrations/nicks-migration/
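The core of such a migration can be sketched in plain Rust (hypothetical helper names; the linked democracy migration does the equivalent with decode/translate on real storage, and truncation vs. failing on overflow is a policy choice per item):

```rust
// Hypothetical migration step: fit the Vec currently in storage into the new
// bound MAX, truncating any excess entries and reporting how many were
// dropped so the migration can log it.
fn migrate_to_bounded<T, const MAX: usize>(mut old: Vec<T>) -> (Vec<T>, usize) {
    let dropped = old.len().saturating_sub(MAX);
    old.truncate(MAX); // the bounded container can hold at most MAX entries
    (old, dropped)
}

fn main() {
    let (bounded, dropped) = migrate_to_bounded::<u32, 3>(vec![1, 2, 3, 4, 5]);
    assert_eq!(bounded, vec![1, 2, 3]);
    println!("kept {}, dropped {}", bounded.len(), dropped);
}
```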

@brenzi
Member Author

brenzi commented Mar 30, 2023

weight v2 is really just about storage on parachains. The root problem is that the parachain PoV needs to include all state changes for a block. The bigger the PoV, the higher the effective weight for the relaychain validator to process it. Storage access therefore translates to computation time, and that computation may take longer than the weight suggests, which can make parachains stall. The old weight model does not reflect this, hence the change to weight v2.

There should be a talk by Shawn Tabrizi somewhere that explains all this
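The shape of the v2 weight can be sketched as follows (a standalone illustration; the real type is sp_weights::Weight, where ref_time is measured in picoseconds of reference-hardware execution and proof_size in bytes of PoV, which also answers the unit question below):

```rust
// Sketch of the two-dimensional weight introduced with weights v2: the old
// scalar weight becomes a pair, and a block is full as soon as EITHER
// dimension reaches its per-block limit.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Weight {
    ref_time: u64,   // computation, in picoseconds on reference hardware
    proof_size: u64, // PoV contribution, in bytes
}

impl Weight {
    const fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Self { ref_time, proof_size }
    }

    fn saturating_add(self, rhs: Self) -> Self {
        Self {
            ref_time: self.ref_time.saturating_add(rhs.ref_time),
            proof_size: self.proof_size.saturating_add(rhs.proof_size),
        }
    }

    // True if ANY dimension exceeds the limit, matching how block fullness
    // is checked: a cheap-to-compute but storage-heavy call can fill a block.
    fn any_gt(self, limit: Self) -> bool {
        self.ref_time > limit.ref_time || self.proof_size > limit.proof_size
    }
}

fn main() {
    let limit = Weight::from_parts(2_000_000_000_000, 5 * 1024 * 1024);
    let used = Weight::from_parts(1_000, 6 * 1024 * 1024);
    // Negligible computation, but the proof_size alone exceeds the limit.
    println!("exceeds = {}", used.any_gt(limit));
}
```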

@brenzi
Member Author

brenzi commented Mar 30, 2023

@brenzi
Member Author

brenzi commented Mar 30, 2023

on the StorageMap topic: I couldn't find docs, but the idea is that you limit the depth of the hashmap (and thereby its number of entries) to a number that will suffice for its purpose

@pifragile
Contributor

ok, my current understanding is:
as long as we limit the key and value types of storage maps to bounded types, the size of the entire map is bounded, and thus the size of a PoV is also bounded.

now my question remains: what is the practical use of this? because even with those constraints, a storage proof can still get infeasibly large.

also, what is the unit of proof_size in the new Weight type?

@brenzi i do not necessarily expect answers to those questions, i think it is good to have those thoughts for future reference. (of course, if you have the answers, they are appreciated :) )

@brenzi
Member Author

brenzi commented Apr 2, 2023

I disagree: bounding the key and value types of a map is not sufficient. You also need to limit how many entries the map can have; for a map, this translates to how many levels of a binary tree the map has. So, for each map, we will need to define an upper bound on the number of entries we will EVER need. That means estimating the storage needed if every human used encointer (the world's population might grow further, too), checking whether the weight then exceeds the current weight limit, and tuning down from there to a pragmatic level.
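One way to enforce such an entry bound is the count-variable approach mentioned earlier in the thread (e.g. BootStrapperCount alongside a map). A standalone sketch, using a plain HashMap and illustrative names rather than actual Encointer storage:

```rust
use std::collections::HashMap;

// Reject inserts of NEW keys once the map holds MAX_ENTRIES, so the entry
// count, and with it the map's worst-case proof size, stays bounded.
// Overwriting an existing key does not grow the map and remains allowed.
const MAX_ENTRIES: usize = 2;

fn try_insert(
    map: &mut HashMap<u32, u32>,
    key: u32,
    value: u32,
) -> Result<(), &'static str> {
    if map.len() >= MAX_ENTRIES && !map.contains_key(&key) {
        return Err("TooManyEntries");
    }
    map.insert(key, value);
    Ok(())
}

fn main() {
    let mut map = HashMap::new();
    assert!(try_insert(&mut map, 1, 10).is_ok());
    assert!(try_insert(&mut map, 2, 20).is_ok());
    assert!(try_insert(&mut map, 3, 30).is_err()); // new key rejected at bound
    assert!(try_insert(&mut map, 1, 11).is_ok());  // overwrite still allowed
    println!("entries = {}", map.len());
}
```

In FRAME the same effect is achieved by checking a stored counter (or using a counted map type) inside the dispatchable before inserting.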

@pifragile
Contributor

ok, that somehow also answers my question.

have you seen an example implementation of such a limit on the size of a storage map in the substrate repo? so far I have only seen examples where they migrate Vec storage values to BoundedVec and similar.

also, what do you mean by weight limit? where is it defined?

@brenzi
Member Author

brenzi commented Apr 4, 2023

I suggest you discuss both questions with Shawn Tabrizi (or better: ask on StackExchange). I don't know how far they are with examples and migrating their own stuff

@pifragile
Contributor

posted here

@clangenb
Member

I guess we can close this issue, or is something missing? @pifragile

@pifragile
Contributor

well, one thing is still open, namely limiting the depth of storage maps in general.
