feat: use weight for setting storage capacity per network #1388
Conversation
@morph-dev Some of the tests are failing, could you fix them before a review, please?
Sorry about that. Some peertests were failing and the fix wasn't the simplest, so it took me some time (some were relying on a specific storage capacity). I ended up refactoring how peertests are initialized by allowing a custom set of subnetworks (instead of all 3 all the time). I also noticed that the capacity weight should be applied to bytes instead of MB, so I fixed that as well.
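To illustrate the bytes-vs-MB point, here is a minimal sketch (not this PR's actual code; the function name and the 1 MB = 10^6 bytes assumption are illustrative):

```rust
// Hypothetical sketch: why the weight should be applied to bytes rather than
// whole megabytes. Weighting MB first can truncate small shares to zero.
fn capacity_for_subnetwork(total_capacity_mb: u64, weight: u64, total_weight: u64) -> u64 {
    let total_bytes = total_capacity_mb * 1_000_000; // assuming 1 MB = 10^6 bytes
    total_bytes * weight / total_weight // apply the weight after converting to bytes
}

fn main() {
    // 50 MB total with a 1/100 weight: 500_000 bytes here, whereas
    // (50 * 1 / 100) MB in integer arithmetic would have truncated to 0 MB.
    assert_eq!(capacity_for_subnetwork(50, 1, 100), 500_000);
}
```

Weighting in bytes preserves sub-megabyte shares that MB-granularity arithmetic would silently drop.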
What's the reasoning for the current weights of […]?
Just seems like the correct order of magnitude if we were to store everything (all blocks vs. all state tries). As I said in the PR description, it's a placeholder and up for discussion. I'm also fine with any of […], but I wouldn't go lower than that. @njgheorghita What would you propose?
🚀
info!("Testing ping for history cross mainnet and angelfood discv5 protocol id"); | ||
let bootnode_enr = peertest.bootnode.enr.clone(); | ||
if let Ok(pong) = HistoryNetworkApiClient::ping(target, bootnode_enr).await { | ||
let bootnode_enr = angelfood_node.enr.clone(); |
nitpick: maybe `angelfood_target` instead of `bootnode_enr`
```rust
HISTORY_NETWORK => Self::HISTORY_CAPACITY_WEIGHT,
STATE_NETWORK => Self::STATE_CAPACITY_WEIGHT,
BEACON_NETWORK => Self::BEACON_CAPACITY_WEIGHT,
_ => panic!("Invalid subnetwork: {subnetwork}"),
```
And if so, maybe also for `Network` (mainnet/angelfood)... working with enums is a bit nicer than using raw strings in various places.
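For illustration, a hedged sketch of what the enum-based version suggested here could look like (the names, derives, and the beacon weight below are assumptions, not this PR's code; the idea was later tracked in #1392):

```rust
// Illustrative only: enum alternatives to the raw-string matching above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Subnetwork {
    History,
    State,
    Beacon,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Network {
    Mainnet,
    Angelfood,
}

impl Subnetwork {
    fn capacity_weight(self) -> u64 {
        // Exhaustive match, so the panic arm for unknown strings goes away.
        match self {
            Subnetwork::History => 1, // HISTORY_CAPACITY_WEIGHT in the diff
            Subnetwork::State => 99,  // per the 99/1 split discussed below
            Subnetwork::Beacon => 0,  // placeholder; real value isn't in this excerpt
        }
    }
}

fn main() {
    assert_eq!(Subnetwork::History.capacity_weight(), 1);
    let _ = Network::Mainnet; // Network shown only to mirror the suggestion
}
```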
I was thinking the same, but I want to get this PR in ASAP (so I can get another PR in before the release this week). So I will leave this for the future (created #1392).
```rust
}

impl PortalStorageConfigFactory {
    const HISTORY_CAPACITY_WEIGHT: u64 = 1;
```
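As a sketch of how such a factory might divide one total capacity across whichever subnetworks are enabled, under the weight idea this diff introduces (the struct name, field, and method below are assumptions for illustration, not the PR's actual implementation):

```rust
// Hedged sketch, not trin's real PortalStorageConfigFactory: each enabled
// subnetwork gets total * (its weight / sum of enabled weights).
struct StorageConfigFactorySketch {
    total_capacity_bytes: u64,
}

impl StorageConfigFactorySketch {
    fn capacity_bytes_for(&self, weight: u64, enabled_weights: &[u64]) -> u64 {
        let total_weight: u64 = enabled_weights.iter().sum();
        self.total_capacity_bytes * weight / total_weight
    }
}

fn main() {
    let factory = StorageConfigFactorySketch { total_capacity_bytes: 100_000_000 };
    // With only history (weight 1) and state (weight 99) enabled:
    assert_eq!(factory.capacity_bytes_for(1, &[1, 99]), 1_000_000);
    assert_eq!(factory.capacity_bytes_for(99, &[1, 99]), 99_000_000);
}
```

Summing only the enabled weights matches the peertest refactor mentioned above, where a custom subset of subnetworks can be active.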
Re: 99/1 (state/history capacity)... I guess I don't have any good, sound reasoning for why it shouldn't be that, just that this seems a little high... But it is true that the network has much higher storage demands for state than for history data, and as long as we don't update our current mainnet 4444s nodes to support the state network, then 99/1 is fine by me...
Though, I wonder if we want to introduce some concept of a minimum (total) capacity? Idk, like 100mb? (though the only repercussion I can think of is that this minimum would break this test)
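A minimal sketch of what such a floor might look like, assuming a hypothetical constant (the name and the ~100 MB value are just the suggestion above, not anything in this PR):

```rust
// Hypothetical: clamp the configured total capacity to a minimum floor.
const MIN_TOTAL_CAPACITY_BYTES: u64 = 100_000_000; // ~100 MB, per the suggestion above

fn effective_total_capacity(configured_bytes: u64) -> u64 {
    configured_bytes.max(MIN_TOTAL_CAPACITY_BYTES)
}

fn main() {
    // A 10 MB configuration would be raised to the floor.
    assert_eq!(effective_total_capacity(10_000_000), MIN_TOTAL_CAPACITY_BYTES);
}
```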
We should definitely not enable the state network on the 4444 nodes. That would be bad regardless of the ratio we pick here.
With that in mind, I will leave the ratio as 99/1 for now (we can easily change it if there is a need for it).
I don't have a strong opinion about enforcing a minimum capacity. It's something we can discuss in the next meeting, but I would leave it out of this PR.
What was wrong?
Currently, we only have one flag for setting storage capacity, and it applies to each subnetwork separately. See #1387 for more info.
How was it fixed?
As discussed in the meeting, we will use a predefined weight per subnetwork (for the short/medium term).
I put the following weights in as a placeholder. Let me know if you have other suggestions:
To-Do