Draft (WIP): Initial implementation of the propagation reactor #1619

Draft · wants to merge 15 commits into base: feature/recovery

Conversation

@rach-id (Member) commented on Feb 19, 2025

Description

Please add a description of the changes that this PR introduces and the files that
are the most critical to review.

If this PR is non-trivial/large/complex, please ensure that you have either
created an issue that the team's had a chance to respond to, or had some
discussion with the team prior to submitting substantial pull requests. The team
can be reached via GitHub Discussions or the Cosmos Network Discord server in
the #cometbft channel. GitHub Discussions is preferred over Discord as it
allows us to keep track of conversations topically.
https://github.com/cometbft/cometbft/discussions

If the work in this PR is not aligned with the team's current priorities, please
be advised that it may take some time before it is merged - especially if it has
not yet been discussed with the team.

See the project board for the team's current priorities:
https://github.com/orgs/cometbft/projects/1

PR checklist

  • Tests written/updated
  • Changelog entry added in .changelog (we use
    unclog to manage our changelog)
  • Updated relevant documentation (docs/ or spec/) and code comments

@rach-id rach-id requested a review from a team as a code owner February 19, 2025 14:16
@rach-id rach-id requested review from rootulp and ninabarbakadze and removed request for a team February 19, 2025 14:16
@rach-id rach-id marked this pull request as draft February 19, 2025 14:16
@rach-id rach-id requested review from evan-forbes and removed request for rootulp and ninabarbakadze February 19, 2025 14:16
@evan-forbes (Member) left a comment


I'd also rename the reactor file to be lowercase.

I think this is a great foundation, though!

Comment on lines +26 to +33
// Channel is the channel ID used by the propagation reactor.
Channel = byte(0x50)

// DataChannel and StateChannel are duplicates of the consensus reactor's
// channel IDs, added as a temporary fix for circular dependencies.
// TODO: fix redefining these values here.
DataChannel = byte(0x21)
StateChannel = byte(0x20)

Documenting a point from a sync discussion:

The reactors need new channels, and should only use new channels to remain compatible. We need to communicate with the consensus reactor via the sync routine and an interface that handles the valid block message, etc.
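
A minimal sketch of what such an interface could look like; the ConsensusSync name and the HandleValidBlock signature are assumptions, not part of this diff:

package propagation

// ConsensusSync is a hypothetical interface the propagation reactor could
// depend on instead of consensus.State, breaking the circular dependency.
// The sync routine would drive it once a block's parts are fully recovered.
type ConsensusSync interface {
	// HandleValidBlock notifies consensus that a complete, valid block is
	// available at the given height and round.
	HandleValidBlock(height int64, round int32)
}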

Comment on lines +47 to +50
self p2p.ID
}

func NewReactor(consensusState *consensus.State, options ...ReactorOption) *Reactor {
func NewReactor(options ...ReactorOption) *Reactor {
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

We could pass self as a non-optional argument since it's always required.
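
A minimal sketch of that change, assuming the Reactor and ReactorOption types from this diff and cometbft's p2p.ID:

// NewReactor takes self as a required argument rather than an option.
func NewReactor(self p2p.ID, options ...ReactorOption) *Reactor {
	reactor := &Reactor{self: self}
	for _, option := range options {
		option(reactor)
	}
	return reactor
}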

Comment on lines +47 to +50
self p2p.ID
}

func NewReactor(consensusState *consensus.State, options ...ReactorOption) *Reactor {
func NewReactor(options ...ReactorOption) *Reactor {
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

are we initializing the state here?

Comment on lines +32 to +33
DataChannel = byte(0x21)
StateChannel = byte(0x20)
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

We need at least two channels: 1) recovery parts, compact blocks/proposals, and haves must be in the same channel; 2) it's likely advantageous to have another channel for wants.
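
A sketch of that split; only 0x50 appears in this diff, so the 0x51 value for the wants channel is a placeholder:

const (
	// PropagationChannel carries compact blocks/proposals, haves, and
	// recovery parts, which need to share a channel to preserve ordering.
	PropagationChannel = byte(0x50)

	// WantChannel carries want messages separately so that small wants are
	// not queued behind large recovery parts. 0x51 is a placeholder value.
	WantChannel = byte(0x51)
)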

Comment on lines +117 to +129
case Channel:
switch msg := msg.(type) {
case *types2.TxMetaData:
// TODO: implement
case *types2.CompactBlock:
// TODO: implement
case *types2.PartMetaData:
// TODO: implement
case *types2.HaveParts:
// TODO: implement
case *types2.WantParts:
// TODO: implement
case *types2.RecoveryPart:
case *proptypes.CompactBlock:
// TODO: implement
case *proptypes.HaveParts:
// TODO check if we need to bypass request limits
blockProp.handleHaves(e.Src.ID(), msg, false)
case *proptypes.WantParts:
blockProp.handleWants(e.Src.ID(), msg.Height, msg.Round, msg.Parts)
case *proptypes.RecoveryPart:
blockProp.handleRecoveryPart(e.Src.ID(), msg)
default:
blockProp.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg)))

I think we also need to have the ability to handle proposals

// HandleProposal adds a proposal to the data routine. This should be called any
// time a proposal is received from a peer or when a proposal is created. If the
// proposal is new, it will be stored and broadcast to the relevant peers.
func (blockProp *Reactor) HandleProposal(proposal *types.Proposal, from p2p.ID, haves *bits.BitArray) {
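
For example, the Receive switch above could dispatch a proposal message to it. This fragment is only a sketch: the proptypes.Proposal message type and its Haves field are assumptions, not part of this diff.

case *proptypes.Proposal:
	// Hypothetical message carrying the proposal plus the sender's have-bits;
	// forward it to HandleProposal for storage and gossip.
	blockProp.HandleProposal(&msg.Proposal, e.Src.ID(), msg.Haves)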

I think we could actually include the compact block in the proposal in a non-breaking way if we just add a field that isn't hashed or signed over (so literally don't touch the signBytes or hash functions, just add the field to the proto type).

This way, peers that are running the new reactor will send us proposals with compact blocks, which is nice: we get both in the state at the same time. Peers not running the new reactor will just ignore the field in those proposals.
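
A rough Go-level sketch of that idea (the real change would be in the protobuf definition); the CompactBlock field name is an assumption, and the existing Proposal field types are abbreviated:

type Proposal struct {
	Type      SignedMsgType
	Height    int64
	Round     int32
	POLRound  int32
	BlockID   BlockID
	Timestamp time.Time
	Signature []byte

	// CompactBlock is deliberately excluded from the sign bytes and hash,
	// so existing signatures stay valid and peers that don't run the
	// propagation reactor simply ignore it.
	CompactBlock *CompactBlock
}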
