feat(MessageQueueV2): add reading of MessageQueueV2 and transition logic from V1 to V2 for EuclidV2 #1108
Conversation
Actionable comments posted: 0
🧹 Nitpick comments (7)
rollup/sync_service/bridge_client.go (2)
61-70: BridgeClient struct assignment block. All fields are assembled in one go. Consider adding inline doc comments if you foresee more complexity or if the meaning of each struct field might be unclear to new contributors.
134-134: Trace log for newly received L1 queue transaction events (V2). Reporting the event in logs is helpful for debugging. Consider using structured logging fields for improved traceability.
rollup/sync_service/sync_service.go (1)
81-98: Resetting synced height to re-fetch V2 messages. This logic handles partial upgrades after V2 deployment. It avoids missing any V2 messages by rolling back the sync to the V2 deployment block. Great for correctness. If re-fetching many blocks, confirm performance is acceptable at large chain heights.
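The rollback described above can be sketched as follows. This is a minimal, hypothetical stand-in for the PR's actual code: `resetSyncHeight` and its parameters are illustrative names, not the sync service's real API.

```go
package main

import "fmt"

// resetSyncHeight returns the L1 block from which syncing should resume.
// If the node has not yet recorded a V2 start index but has already synced
// past the V2 deployment block, it rolls back to the deployment block so
// that no V2 messages are missed; otherwise it continues where it left off.
func resetSyncHeight(latestProcessedBlock, v2DeploymentBlock uint64, v2StartIndexKnown bool) uint64 {
	if !v2StartIndexKnown && latestProcessedBlock >= v2DeploymentBlock {
		// Re-fetch from just before the deployment block so the next
		// fetch iteration starts exactly at the deployment block.
		return v2DeploymentBlock - 1
	}
	return latestProcessedBlock
}

func main() {
	fmt.Println(resetSyncHeight(1500, 1000, false)) // upgraded late: roll back to 999
	fmt.Println(resetSyncHeight(1500, 1000, true))  // already tracking V2: stay at 1500
	fmt.Println(resetSyncHeight(500, 1000, false))  // not yet at deployment block: stay at 500
}
```

The re-fetched range is bounded by the distance between the deployment block and the current head, which is why the comment above asks about performance at large chain heights.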
core/block_validator.go (1)
143-185: New EuclidV2 validations and L1MessageQueueV2 usage. This added conditional path (lines 143–185) enforces no skipped L1 messages and uses the V2 iterator. Key observations:
- The contiguous block check for L1 messages is stricter, which is correct under V2 rules.
- Immediate returns on missing or unknown L1 messages help preserve chain correctness.
- Overall logic is consistent with the shift from V1 to V2 at the fork time.
Confirm that any reorg or sidechain scenarios handle partially included V2 messages gracefully. Also ensure thorough fork tests around the block time for EuclidV2.
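The "no skipped L1 messages" rule referenced above can be sketched as a contiguity check over queue indices. The function name and shape are illustrative assumptions, not the validator's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// checkL1MessagesContiguous enforces the EuclidV2-style rule that L1
// messages in a block may not be skipped: the first message must carry the
// next expected queue index, and every following message must increment it
// by exactly one.
func checkL1MessagesContiguous(queueIndices []uint64, nextExpected uint64) error {
	for i, qi := range queueIndices {
		if qi != nextExpected+uint64(i) {
			return errors.New("L1 message skipped or out of order")
		}
	}
	return nil
}

func main() {
	fmt.Println(checkL1MessagesContiguous([]uint64{5, 6, 7}, 5)) // contiguous: <nil>
	fmt.Println(checkL1MessagesContiguous([]uint64{5, 7}, 5))    // index 6 skipped: error
}
```

Returning an error immediately, as the review notes, is what preserves chain correctness: a block containing a gap is rejected rather than partially applied.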
core/rawdb/accessors_l1_message.go (1)
144-145: Fix minor grammatical issue. The comment says "all L1 message" but should read "all L1 messages."
-// all L1 message in the database
+// all L1 messages in the database
params/config.go (2)
692-693: Verify the TODO comments. The TODO comments indicate that the address and deployment block for MessageQueueV2 are not yet known. Please ensure these values are updated before deploying to production. Do you want me to generate a script to track these TODO comments or open a new issue to track this task?
703-704: Update the String() method to include the new fields. The String() method should be updated to include the new L1MessageQueueV2Address and L1MessageQueueV2DeploymentBlock fields for better debugging and logging. Apply this diff to update the method:
-return fmt.Sprintf("{l1ChainId: %v, l1MessageQueueAddress: %v, numL1MessagesPerBlock: %v, ScrollChainAddress: %v}",
-	c.L1ChainId, c.L1MessageQueueAddress.Hex(), c.NumL1MessagesPerBlock, c.ScrollChainAddress.Hex())
+return fmt.Sprintf("{l1ChainId: %v, l1MessageQueueAddress: %v, l1MessageQueueV2Address: %v, l1MessageQueueV2DeploymentBlock: %v, numL1MessagesPerBlock: %v, ScrollChainAddress: %v}",
+	c.L1ChainId, c.L1MessageQueueAddress.Hex(), c.L1MessageQueueV2Address.Hex(), c.L1MessageQueueV2DeploymentBlock, c.NumL1MessagesPerBlock, c.ScrollChainAddress.Hex())
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- cmd/geth/main.go (1 hunks)
- cmd/utils/flags.go (2 hunks)
- core/block_validator.go (1 hunks)
- core/blockchain.go (1 hunks)
- core/rawdb/accessors_l1_message.go (4 hunks)
- core/rawdb/accessors_l1_message_test.go (4 hunks)
- core/rawdb/schema.go (1 hunks)
- go.mod (1 hunks)
- miner/scroll_worker.go (1 hunks)
- miner/scroll_worker_test.go (2 hunks)
- node/config.go (1 hunks)
- params/config.go (1 hunks)
- params/version.go (1 hunks)
- rollup/sync_service/bindings.go (1 hunks)
- rollup/sync_service/bridge_client.go (3 hunks)
- rollup/sync_service/sync_service.go (5 hunks)
✅ Files skipped from review due to trivial changes (1)
- rollup/sync_service/bindings.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
🔇 Additional comments (46)
rollup/sync_service/bridge_client.go (15)
19-28: Added fields for V1/V2 message queue addresses in the BridgeClient struct. These new fields (e.g., enableMessageQueueV2, l1MessageQueueV2Address) adequately separate V1 vs. V2 logic. Make sure that any future additions (e.g., further queue versions) use similarly clear naming for consistency.
30-35: Validate address arguments upon constructing BridgeClient. The additional checks for a non-zero l1MessageQueueV2Address when enableMessageQueueV2 is true provide a robust guard against misconfiguration. This is good defensive programming.
47-50: Check FiltererV1 initialization error handling. Lines 47-50 properly return a contextual error if NewL1MessageQueueFilterer for V1 fails to initialize. Confirm that the caller logs or handles this error at a higher level, so that initialization failures are not silently ignored.
52-57: Initialize FiltererV2 conditionally. This code path ensures the V2 filterer is created only if enableMessageQueueV2 is set, avoiding extraneous resource usage. The error handling and returned message are concise.
77-77: Function signature extends fetchMessagesInRange to include queryL1MessagesV1. This additional boolean parameter clarifies whether to fetch V1 messages, promoting explicit version checks for L1 messages. Validate that other call sites in the codebase set this parameter correctly.
80-81: Initialize separate slices msgsV1 and msgsV2. A great approach for storing V1 vs. V2 messages distinctly. This separation improves clarity and helps avoid confusion about which version a message belongs to.
88-97: Conditional fetching of V1 messages. The logic wrapped in if queryL1MessagesV1 { ... } is straightforward and correct. Be sure to test boundary conditions where V2 messages may appear just after a fork block, ensuring that V1 fetching is gracefully disabled at the right time.
100-102: Gas limit type validation (V1). Enforcing event.GasLimit.IsUint64() is good to avoid unsafe type conversions. The error message is descriptive.
104-111: Appending new L1MessageTx entries for V1. The appended fields (QueueIndex, Gas, etc.) appear correct. Confirm that you handle potentially large arrays in memory if the block range is large.
114-117: Handling QueueTransaction filter iterator errors (V1). Catching it.Error() ensures you detect partial iteration failures. The immediate return is correct to prevent corrupt data from being processed.
119-122: Disable or bypass V2 queries. The check if !c.enableMessageQueueV2 helps maintain backward compatibility or support testing scenarios where V2 is off. This is a neat feature toggle.
125-130: FiltererV2 usage and error handling. Lines 125-130 consistently mirror V1's approach. Good job returning the same triple-value signature to unify error handling.
137-137: Gas limit type validation (V2). The same robust check as V1. This consistency ensures fewer pitfalls around big-integer usage.
140-140: Appending new L1MessageTx entries for V2. Similar logic to V1. Keep an eye on memory usage if large volumes of V2 messages appear in a short timeframe.
154-154: Final return statement. Returning separate slices for V1 and V2 is clear, making downstream usage easier.
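The fetch pattern discussed in the comments above — one pass that returns V1 and V2 messages in separate slices, with V1 fetching gated by a boolean and V2 fetching gated by a feature toggle — can be sketched like this. The event sources and the trimmed-down message type are simplified stand-ins for the PR's actual bridge client:

```go
package main

import "fmt"

// l1MessageTx is a trimmed-down stand-in for the real L1 message type.
type l1MessageTx struct {
	QueueIndex uint64
	Gas        uint64
}

// fetchMessagesInRange mimics the reviewed shape: it returns V1 and V2
// messages as separate slices so callers never confuse the two versions.
// V1 fetching is skipped once queryL1MessagesV1 is false, and V2 fetching
// is skipped entirely when the V2 toggle is off.
func fetchMessagesInRange(v1Events, v2Events []l1MessageTx, queryL1MessagesV1, enableV2 bool) (msgsV1, msgsV2 []l1MessageTx) {
	if queryL1MessagesV1 {
		msgsV1 = append(msgsV1, v1Events...)
	}
	if !enableV2 {
		return msgsV1, nil // V2 disabled: behave like the pre-V2 code path
	}
	msgsV2 = append(msgsV2, v2Events...)
	return msgsV1, msgsV2
}

func main() {
	v1 := []l1MessageTx{{QueueIndex: 1, Gas: 21000}}
	v2 := []l1MessageTx{{QueueIndex: 2, Gas: 21000}}
	m1, m2 := fetchMessagesInRange(v1, v2, true, true)
	fmt.Println(len(m1), len(m2)) // 1 1
}
```

Keeping the two result slices separate, rather than merging them, is what lets downstream code apply version-specific rules without re-inspecting each message.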
rollup/sync_service/sync_service.go (3)
68-68: Extended newBridgeClient call with V2 parameters. Passing !nodeConfig.L1DisableMessageQueueV2 is a straightforward toggle. Ensure that the config is typically false in non-test environments so that V2 is enabled.
251-252: Conditional query of V1 messages. Line 251 checks l1MessageV2StartIndex == nil to decide whether we should continue querying V1. This is consistent with the transition logic from V1 to V2.
262-270: Recording the first L1MessageV2 queue index and block. Storing this only once in the database is sound. We see a robust approach to avoid rewriting the start index. Make sure you have test coverage verifying that sequential restarts pick up from the correct index.
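The write-once behavior praised above can be illustrated with a small sketch; the in-memory store and method names here are hypothetical substitutes for the rawdb accessors:

```go
package main

import "fmt"

// startIndexStore records the first observed V2 queue index exactly once;
// later writes are ignored so restarts cannot move the transition point.
type startIndexStore struct {
	v2StartIndex *uint64
}

func (s *startIndexStore) recordV2StartIndex(idx uint64) {
	if s.v2StartIndex != nil {
		return // already recorded: keep the original transition point
	}
	v := idx
	s.v2StartIndex = &v
}

func main() {
	var s startIndexStore
	s.recordV2StartIndex(100)
	s.recordV2StartIndex(200)    // ignored: the index is already set
	fmt.Println(*s.v2StartIndex) // 100
}
```

A restart that replays the same L1 range would call the record function again with the same (or a later) index, and the guard ensures the stored transition point never changes.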
core/rawdb/accessors_l1_message.go (11)
211-215: Looks good. No obvious issues with the struct introduction.
217-225: No concerns detected. The iterator creation logic appears consistent with the rest of the codebase.
227-239: Single-iteration pattern is acceptable. This approach of returning on the first iteration call is consistent with existing usage in the underlying iterator pattern.
241-245: Header comment is clear. No additional feedback.
247-257: Iterator creation logic is consistent. No issues found.
259-265: Condition checks look good. Returning false if v2StartIndex is nil ensures no accidental iteration over V2 messages.
267-307: Implementation appears correct. The sanity checks and iteration mechanism for V1 messages are clear. Logging a critical error (log.Crit) is a deliberate choice.
309-350: Similar logic to V1; no issues. Matches the established iteration pattern for V2.
378-386: Write function seems fine. Database storage for the V2 start index is straightforward and uses existing patterns.
388-408: Read function is straightforward. It handles the nil case appropriately.
410-439: No concerns. Writing and reading the first L1 block number for V2 appear consistent with the existing approach.
miner/scroll_worker.go (1)
439-467: Verify block time rewriting logic. Rewriting w.current.header.Time to the parent's timestamp is a key transition mechanism from V1 to V2. However, it might allow creating a block whose timestamp does not strictly increase relative to its parent. Ensure that your consensus rules allow equal timestamps when discarding leftover V1 messages. Otherwise, consider verifying the parent's timestamp to avoid potential ordering violations.
miner/scroll_worker_test.go (3)
20-27: Import changes approved. Adding "fmt" and "gomonkey" is appropriate for testing.
1242-1243: Utility function is fine. The helper function newUint64 is clear and does not introduce any issues.
1251-1358: Test logic thoroughly covers the EuclidV2 transition scenario. This test effectively validates that V1 messages are consumed before switching to V2, ensuring correct backdating of block timestamps. The approach of patching time.Now() with gomonkey is an acceptable pattern for such functional tests.
params/version.go (1)
27-27: LGTM! Version increment follows semantic versioning. The patch version increment from 7 to 8 is appropriate for introducing new functionality without breaking changes.
core/rawdb/accessors_l1_message_test.go (5)
7-7: LGTM! Good choice of assertion library. Using require from testify is a good choice as it provides clearer failure messages and stops test execution on failure.
77-77: LGTM! Appropriate function visibility change. Changing IterateL1MessagesFrom to iterateL1MessagesFrom correctly reflects its internal usage scope.
109-109: LGTM! Function name updates reflect versioning. The function name changes from ReadL1MessagesFrom to ReadL1MessagesV1From clearly indicate the version compatibility. Also applies to: 156-156.
163-192: LGTM! Comprehensive test for V1 message iteration. The test thoroughly verifies that:
- V1 messages are correctly iterated up to the V2 start index
- Iteration stops at the V2 start index
- Error cases are handled appropriately
194-235: LGTM! Thorough test coverage for V2 message iteration. The test effectively covers:
- No V2 messages in the database
- V2 messages with a start-index mismatch
- Successful V2 message iteration
core/rawdb/schema.go (1)
112-113: LGTM! Well-defined schema keys for V2 message handling. The new keys follow the established naming convention and are appropriately placed in the L1 message store section:
- l1MessageV2StartIndexKey: tracks the first index of V2 messages
- l1MessageV2FirstL1BlockNumberKey: tracks the first L1 block number with V2 messages
cmd/geth/main.go (1)
172-172: LGTM! Flag addition aligns with PR objectives. The L1DisableMessageQueueV2Flag is appropriately added to enable regression testing by allowing the V2 message queue to be disabled.
node/config.go (1)
200-201: LGTM! The new configuration field is properly integrated with the TOML serialization system, and its purpose is clearly documented. The field aligns with the PR objectives by providing a way to disable MessageQueueV2 for regression testing.
cmd/utils/flags.go (2)
857-860: LGTM! Well-defined flag for MessageQueueV2 control. The flag definition follows the established patterns and naming conventions, with a clear and descriptive usage message.
1415-1417: LGTM! Proper flag value assignment. The implementation correctly assigns the flag value to the config, following the established pattern for L1-related flags.
core/blockchain.go (1)
1856-1866: LGTM! Essential validations for the L1 message queue transition. The added validation steps are crucial for ensuring:
- Correct ordering and continuity of L1 messages during the V1 to V2 transition.
- Consistent block state after message processing.
go.mod (1)
8-8: LGTM! Adding gomonkey for testing. The addition of the gomonkey library (v2.12.0) will help with mocking in tests, particularly for testing the MessageQueueV2 transition logic.
Actionable comments posted: 1
🧹 Nitpick comments (4)
rollup/sync_service/sync_service.go (2)
81-97: LGTM! Consider adding more detailed logging. The reset logic for handling nodes that upgraded after V2 deployment is well-implemented. It ensures no V2 messages are missed during the transition. Consider adding more detailed logging when resetting the synced height:
-log.Info("Resetting L1 message sync height to fetch previous V2 messages", "L1 block", latestProcessedBlock)
+log.Info("Resetting L1 message sync height to fetch previous V2 messages",
+	"L1 block", latestProcessedBlock,
+	"previous_block", s.latestProcessedBlock,
+	"v2_deployment_block", genesisConfig.Scroll.L1Config.L1MessageQueueV2DeploymentBlock)
262-269: LGTM! Consider adding more metrics. The logic for writing the V2 start index and first block number is well-implemented. The code ensures this is a one-time operation. Consider adding metrics to track V2 message processing:
+if len(msgsV2) > 0 && l1MessageV2StartIndex == nil {
+	l1MessageV2Counter.Inc(int64(len(msgsV2)))
core/rawdb/accessors_l1_message.go (2)
211-244: Consider optimizing V2 start index checks. The V1 iterator implementation is correct, but checking the V2 start index in every iteration might be inefficient. Consider caching the V2 start index for a short duration to reduce database reads:
+const v2StartIndexCacheDuration = 1 * time.Second
+
 type L1MessageV1Iterator struct {
 	db           ethdb.Database
 	v2StartIndex *uint64
+	lastCheck    time.Time
 	L1MessageIterator
 }

 func (it *L1MessageV1Iterator) Next() bool {
 	for it.L1MessageIterator.Next() {
-		if it.v2StartIndex == nil {
+		if it.v2StartIndex == nil && time.Since(it.lastCheck) >= v2StartIndexCacheDuration {
 			it.v2StartIndex = ReadL1MessageV2StartIndex(it.db)
+			it.lastCheck = time.Now()
 		}
384-413: LGTM! Consider adding validation. The V2 start index functions are well-implemented and follow consistent patterns. Consider adding validation to ensure the V2 start index is greater than zero:
 func WriteL1MessageV2StartIndex(db ethdb.KeyValueWriter, queueIndex uint64) {
+	if queueIndex == 0 {
+		log.Crit("Invalid L1MessageV2 start index", "queueIndex", queueIndex)
+	}
 	value := big.NewInt(0).SetUint64(queueIndex).Bytes()
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- core/rawdb/accessors_l1_message.go (4 hunks)
- core/rawdb/schema.go (1 hunks)
- rollup/sync_service/sync_service.go (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- core/rawdb/schema.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
🔇 Additional comments (2)
rollup/sync_service/sync_service.go (1)
251-252
: LGTM! Clean and efficient message version handling.The logic for determining which message version to query is well-implemented. Using
l1MessageV2StartIndex == nil
as a flag to query V1 messages is a clean approach.core/rawdb/accessors_l1_message.go (1)
246-270: LGTM! Clean and efficient V2 iterator implementation. The V2 iterator implementation is well-designed:
- Early return if the V2 start index is not set.
- Efficient check for V2 messages using a single condition.
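The two iterator behaviors praised across this review — the V1 iterator stopping at the V2 start index, and the V2 iterator yielding nothing until that index is set — can be sketched together. A plain slice of queue indices stands in for the database cursor, and the function names are illustrative:

```go
package main

import "fmt"

// filterV1 yields V1 queue indices, stopping as soon as the V2 start index
// (if known) is reached; from that index onward only V2 messages exist.
func filterV1(msgs []uint64, v2Start *uint64) []uint64 {
	var out []uint64
	for _, qi := range msgs {
		if v2Start != nil && qi >= *v2Start {
			break
		}
		out = append(out, qi)
	}
	return out
}

// filterV2 yields V2 messages only once the start index is known; before
// that it returns nothing (the early-return case noted in the review).
func filterV2(msgs []uint64, v2Start *uint64) []uint64 {
	if v2Start == nil {
		return nil
	}
	var out []uint64
	for _, qi := range msgs {
		if qi >= *v2Start {
			out = append(out, qi)
		}
	}
	return out
}

func main() {
	start := uint64(3)
	msgs := []uint64{0, 1, 2, 3, 4}
	fmt.Println(filterV1(msgs, &start)) // [0 1 2]
	fmt.Println(filterV2(msgs, &start)) // [3 4]
	fmt.Println(filterV2(msgs, nil))    // []
}
```

Because both sides pivot on the same stored start index, every queue index is classified as exactly one of V1 or V2, with no overlap and no gap.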
1. Purpose or design rationale of this PR
This PR implements reading L1 messages from the new MessageQueueV2 and the transition logic from V1 to V2 for the EuclidV2 fork.
L1 messages V1 are only compatible before EuclidV2. L1 messages from MessageQueueV2 are only compatible from/after the EuclidV2 fork. As such we need to:
- Before EuclidV2: continue processing L1 messages V1, do not yet process L1 messages V2
- From EuclidV2: stop processing L1 messages V1, only process L1 messages V2
To support this, the following changes are implemented in this PR:
- SyncService tracks MessageV2StartIndex: the first index at which we've seen a V2 message. After this there won't be any V1 messages from the contracts anymore.
- When reading L1 messages, MessageV2StartIndex is checked to guarantee that only V1 or V2 messages are returned respectively.
- BlockValidator is adjusted to check for V2 messages, and skipping of L1 messages is removed.
- ScrollWorker is adjusted to read only V2 messages after the EuclidV2 fork.
Note: Since the implementation, and therefore the address, of MessageQueueV2 is not yet available, this PR adds a flag --l1.disablemqv2 to explicitly disable MessageQueueV2 when querying for L1 messages. This way the code can be used as-is for regression testing.
2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning
Has the version in params/version.go been updated?
4. Breaking change label
Does this PR have the breaking-change label?
Summary by CodeRabbit
New Features
Enhancements
Tests
Version Update