
store claimTx into storage every deposit #544

Open · wants to merge 1 commit into base: develop

Conversation


@chengzhinei chengzhinei commented Nov 9, 2023

Closes #.

What does this PR do?

There is a serious problem in how monitorTxs handles ReviewMonitoredTx and updates mTx.Nonce:

[screenshot: the nonce-update logic in ReviewMonitoredTx]

Here, mTx.Nonce may be bigger than the freshly fetched nonce, in which case the nonce is never updated and none of the mTxs can ever be sent.
We have seen this bug happen on X1; it left the bridge hanging until we applied this fix.
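
For clarity, here is a minimal sketch in Go of the proposed nonce handling. The types and names (monitoredTx, nonceGetter, reviewNonce) are simplified stand-ins for illustration, not the service's exact identifiers:

```go
package claimtxman // illustrative package name

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// monitoredTx keeps only the fields relevant here; the real struct
// in the bridge service carries more.
type monitoredTx struct {
	From  common.Address
	Nonce uint64
}

// nonceGetter is a hypothetical stand-in for the etherman client.
type nonceGetter interface {
	CurrentNonce(ctx context.Context, addr common.Address) (uint64, error)
}

func reviewNonce(ctx context.Context, eth nonceGetter, mTx *monitoredTx) error {
	nonce, err := eth.CurrentNonce(ctx, mTx.From)
	if err != nil {
		return fmt.Errorf("failed to get nonce for %s: %w", mTx.From, err)
	}
	// Buggy behavior described in this PR: update only when the fetched
	// nonce is higher, i.e. `if nonce > mTx.Nonce { mTx.Nonce = nonce }`.
	// If mTx.Nonce is already above the account nonce, it is never
	// corrected, so the tx can never be mined.
	// Proposed fix: always adopt the freshly queried nonce.
	mTx.Nonce = nonce
	return nil
}
```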
This bug then caused the next problem: the GER included more than 70,000 txs, and it took more than 10 hours to create the claimTxs.
An ExitRoot may include many deposits. During processDepositStatus(), a claimTx is built for each deposit (one claimTx can take about 1 s), but the claimTxs are only written to the database on storage.Commit, so the whole pass can take more than 10 minutes. During that window, monitorTxs cannot see the claimTxs and no claimTx is sent to L2, so claiming slows down.
My change builds a claimTx for every deposit and stores it into the database immediately.
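
A sketch of the per-deposit persistence, again with hypothetical names (Deposit, ClaimTx, ClaimStorage, buildClaimTx); the real logic lives around processDepositStatus() in the bridge service:

```go
package claimtxman // illustrative package name

import "context"

type Deposit struct{ Index uint64 }
type ClaimTx struct{ DepositIndex uint64 }

// ClaimStorage persists one claimTx per call in its own short DB write.
type ClaimStorage interface {
	AddClaimTx(ctx context.Context, tx ClaimTx) error
}

func processDeposits(ctx context.Context, deposits []Deposit, s ClaimStorage,
	buildClaimTx func(context.Context, Deposit) (ClaimTx, error)) error {
	for _, d := range deposits {
		tx, err := buildClaimTx(ctx, d) // building one claimTx can take ~1 s
		if err != nil {
			return err
		}
		// Before: claimTxs were held in memory and written only on the
		// final storage.Commit, so monitorTxs saw nothing until the whole
		// batch finished. After: persist immediately so monitorTxs can
		// pick the tx up and send it to L2 right away.
		if err := s.AddClaimTx(ctx, tx); err != nil {
			return err
		}
	}
	return nil
}
```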

Reviewers

Main reviewers:

Codeowner reviewers:

  • @-Alice
  • @-Bob


cla-bot bot commented Nov 9, 2023

We require contributors/corporates @chengzhinei to read our Contributor License Agreement; please check the Individual CLA document / Corporate CLA document.

2 similar comments


sonarqubecloud bot commented Nov 9, 2023

SonarCloud Quality Gate failed.

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 5 (rating A)

Coverage: no coverage information
Duplication: 30.6%

ARR552 (Contributor) commented Nov 22, 2023

Hi, I don't see the problem, since the nonce is only updated if the service gets a "nonce too low" error. As all txs are sent synchronously, each of them must consume its nonce (successful tx or revert). I think this situation could only be a problem in your current implementation.
On the other hand, I see that your PR contains many other changes, not only the fix you described.

chengzhinei (Author) commented Nov 22, 2023

> Hi, I don't see the problem, since the nonce is only updated if the service gets a "nonce too low" error. As all txs are sent synchronously, each of them must consume its nonce (successful tx or revert). I think this situation could only be a problem in your current implementation. On the other hand, I see that your PR contains many other changes, not only the fix you described.

Thanks for your reply~
I'm sure this problem is not limited to the X1 implementation.
I ran a mock environment with the [0xPolygonHermez](https://github.com/0xPolygonHermez/zkevm-bridge-service/releases/tag/v0.3.0) code and kept sending bridge transactions as a test, and the problem happened every time. I have not found the root cause of why the nonce ends up too low, but the problem always occurs.
In my opinion, once execution enters ReviewMonitoredTx(), mTx.Nonce should be updated to the latest value, whether the new nonce is lower or higher.

Besides, the other changes aim to solve the problem I described above: "this bug caused the next problem: the GER included more than 70,000 txs, and it took more than 10 hours to create the claimTxs." The change stores each claimTx into storage as soon as its deposit is processed, instead of building the claimTxs for all deposits and only then adding them all to storage.
