
Not able to sync Cronos Testnet from scratch or apply public snapshot #1737

Open

Igorarg91 opened this issue Feb 3, 2025 · 5 comments

@Igorarg91

Describe the bug
Syncing from scratch based on the Cronos EVM Docs fails on image 0.7.0-rc1-testnet. I have tried many times, and the node gets stuck at appHeight=1739311.

To Reproduce
Steps to reproduce the behavior:

  1. Following the guide, deploy a Cronos testnet node with image cronos_0.6.0-testnet and a fresh, empty volume.
  2. Wait until the node syncs to block 1553700.
  3. Upgrade the node to image cronos_0.7.0-rc1-testnet.
  4. The node continues syncing until block 1739311 and gets stuck at that block.
  5. The following logs loop while the node is stuck:
2:18PM INF service start impl="Peer{MConn{35.206.109.1:26656} b8e6d6e16d236fa6a7101316d96de718200c500c out}" module=p2p msg={} peer={"id":"b8e6d6e16d236fa6a7101316d96de718200c500c","ip":"35.206.109.1","port":26656} server=node
2:18PM INF service start impl=MConn{35.206.109.1:26656} module=p2p msg={} peer={"id":"b8e6d6e16d236fa6a7101316d96de718200c500c","ip":"35.206.109.1","port":26656} server=node
2:18PM INF Added peer module=p2p peer={"Data":{},"Logger":{}} server=node
2:18PM INF invalid peer blockHeight=5880540 module=blockchain peer=b8e6d6e16d236fa6a7101316d96de718200c500c server=node
2:18PM ERR Stopping peer for error err="error with peer b8e6d6e16d236fa6a7101316d96de718200c500c: invalid peer" module=p2p peer={"Data":{},"Logger":{}} server=node
2:18PM INF service stop impl={"Logger":{}} module=p2p msg={} peer={"id":"b8e6d6e16d236fa6a7101316d96de718200c500c","ip":"35.206.109.1","port":26656} server=node
2:18PM INF service stop impl={"Data":{},"Logger":{}} module=p2p msg={} peer={"id":"b8e6d6e16d236fa6a7101316d96de718200c500c","ip":"35.206.109.1","port":26656} server=node
2:18PM INF Reconnecting to peer addr={"id":"b8e6d6e16d236fa6a7101316d96de718200c500c","ip":"35.206.109.1","port":26656} module=p2p server=node
2:18PM INF Dialing peer address={"id":"b8e6d6e16d236fa6a7101316d96de718200c500c","ip":"35.206.109.1","port":26656} module=p2p server=node
2:18PM INF service start impl="Peer{MConn{35.206.109.1:26656} b8e6d6e16d236fa6a7101316d96de718200c500c out}" module=p2p msg={} peer={"id":"b8e6d6e16d236fa6a7101316d96de718200c500c","ip":"35.206.109.1","port":26656} server=node
2:18PM INF service start impl=MConn{35.206.109.1:26656} module=p2p msg={} peer={"id":"b8e6d6e16d236fa6a7101316d96de718200c500c","ip":"35.206.109.1","port":26656} server=node
2:18PM INF Added peer module=p2p peer={"Data":{},"Logger":{}} server=node

Expected behavior
The node continues syncing until block 1869000.

Desktop (please complete the following information):

  • OS: debian:bullseye-slim
  • app.toml
# config file template https://raw.githubusercontent.com/crypto-org-chain/cronos-testnets/main/cronostestnet_338-3/app.toml
minimum-gas-prices = "5000000000000basecro"

# default: the last 100 states are kept in addition to every 500th state; pruning at 10 block intervals
# nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node)
# everything: all saved states will be deleted, storing only the current and previous state; pruning at 10 block intervals
# custom: allow pruning options to be manually specified through 'pruning-keep-recent', 'pruning-keep-every', and 'pruning-interval'
pruning = "default"

# These are applied if and only if the pruning strategy is custom.
pruning-keep-recent = "0"
pruning-keep-every = "0"
pruning-interval = "0"

halt-height = 0
halt-time = 0
min-retain-blocks = 0

inter-block-cache = true
index-events = []

# IavlCacheSize set the size of the iavl tree cache.
# Default cache size is 50mb.
iavl-cache-size = 781250

[telemetry]

# Prefixed with keys to separate services.
service-name = ""

# Enabled enables the application telemetry functionality. When enabled,
# an in-memory sink is also enabled by default. Operators may also enable
# other sinks such as Prometheus.
enabled = false

# Enable prefixing gauge values with hostname.
enable-hostname = false

# Enable adding hostname to labels.
enable-hostname-label = false

# Enable adding service to labels.
enable-service-label = false

# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink.
prometheus-retention-time = 0

# GlobalLabels defines a global set of name/value label tuples applied to all
# metrics emitted using the wrapper functions defined in telemetry package.
#
# Example:
# [["chain_id", "cosmoshub-1"]]
global-labels = [
]

[api]
# Enable defines if the API server should be enabled.
enable = false
# Swagger defines if swagger documentation should automatically be registered.
swagger = false
# Address defines the API server to listen on.
address = "tcp://0.0.0.0:1317"
# MaxOpenConnections defines the number of maximum open connections.
max-open-connections = 1000
# RPCReadTimeout defines the Tendermint RPC read timeout (in seconds).
rpc-read-timeout = 10
# RPCWriteTimeout defines the Tendermint RPC write timeout (in seconds).
rpc-write-timeout = 0
# RPCMaxBodyBytes defines the Tendermint maximum response body (in bytes).
rpc-max-body-bytes = 1000000
# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk).
enabled-unsafe-cors = false

[rosetta]
enable = false
address = ":8080"
blockchain = "app"
network = "network"
retries = 3
offline = false

[grpc]
enable = false
address = "0.0.0.0:9090"

# State sync snapshots allow other nodes to rapidly join the network without replaying historical
# blocks, instead downloading and applying a snapshot of the application state at a given height.
[state-sync]
snapshot-interval = 0
snapshot-keep-recent = 2

[evm]

# Tracer defines the 'vm.Tracer' type that the EVM will use when the node is run in
# debug mode. To enable tracing use the '--trace' flag when starting your node.
# Valid types are: json|struct|access_list|markdown
tracer = "json"

# MaxTxGasWanted defines the gas wanted for each eth tx returned in ante handler in check tx mode.
max-tx-gas-wanted = 500000

block-executor = "block-stm"
block-stm-workers = 0
block-stm-pre-estimate = false

[json-rpc]

# Enable defines if the JSON-RPC server should be enabled.
enable = true
address = "0.0.0.0:8545"
ws-address = "0.0.0.0:8546"

# API defines a list of JSON-RPC namespaces that should be enabled
# Example: "eth,txpool,personal,net,debug,web3"
api = "eth,net,web3"

# GasCap sets a cap on gas that can be used in eth_call/estimateGas (0=infinite). Default: 25,000,000.
gas-cap = 25000000

# EVMTimeout is the global timeout for eth_call. Default: 5s.
evm-timeout = "4m0s"

# TxFeeCap is the global tx-fee cap for send transaction. Default: 1eth.
txfee-cap = 1
# FilterCap sets the global cap for total number of filters that can be created
filter-cap = 200
# FeeHistoryCap sets the global cap for total number of blocks that can be fetched
feehistory-cap = 100
# LogsCap defines the max number of results can be returned from single 'eth_getLogs' query.
logs-cap = 10000
# BlockRangeCap defines the max block range allowed for 'eth_getLogs' query.
block-range-cap = 2000
# HTTPTimeout is the read/write timeout of http json-rpc server.
http-timeout = "4m0s"
# HTTPIdleTimeout is the idle timeout of http json-rpc server.
http-idle-timeout = "4m0s"

[tls]
# Certificate path defines the cert.pem file path for the TLS configuration.
certificate-path = ""
# Key path defines the key.pem file path for the TLS configuration.
key-path = ""
[versiondb]
enable = true

Additional context
As an alternative, I tried to sync the node using a snapshot the Cronos team shared in a Discord ticket, but the node failed again.
I used image 1.4.2-testnet for this snapshot, but the node can't open the DB, failing with this error:

[Image: screenshot of the DB open error]

@yihuang
Collaborator

yihuang commented Feb 4, 2025

Hi, can you post the full log file? We might need to look for more error messages when it gets stuck at height 1739311.

@Igorarg91
Author

Igorarg91 commented Feb 9, 2025

Hello, I made one more attempt to sync from scratch, with the same result, and managed to collect some data.
Block of logs around the last synced blocks:

{"pc":150,"op":91,"gas":"0x37a134","gasCost":"0x1","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x240e","0x240e"],"returnData":"0x","depth":1,"refund":0,"opName":"JUMPDEST","error":""}
{"pc":151,"op":80,"gas":"0x37a133","gasCost":"0x2","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x240e","0x240e"],"returnData":"0x","depth":1,"refund":0,"opName":"POP","error":""}
{"pc":152,"op":80,"gas":"0x37a131","gasCost":"0x2","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x240e"],"returnData":"0x","depth":1,"refund":0,"opName":"POP","error":""}
{"pc":153,"op":86,"gas":"0x37a12f","gasCost":"0x8","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a"],"returnData":"0x","depth":1,"refund":0,"opName":"JUMP","error":""}
{"pc":74,"op":91,"gas":"0x37a127","gasCost":"0x1","memory":"0x","memSize":96,"stack":["0xfe0d94c1"],"returnData":"0x","depth":1,"refund":0,"opName":"JUMPDEST","error":""}
{"pc":75,"op":0,"gas":"0x37a126","gasCost":"0x0","memory":"0x","memSize":96,"stack":["0xfe0d94c1"],"returnData":"0x","depth":1,"refund":0,"opName":"STOP","error":""}
{"output":"","gasUsed":"0x121fe54","time":43626060706}
9:09PM INF executed block height=1729195 module=state num_invalid_txs=0 num_valid_txs=1 server=node
9:09PM INF commit synced commit=436F6D6D697449447B5B313120333920353120353120313231203231382032343520323436203431203839203139392031302031343720313733203137372031393020313833203920313335203232342032312031333420313533203839203633203435203133332039332031313820323130203330203135385D3A3141363241427D
9:09PM INF committed state app_hash=0B27333379DAF5F62959C70A93ADB1BEB70987E0158699593F2D855D76D21E9E height=1729195 module=state num_txs=1 server=node
9:09PM INF indexed block height=1729195 module=txindex server=node
{"pc":0,"op":96,"gas":"0x1599f7a","gasCost":"0x3","memory":"0x","memSize":0,"stack":[],"returnData":"0x","depth":1,"refund":0,"opName":"PUSH1","error":""}
{"pc":2,"op":96,"gas":"0x1599f77","gasCost":"0x3","memory":"0x","memSize":0,"stack":["0x80"],"returnData":"0x","depth":1,"refund":0,"opName":"PUSH1","error":""}
{"pc":4,"op":82,"gas":"0x1599f74","gasCost":"0xc","memory":"0x","memSize":96,"stack":["0x80","0x40"],"returnData":"0x","depth":1,"refund":0,"opName":"MSTORE","error":""}
{"pc":5,"op":52,"gas":"0x1599f68","gasCost":"0x2","memory":"0x","memSize":96,"stack":[],"returnData":"0x","depth":1,"refund":0,"opName":"CALLVALUE","error":""}
{"pc":6,"op":128,"gas":"0x1599f66","gasCost":"0x3","memory":"0x","memSize":96,"stack":["0x0"],"returnData":"0x","depth":1,"refund":0,"opName":"DUP1","error":""}
{"pc":7,"op":21,"gas":"0x1599f63","gasCost":"0x3","memory":"0x","memSize":96,"stack":["0x0","0x0"],"returnData":"0x","depth":1,"refund":0,"opName":"ISZERO","error":""}

Resource usage and head-lag stats:

[Images: resource usage and head-lag charts]

At 21:13 the pod was evicted due to disk pressure on the Kubernetes node. There was nothing else in the logs except the trace output.
The last logs were:

{"pc":309,"op":86,"gas":"0x12229c5","gasCost":"0x8","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x2852","0x6e5","0x186a0","0x7c","0xa","0x186a0","0x0","0x136","0x186a0","0x249"],"returnData":"0x","depth":1,"refund":0,"opName":"JUMP","error":""}
{"pc":585,"op":91,"gas":"0x12229bd","gasCost":"0x1","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x2852","0x6e5","0x186a0","0x7c","0xa","0x186a0","0x0","0x136","0x186a0"],"returnData":"0x","depth":1,"refund":0,"opName":"JUMPDEST","error":""}
{"pc":586,"op":96,"gas":"0x12229bc","gasCost":"0x3","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x2852","0x6e5","0x186a0","0x7c","0xa","0x186a0","0x0","0x136","0x186a0"],"returnData":"0x","depth":1,"refund":0,"opName":"PUSH1","error":""}
{"pc":588,"op":129,"gas":"0x12229b9","gasCost":"0x3","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x2852","0x6e5","0x186a0","0x7c","0xa","0x186a0","0x0","0x136","0x186a0","0x0"],"returnData":"0x","depth":1,"refund":0,"opName":"DUP2","error":""}
{"pc":589,"op":144,"gas":"0x12229b6","gasCost":"0x3","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x2852","0x6e5","0x186a0","0x7c","0xa","0x186a0","0x0","0x136","0x186a0","0x0","0x186a0"],"returnData":"0x","depth":1,"refund":0,"opName":"SWAP1","error":""}
{"pc":590,"op":80,"gas":"0x12229b3","gasCost":"0x2","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x2852","0x6e5","0x186a0","0x7c","0xa","0x186a0","0x0","0x136","0x186a0","0x186a0","0x0"],"returnData":"0x","depth":1,"refund":0,"opName":"POP","error":""}
{"pc":591,"op":145,"gas":"0x12229b1","gasCost":"0x3","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x2852","0x6e5","0x186a0","0x7c","0xa","0x186a0","0x0","0x136","0x186a0","0x186a0"],"returnData":"0x","depth":1,"refund":0,"opName":"SWAP2","error":""}
{"pc":592,"op":144,"gas":"0x12229ae","gasCost":"0x3","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x2852","0x6e5","0x186a0","0x7c","0xa","0x186a0","0x0","0x186a0","0x186a0","0x136"],"returnData":"0x","depth":1,"refund":0,"opName":"SWAP1","error":""}
{"pc":593,"op":80,"gas":"0x12229ab","gasCost":"0x2","memory":"0x","memSize":96,"stack":["0xfe0d94c1","0x4a","0x2852","0x6e5","0x186a0","0x7c","0xa","0x186a0","0x0","0x186a0","0x136","0x186a0"],"returnData":"0x","depth":1,"refund":0,"opName":"POP","error":""}

and the node keeps emitting this kind of log output in a loop.

@mmsqe
Collaborator

mmsqe commented Feb 10, 2025

Hi @Igorarg91, have you tried syncing with tracer = ""?

@Igorarg91
Author

Igorarg91 commented Feb 10, 2025

@mmsqe, when I launched the node on 0.6.0-testnet I got an error about setting up the tracer. But now, after resetting it to tracer = "" on 0.7.0-rc1, the node continues syncing past the point where it was stuck.
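For reference, the change described here can be applied to app.toml with a one-line sed edit. The file path below is illustrative, and the [evm] fragment is a minimal stand-in for the full config shown earlier in the issue:

```shell
# Minimal stand-in for the [evm] section of app.toml (path is illustrative)
cat > /tmp/app.toml <<'EOF'
[evm]
tracer = "json"
EOF

# Disable the EVM tracer, which is expensive while the node is syncing
sed -i 's/^tracer = .*/tracer = ""/' /tmp/app.toml

grep '^tracer' /tmp/app.toml   # -> tracer = ""
```

After editing, restart the node so it picks up the new app.toml.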

@mmsqe
Collaborator

mmsqe commented Feb 11, 2025

Yes, you can switch to an empty tracer after v0.6.10; the tracer is expensive when syncing.
