Fix clippy #83
Signed-off-by: Alexandru Vasile <[email protected]>
For testing, I've left this PR + #61 running a full Kusama node with:
LGTM!
```rust
#![allow(clippy::single_match)]
#![allow(clippy::result_large_err)]
#![allow(clippy::redundant_pattern_matching)]
#![allow(clippy::type_complexity)]
#![allow(clippy::result_unit_err)]
#![allow(clippy::should_implement_trait)]
#![allow(clippy::too_many_arguments)]
#![allow(clippy::assign_op_pattern)]
#![allow(clippy::match_like_matches_macro)]
```
Should we revisit this later?
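For context, here is a small, hypothetical illustration (not code from this repository) of one of the suppressed lints, `clippy::match_like_matches_macro`, together with the form clippy would suggest instead:

```rust
// Hypothetical example of code that `clippy::match_like_matches_macro` flags;
// not taken from this repository.
fn is_dialing(state: Option<&str>) -> bool {
    // Flagged form: a `match` whose arms only produce `true` / `false`.
    match state {
        Some("dialing") => true,
        _ => false,
    }
    // Clippy's suggested rewrite:
    // matches!(state, Some("dialing"))
}
```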
This PR ensures that litep2p does not panic when decoding public keys from a received TCP noise handshake. The code operated under the assumption that only the `ed25519` key is valid in the context of Substrate. However, peers could still use a different key (`rsa` / `ecdsa`) and cause the code to panic. In those cases, an error is now returned, which terminates the negotiation handshake.

Discovered while testing a sync node with the litep2p backend on Kusama as part of #83.

```bash
Version: 1.10.0-cd9d08d6311

 0: sp_panic_handler::set::{{closure}}
 1: std::panicking::rust_panic_with_hook
 2: std::panicking::begin_panic_handler::{{closure}}
 3: std::sys_common::backtrace::__rust_end_short_backtrace
 4: rust_begin_unwind
 5: core::panicking::panic_fmt
 6: <litep2p::crypto::PublicKey as core::convert::TryFrom<litep2p::crypto::keys_proto::PublicKey>>::try_from
 7: litep2p::crypto::PublicKey::from_protobuf_encoding
 8: litep2p::crypto::noise::parse_peer_id
 9: litep2p::transport::tcp::connection::TcpConnection::negotiate_connection::{{closure}}
10: <tokio::time::timeout::Timeout<T> as core::future::future::Future>::poll
11: <litep2p::transport::tcp::TcpTransport as litep2p::transport::Transport>::negotiate::{{closure}}
12: <futures_util::stream::futures_unordered::FuturesUnordered<Fut> as futures_core::stream::Stream>::poll_next
13: <litep2p::transport::tcp::TcpTransport as futures_core::stream::Stream>::poll_next
14: <litep2p::transport::manager::TransportContext as futures_core::stream::Stream>::poll_next
15: litep2p::transport::manager::TransportManager::next::{{closure}}
16: <tokio::future::poll_fn::PollFn<F> as core::future::future::Future>::poll
17: <sc_network::litep2p::Litep2pNetworkBackend as sc_network::service::traits::NetworkBackend<B,H>>::run::{{closure}}
18: sc_service::build_network_future::{{closure}}::{{closure}}::{{closure}}
19: <futures_util::future::poll_fn::PollFn<F> as core::future::future::Future>::poll
20: <sc_service::task_manager::prometheus_future::PrometheusFuture<T> as core::future::future::Future>::poll
21: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
22: <tracing_futures::Instrumented<T> as core::future::future::Future>::poll
23: tokio::runtime::park::CachedParkThread::block_on
24: tokio::runtime::context::runtime::enter_runtime
25: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
26: tokio::runtime::task::core::Core<T,S>::poll
27: tokio::runtime::task::harness::Harness<T,S>::poll
28: tokio::runtime::blocking::pool::Inner::run
29: std::sys_common::backtrace::__rust_begin_short_backtrace
30: core::ops::function::FnOnce::call_once{{vtable.shim}}
31: std::sys::pal::unix::thread::Thread::new::thread_start
32: <unknown>
33: <unknown>

Thread 'tokio-runtime-worker' panicked at 'not implemented: unsupported key type', /home/ubuntu/.cargo/git/checkouts/litep2p-2515ad90543f141a/153d388/src/crypto/mod.rs:103
```

cc @dmitry-markin

Signed-off-by: Alexandru Vasile <[email protected]>
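For reference, a minimal sketch of the non-panicking pattern described above; the type and variant names are illustrative and do not match the exact litep2p definitions:

```rust
// Sketch of returning an error instead of panicking on unsupported key types.
// Names are illustrative, not the actual litep2p code.
#[derive(Debug)]
enum KeyType {
    Ed25519,
    Rsa,
    Ecdsa,
}

#[derive(Debug)]
enum ParseError {
    /// The peer presented a key type we do not support (e.g. `rsa` / `ecdsa`).
    UnsupportedKeyType(KeyType),
    /// The key bytes were malformed.
    InvalidKey,
}

/// A decoded ed25519 public key (raw 32 bytes in this sketch).
struct PublicKey([u8; 32]);

fn public_key_from_protobuf(key_type: KeyType, data: &[u8]) -> Result<PublicKey, ParseError> {
    match key_type {
        // Only ed25519 is supported in the Substrate context.
        KeyType::Ed25519 => {
            let bytes: [u8; 32] = data.try_into().map_err(|_| ParseError::InvalidKey)?;
            Ok(PublicKey(bytes))
        }
        // Previously this path hit `unimplemented!()` and took the node down;
        // returning an error lets the caller abort only the offending handshake.
        other => Err(ParseError::UnsupportedKeyType(other)),
    }
}
```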
### Testing Results

The warp-sync node is producing blocks and has been running for roughly 20h. Warnings observed so far:

```
WARN tokio-runtime-worker telemetry: ❌ Error while dialing /dns/telemetry.polkadot.io/tcp/443/x-parity-wss/%2Fsubmit%2F: Custom { kind: Other, error: Timeout }
WARN tokio-runtime-worker litep2p::ipfs::identify: inbound identify substream opened for peer who doesn't exist peer=PeerId("12D3KooWF3PWbXdGEuT35nBh3MgECtxnHng3s5c5QKapoDZMy38z") protocol=/ipfs/id/1.0.0
WARN tokio-runtime-worker litep2p::ipfs::identify: inbound identify substream opened for peer who doesn't exist peer=PeerId("12D3KooWF3PWbXdGEuT35nBh3MgECtxnHng3s5c5QKapoDZMy38z") protocol=/ipfs/id/1.0.0
WARN tokio-runtime-worker sync: 💔 Ignored block (#22873601 -- 0x649e…eab2) announcement from 12D3KooWBDbBuoE4umuzJnZcUouT4GY6n31BRWHXdAFsThjTKrug because all validation slots for this peer are occupied.
WARN tokio-runtime-worker sync: 💔 Ignored block (#22873781 -- 0xf711…b203) announcement from 12D3KooWBDbBuoE4umuzJnZcUouT4GY6n31BRWHXdAFsThjTKrug because all validation slots for this peer are occupied.
WARN tokio-runtime-worker sync: 💔 Ignored block (#22873782 -- 0x917b…dfa0) announcement from 12D3KooWBDbBuoE4umuzJnZcUouT4GY6n31BRWHXdAFsThjTKrug because all validation slots for this peer are occupied.
WARN tokio-runtime-worker db::notification_pinning: Notification block pinning limit reached. Unpinning block with hash
ERROR tokio-runtime-worker beefy: 🥩 Error: ConsensusReset. Restarting voter.
```

I think we are good to go here. I'll wait for a few more hours and, if everything looks sane, I'll merge this and #61. I'll leave the full node running for a few more days.
## [0.5.0] - 2023-05-24

This is a small patch release that makes the `FindNode` command a bit more robust:

- The `FindNode` command now retains the K (replication factor) best results.
- The `FindNode` command has been updated to handle errors and unexpected states without panicking.

### Changed

- kad: Refactor FindNode query, keep K best results and add tests ([#114](#114))

## [0.4.0] - 2023-05-23

This release introduces breaking changes to the litep2p crate, primarily affecting the `kad` module. Key updates include:

- The `GetRecord` command now exposes all peer records, not just the latest one.
- A new `RecordType` has been introduced to clearly distinguish between locally stored records and those discovered from the network.

Significant refactoring has been done to enhance the efficiency and accuracy of the `kad` module. The updates are as follows:

- The `GetRecord` command now exposes all peer records.
- The `GetRecord` command has been updated to handle errors and unexpected states without panicking.

Additionally, we've improved code coverage in the `kad` module by adding more tests.

### Added

- Add release checklist ([#115](#115))
- Re-export `multihash` & `multiaddr` types ([#79](#79))
- kad: Expose all peer records of `GET_VALUE` query ([#96](#96))

### Changed

- multistream_select: Remove unneeded changelog.md ([#116](#116))
- kad: Refactor `GetRecord` query and add tests ([#97](#97))
- kad/store: Set memory-store on an incoming record for PutRecordTo ([#88](#88))
- multistream: Dialer deny multiple /multistream/1.0.0 headers ([#61](#61))
- kad: Limit MemoryStore entries ([#78](#78))
- Refactor WebRTC code ([#51](#51))
- Revert "Bring `rustfmt.toml` in sync with polkadot-sdk (#71)" ([#74](#74))
- cargo: Update str0m from 0.4.1 to 0.5.1 ([#95](#95))

### Fixed

- Fix clippy ([#83](#83))
- crypto: Don't panic on unsupported key types ([#84](#84))

---------

Signed-off-by: Alexandru Vasile <[email protected]>
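As a rough illustration of the "keep K (replication factor) best results" behaviour mentioned for `FindNode`, here is a minimal sketch; the types and distance function are made up for illustration and are not the actual litep2p implementation:

```rust
// Minimal sketch of "retain the K closest results" for a FindNode-style query.
// Types and the distance function are illustrative, not litep2p's own.
type PeerId = [u8; 32];
type Key = [u8; 32];

/// XOR-based distance, compared lexicographically (Kademlia-style).
fn distance(target: &Key, peer: &PeerId) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = target[i] ^ peer[i];
    }
    out
}

/// Keep only the `k` peers closest to `target`.
fn retain_k_best(target: &Key, mut peers: Vec<PeerId>, k: usize) -> Vec<PeerId> {
    peers.sort_by_key(|peer| distance(target, peer));
    peers.truncate(k);
    peers
}
```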
Builds on #57; however, due to the high number of conflicts, I fixed the errors directly in this PR.
Next Steps