Events

- NewOrder — emitted when a new order is placed on the book. Parameters: Who (AccountId of the order placer), Index (unique order index, u128), OrderPair (trading pair of the order), OrderType (buy or sell), Amount (total order amount, u64), Price (order price, u64).
- CancelOrder — emitted when an order is cancelled. Parameters: Who (AccountId of the order placer), Index (unique order index, u128).
- MatchOrder — emitted when a trade is matched. Parameters: Who1 (AccountId of the passive party), Who2 (AccountId of the active party), Index1 (unique index of the passive order, u128), Index2 (unique index of the active order, u128), OrderPair (trading pair), Amount (matched amount, u64), Price (matched price, u64).
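As a standalone sketch of the event shapes above (not the module's actual declaration — AccountId is simplified to u64 and OrderPair's fields are assumptions here):

```rust
// Sketch of the DEX events. In the runtime these would be declared via
// the events macro with T::AccountId; here AccountId is simplified.
type AccountId = u64;

#[derive(Debug, Clone, PartialEq)]
pub struct OrderPair {
    pub base: Vec<u8>,  // assumed field, e.g. b"BTC".to_vec()
    pub quote: Vec<u8>, // assumed field, e.g. b"USDT".to_vec()
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum OrderType {
    Buy,
    Sell,
}

#[derive(Debug, Clone, PartialEq)]
pub enum Event {
    /// A new order was placed: (who, index, pair, type, amount, price).
    NewOrder(AccountId, u128, OrderPair, OrderType, u64, u64),
    /// An order was cancelled: (who, index).
    CancelOrder(AccountId, u128),
    /// A trade was matched: (passive who, active who, passive index,
    /// active index, pair, amount, price).
    MatchOrder(AccountId, AccountId, u128, u128, OrderPair, u64, u64),
}
```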
DEX queries (order index, trading pairs)

- OrderIndex get(order_index): u128 — current maximum value of the unique order index.
- OrderPairList get(order_pair_list): Vec<OrderPair> — all valid trading pairs.

Order-book data:

- pub BidListHeaderFor get(bidlist_header_for): map (OrderPair,OrderType) => Option<MultiNodeIndex<(OrderPair,OrderType), BidT<T>>> — index of the head of the buy or sell queue for a given trading pair.
- pub BidListTailFor get(bidlist_tail_for): map (OrderPair,OrderType) => Option<MultiNodeIndex<(OrderPair,OrderType), BidT<T>>> — index of the tail of the buy or sell queue for a given trading pair.
- pub BidListCache get(bidlist_cache): map u128 => Option<Node<BidT<T>>>
- NodeId get(nodeid): u128
- pub BidOf get(bid_of): map u128 => Option<BidDetailT<T>> — a given order's entry in the order-book queue.

Order details:

- pub OrderInfor get(order_info): map u128 => Option<OrderInfo<T>> — full information for a given order.

TOKEN queries

- TokenTypeAndPrecision get(token_type_and_precision): map Vec<u8> => Option<u64> — all token types and their corresponding precisions.
- FreeToken get(free_token): map (Vec<u8>,T::AccountId) => u64 — tokens available to an account.
- LockedToken get(locked_token): map (Vec<u8>,T::AccountId) => u64 — tokens locked by open orders.
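The free/locked split can be illustrated with plain Rust maps standing in for the two storage items (a sketch with simplified types, not the module's code): placing an order moves value from FreeToken into LockedToken.

```rust
use std::collections::HashMap;

// Sketch of FreeToken / LockedToken bookkeeping. The token type is its
// Vec<u8> symbol; the account id is simplified to u64.
type TokenType = Vec<u8>;
type AccountId = u64;

#[derive(Default)]
pub struct TokenStore {
    pub free: HashMap<(TokenType, AccountId), u64>,
    pub locked: HashMap<(TokenType, AccountId), u64>,
}

impl TokenStore {
    /// Move `amount` from an account's free balance into its locked
    /// balance, as placing an order would. Fails on insufficient funds.
    pub fn lock(&mut self, token: &TokenType, who: AccountId, amount: u64) -> bool {
        let key = (token.clone(), who);
        let free = self.free.entry(key.clone()).or_insert(0);
        if *free < amount {
            return false;
        }
        *free -= amount;
        *self.locked.entry(key).or_insert(0) += amount;
        true
    }
}
```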
DEX dispatchable calls

- put_order_and_match(origin, orderpair: OrderPair, ordertype: OrderType, amount: u64, price: u64) — place an order (trading pair, buy or sell, amount, price) and attempt to match it.
- cancel_order(origin, orderpair: OrderPair, index: u128) — cancel an order by its unique index.

TOKEN dispatchable calls

- transfer_free_token(origin, dest: T::AccountId, tokentype: Vec<u8>, value: u64) — transfer free tokens to dest.
- add_new_tokentype(origin, tokentypt: Vec<u8>, precision: u64) — register a new token type with its precision.
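A minimal sketch of the price check such matching implies (inferred from the buy/sell semantics above, not taken from the module's source): a buy order crosses the book when its price is at least the best ask, and a sell order when its price is at most the best bid.

```rust
// Does a new order at `new_price` cross against the best opposite-side
// price `book_price`? Buys must bid at least the ask; sells must ask
// at most the bid.
pub fn crosses(is_buy: bool, new_price: u64, book_price: u64) -> bool {
    if is_buy {
        new_price >= book_price
    } else {
        new_price <= book_price
    }
}
```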
At its heart, Substrate is a combination of three technologies: WebAssembly, Libp2p and GRANDPA Consensus. About GRANDPA, see this definition, introduction and formal specification. It is both a library for building new blockchains and a "skeleton key" of a blockchain client, able to synchronize to any Substrate-based chain.
Substrate chains have three distinct features that make them "next-generation": a dynamic, self-defining state-transition function; light-client functionality from day one; and a progressive consensus algorithm with fast block production and adaptive, definite finality. The STF, encoded in WebAssembly, is known as the "runtime". This defines the execute_block function, and can specify everything from the staking algorithm, transaction semantics and logging mechanisms to procedures for replacing any aspect of itself or of the blockchain’s state ("governance"). Because the runtime is entirely dynamic, all of these can be switched out or upgraded at any time. A Substrate chain is very much a "living organism".
See also https://www.parity.io/what-is-substrate/.
Substrate is still an early stage project, and while it has already been used as the basis of major projects like Polkadot, using it is still a significant undertaking. In particular, you should have a good knowledge of blockchain concepts and basic cryptography. Terminology like header, block, client, hash, transaction and signature should be familiar. At present you will need a working knowledge of Rust to be able to do anything interesting (though eventually, we aim for this not to be the case).
Substrate is designed to be used in one of three ways:

- Trivial: By running the substrate binary and configuring it with a genesis block that includes the current demonstration runtime. In this case, you just build Substrate, configure a JSON file and launch your own blockchain. This affords you the least amount of customisability, primarily allowing you to change the genesis parameters of the various included runtime modules such as balances, staking, block-period, fees and governance.
- Modular: By hacking together modules from the Substrate Runtime Module Library into a new runtime and possibly altering or reconfiguring the Substrate client’s block authoring logic. This affords you a very large amount of freedom over your own blockchain’s logic, letting you change datatypes, add or remove modules and, crucially, add your own modules. Much can be changed without touching the block-authoring logic (since it is generic). If this is the case, then the existing Substrate binary can be used for block authoring and syncing. If the block authoring logic needs to be tweaked, then a new, altered block-authoring binary must be built as a separate project and used by validators. This is how the Polkadot relay chain is built, and it should suffice for almost all circumstances in the near to mid-term.
- Generic: The entire Substrate Runtime Module Library can be ignored, and the entire runtime designed and implemented from scratch. If desired, this can be done in a language other than Rust, provided it can target WebAssembly. If the runtime can be made compatible with the existing client’s block authoring logic, then you can simply construct a new genesis block from your Wasm blob and launch your chain with the existing Rust-based Substrate client. If not, then you’ll need to alter the client’s block authoring logic accordingly. This is probably a useless option for most projects right now, but it provides complete flexibility, allowing for a long-term, far-reaching upgrade path for the Substrate paradigm.
Substrate is a blockchain platform with a completely generic state transition function. That said, it does come with both standards and conventions (particularly regarding the Runtime Module Library) regarding underlying data structures. Roughly speaking, these core datatypes correspond to traits in terms of the actual non-negotiable standard and generic structs in terms of the convention.
Header := Parent + ExtrinsicsRoot + StorageRoot + Digest
Block := Header + Extrinsics + Justifications
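These two conventional definitions can be sketched as plain Rust structs (field types here are simplified stand-ins, not the actual generic types from sr-primitives):

```rust
// Sketch only: simplified stand-ins for the conventional Substrate
// datatypes. Real headers are generic over hash and digest types.
type Hash = [u8; 32];

pub struct Header {
    pub parent: Hash,          // hash of the parent block's header
    pub extrinsics_root: Hash, // trie root committing to the extrinsics
    pub storage_root: Hash,    // state root after executing this block
    pub digest: Vec<u8>,       // consensus-specific log items
}

pub struct Block {
    pub header: Header,
    pub extrinsics: Vec<Vec<u8>>,     // opaque, encoded extrinsics
    pub justifications: Vec<Vec<u8>>, // e.g. finality proofs
}
```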
Extrinsics in Substrate are pieces of information from "the outside world" that are contained in the blocks of the chain. You might think "ahh, that means transactions": in fact, no. Extrinsics fall into two broad categories of which only one is transactions. The other is known as inherents. The difference between these two is that transactions are signed and gossiped on the network and can be deemed useful per se. This fits the mold of what you would call transactions in Bitcoin or Ethereum.
Inherents, meanwhile, are not passed on the network and are not signed. They represent data which describes the environment but which cannot call upon anything to prove it such as a signature. Rather they are assumed to be "true" simply because a sufficiently large number of validators have agreed on them being reasonable.
To give an example, there is the timestamp inherent, which sets the current timestamp of the block. This is not a fixed part of Substrate, but does come as part of the Substrate Runtime Module Library to be used as desired. No signature could fundamentally prove that a block were authored at a given time in quite the same way that a signature can "prove" the desire to spend some particular funds. Rather, it is the business of each validator to ensure that they believe the timestamp is set to something reasonable before they agree that the block candidate is valid.
Other examples include the parachain-heads extrinsic in Polkadot and the "note-missed-proposal" extrinsic used in the Substrate Runtime Module Library to determine and punish or deactivate offline validators.
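The distinction between the two categories can be sketched in plain Rust (the payload and signature types here are placeholders, not Substrate's actual generic extrinsic types):

```rust
// Sketch: transactions are signed and gossiped; inherents are unsigned
// data the block author inserts directly (e.g. the timestamp).
pub enum Extrinsic {
    Transaction { signature: [u8; 64], payload: Vec<u8> },
    Inherent { payload: Vec<u8> },
}

/// Only transactions carry a signature to verify.
pub fn is_signed(xt: &Extrinsic) -> bool {
    matches!(xt, Extrinsic::Transaction { .. })
}
```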
Substrate chains all have a runtime. The runtime is a WebAssembly "blob" that includes a number of entry-points. Some entry-points are required as part of the underlying Substrate specification. Others are merely convention and required for the default implementation of the Substrate client to be able to author blocks.
If you want to develop a chain with Substrate, you will need to implement the Core trait. This Core trait generates an API with the minimum necessary functionality to interact with your runtime. A special macro called impl_runtime_apis! is provided to help you implement runtime API traits. All runtime API trait implementations need to be done in one invocation of the impl_runtime_apis! macro. All parameters and return values need to implement parity-codec so that they are encodable and decodable.
Here’s a snippet of the Polkadot API implementation as of PoC-3:
impl_runtime_apis! {
    impl client_api::Core<Block> for Runtime {
        fn version() -> RuntimeVersion {
            VERSION
        }

        fn execute_block(block: Block) {
            Executive::execute_block(block)
        }

        fn initialize_block(header: <Block as BlockT>::Header) {
            Executive::initialize_block(&header)
        }
    }
    // ---snip---
}
The Substrate Runtime Module Library includes functionality for timestamps and slashing. If used, these rely on "trusted" external information being passed in via inherent extrinsics. The Substrate reference block authoring client software will expect to be able to call into the runtime API with collated data (in the case of the reference Substrate authoring client, this is merely the current timestamp and which nodes were offline) in order to return the appropriate extrinsics ready for inclusion. If new inherent extrinsic types and data are to be used in a modified runtime, then it is this function (and its argument type) that would change.
In Substrate, there is a major distinction between blockchain syncing and block authoring ("authoring" is a more general term for what is called "mining" in Bitcoin). The first case might be referred to as a "full node" (or "light node" - Substrate supports both): authoring necessarily requires a synced node and, therefore, all authoring clients must necessarily be able to synchronize. However, the reverse is not true. The primary functionality that authoring nodes have which is not in "sync nodes" is threefold: transaction queue logic, inherent transaction knowledge and BFT consensus logic. BFT consensus logic is provided as a core element of Substrate and can be ignored since it is only exposed in the SDK under the authorities() API entry.
Transaction queue logic in Substrate is designed to be as generic as possible, allowing a runtime to express which transactions are fit for inclusion in a block through the initialize_block and apply_extrinsic calls. However, more subtle aspects like prioritization and replacement policy must currently be expressed "hard coded" as part of the blockchain’s authoring code. That said, Substrate’s reference implementation for a transaction queue should be sufficient for an initial chain implementation.
Inherent extrinsic knowledge is again somewhat generic, and the actual construction of the extrinsics is, by convention, delegated to the "soft code" in the runtime. If there ever needs to be additional extrinsic information in the chain, then the block authoring logic will need to be altered to provide it to the runtime, and the runtime’s inherent_extrinsics call will need to use this extra information in order to construct any additional extrinsic transactions for inclusion in the block.
- 0.1 "PoC-1": PBFT consensus, Wasm runtime engine, basic runtime modules.
- 0.2 "PoC-2": Libp2p
Substrate Node is Substrate’s pre-baked blockchain client. You can run a development node locally or configure a new chain and launch your own global testnet.
To get going as fast as possible, there is a simple script that installs all required dependencies and installs Substrate into your path. Just open a terminal and run:
curl https://getsubstrate.io -sSf | bash
You can start a local Substrate development chain by running substrate --dev.
To create your own global network/cryptocurrency, you’ll need to make a new Substrate Node chain specification file ("chainspec").
First let’s get a template chainspec that you can edit. We’ll use the "staging" chain, a sort of default chain that the node comes pre-configured with:
substrate build-spec --chain=staging > ~/chainspec.json
Now, edit ~/chainspec.json in your editor. There are a lot of individual fields for each module, and one very large one which contains the WebAssembly code blob for this chain. The easiest field to edit is the block period. Change it to 10 (seconds):
"timestamp": {
"minimumPeriod": 10
},
Now with this new chainspec file, you can build a "raw" chain definition for your new chain:
substrate build-spec --chain ~/chainspec.json --raw > ~/mychain.json
This can be fed into Substrate:
substrate --chain ~/mychain.json
It won’t do much until you start producing blocks though, so to do that you’ll need to use the --validator option together with passing the seed for the account(s) that is configured to be the initial authorities:
substrate --chain ~/mychain.json --validator
You can distribute mychain.json so that everyone can synchronize and (depending on your authorities list) validate on your chain.
If you’d actually like to hack on Substrate, you can just grab the source code and build it. Ensure you have Rust and the support software installed:
For Unix-based operating systems, you should run the following commands:
curl https://sh.rustup.rs -sSf | sh
rustup update nightly
rustup target add wasm32-unknown-unknown --toolchain nightly
rustup update stable
cargo install --git https://github.com/alexcrichton/wasm-gc
You will also need to install the following packages:
-
Linux:
sudo apt install cmake pkg-config libssl-dev git clang libclang-dev
-
Linux on ARM:
rust-lld is required for linking wasm, but is missing on non-Tier-1 platforms. So, use this script to build lld and create the symlink /usr/bin/rust-lld to the built binary.
-
Mac:
brew install cmake pkg-config openssl git llvm
To finish installation of Substrate, jump down to shared steps.
If you are trying to set up Substrate on Windows, you should do the following:
- First, you will need to download and install "Build Tools for Visual Studio":
  - You can get it at this link: https://aka.ms/buildtools
  - Run the installation file: vs_buildtools.exe
  - Please ensure the Windows 10 SDK component is included when installing the Visual C++ Build Tools.
  - Restart your computer.
- Next, you need to install Rust:
  - Detailed instructions are provided by the Rust Book.
  - Download from: https://www.rust-lang.org/tools/install
  - Run the installation file: rustup-init.exe. Note that it should not prompt you to install vs_buildtools since you did it in step 1.
  - Choose "Default Installation."
  - To get started, you need Cargo’s bin directory (%USERPROFILE%\.cargo\bin) in your PATH environment variable. Future applications will automatically have the correct environment, but you may need to restart your current shell.
- Then, you will need to run some commands in CMD to set up your Wasm Build Environment:
  rustup update nightly
  rustup update stable
  rustup target add wasm32-unknown-unknown --toolchain nightly
- Next, you install wasm-gc, which is used to slim down Wasm files:
  cargo install --git https://github.com/alexcrichton/wasm-gc --force
- Then, you need to install LLVM: https://releases.llvm.org/download.html
- Next, you need to install OpenSSL, which we will do with vcpkg:
  mkdir \Tools
  cd \Tools
  git clone https://github.com/Microsoft/vcpkg.git
  cd vcpkg
  .\bootstrap-vcpkg.bat
  .\vcpkg.exe install openssl:x64-windows-static
- After, you need to add OpenSSL to your System Variables:
  $env:OPENSSL_DIR = 'C:\Tools\vcpkg\installed\x64-windows-static'
  $env:OPENSSL_STATIC = 'Yes'
  [System.Environment]::SetEnvironmentVariable('OPENSSL_DIR', $env:OPENSSL_DIR, [System.EnvironmentVariableTarget]::User)
  [System.Environment]::SetEnvironmentVariable('OPENSSL_STATIC', $env:OPENSSL_STATIC, [System.EnvironmentVariableTarget]::User)
- Finally, you need to install cmake: https://cmake.org/download/
Then, grab the Substrate source code:
git clone https://github.com/paritytech/substrate.git
cd substrate
Then build the code:
cargo build # Builds all native code
You can run all the tests if you like:
cargo test --all
Or just run the tests of a specific package (e.g. cargo test -p srml-assets).
You can start a development chain with:
cargo run --release -- --dev
Detailed logs may be shown by running the node with the following environment variables set: RUST_LOG=debug RUST_BACKTRACE=1 cargo run --release -- --dev.
If you want to see the multi-node consensus algorithm in action locally, then you can create a local testnet with two validator nodes for Alice and Bob, who are the initial authorities of the genesis chain specification and have been endowed with testnet DOTs. We’ll give each node a name and expose them so they are listed on Telemetry. You’ll need two terminal windows open.
We’ll start Alice’s Substrate node first on the default TCP port 30333, with her chain database stored locally at /tmp/alice. The Bootnode ID of her node is QmRpheLN4JWdAnY7HGJfWFNbfkQCb6tFf4vvA6hgjMZKrR, which is generated from the --node-key value that we specify below:
cargo run --release -- \
--base-path /tmp/alice \
--chain=local \
--alice \
--node-key 0000000000000000000000000000000000000000000000000000000000000001 \
--telemetry-url ws://telemetry.polkadot.io:1024 \
--validator
In the second terminal, we’ll run the following to start Bob’s Substrate node on a different TCP port of 30334, with his chain database stored locally at /tmp/bob. We’ll specify a value for the --bootnodes option that will connect his node to Alice’s Bootnode ID on TCP port 30333:
cargo run --release -- \
--base-path /tmp/bob \
--bootnodes /ip4/127.0.0.1/tcp/30333/p2p/QmRpheLN4JWdAnY7HGJfWFNbfkQCb6tFf4vvA6hgjMZKrR \
--chain=local \
--bob \
--port 30334 \
--telemetry-url ws://telemetry.polkadot.io:1024 \
--validator
Additional Substrate CLI usage options are available and may be shown by running cargo run -- --help.
The WASM binaries are built during the normal cargo build process. To control the WASM binary building, we support multiple environment variables:
- SKIP_WASM_BUILD - Skips building any WASM binary. This is useful when only native code should be recompiled.
- BUILD_DUMMY_WASM_BINARY - Builds dummy WASM binaries. These dummy binaries are empty and useful for cargo check runs.
- WASM_BUILD_TYPE - Sets the build type for building WASM binaries. Supported values are release or debug. By default, the build type is equal to the build type used by the main build.
- TRIGGER_WASM_BUILD - Can be set to trigger a WASM build. On subsequent calls, the value of the variable needs to change. As the WASM builder instructs cargo to watch for file changes, this environment variable should only be required in certain circumstances.
- WASM_TARGET_DIRECTORY - Will copy any built WASM binary to the given directory. The path needs to be absolute.
Each project can be skipped individually by using the environment variable SKIP_PROJECT_NAME_WASM_BUILD, where PROJECT_NAME needs to be replaced by the name of the cargo project, e.g. node-runtime will be NODE_RUNTIME.
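The naming rule can be expressed as a small function (illustrative only — the real mapping is performed by the build tooling; this merely mirrors the uppercase-and-underscore convention described above):

```rust
// Derive the SKIP_<PROJECT_NAME>_WASM_BUILD variable name from a cargo
// project name: hyphens become underscores, letters are upper-cased.
pub fn skip_var(project: &str) -> String {
    let name: String = project
        .chars()
        .map(|c| if c == '-' { '_' } else { c.to_ascii_uppercase() })
        .collect();
    format!("SKIP_{}_WASM_BUILD", name)
}
```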
Flaming Fir is the new testnet for Substrate master (2.0) to test the latest development features. Please note that master is not compatible with the BBQ Birch, Charred Cherry, Dried Danta or Emberic Elm testnets. Ensure you have the dependencies listed above before compiling.
Since Flaming Fir is targeting the master branch, we make absolutely no guarantees of stability and/or persistence of the network. We might reset the chain at any time if it is necessary to deploy new changes. Currently, the validators are running with a client built from d013bd900; if you build from this commit you should be able to successfully sync, but later commits may not work, as new breaking changes may be introduced in master.
Latest known working version: a2a0eb5398d6223e531455b4c155ef053a4a3a2b
git clone https://github.com/paritytech/substrate.git
cd substrate
git checkout -b flaming-fir a2a0eb5398d6223e531455b4c155ef053a4a3a2b
You can run the tests if you like:
cargo test --all
Start your node:
cargo run --release --
To see a list of command line options, enter:
cargo run --release -- --help
For example, you can choose a custom node name:
cargo run --release -- --name my_custom_name
If you are successful, you will see your node syncing at https://telemetry.polkadot.io/#/Flaming%20Fir
Emberic Elm is the testnet for Substrate 1.0. Please note that 1.0 is not compatible with the BBQ Birch, Charred Cherry, Dried Danta or Flaming Fir testnets.
In order to join the Emberic Elm testnet you should build from the v1.0
branch. Ensure you have the dependencies listed above before compiling.
git clone https://github.com/paritytech/substrate.git
cd substrate
git checkout -b v1.0 origin/v1.0
You can then follow the same steps for building and running as described above in Joining the Flaming Fir Testnet.
Keys in Substrate are stored in the keystore in the file system. To store keys into this keystore, you need to use one of the two provided RPC calls. If your keys are encrypted, or should be encrypted by the keystore, you need to supply the password using one of the CLI arguments --password, --password-interactive or --password-filename.
For most users who want to run a validator node, the author_rotateKeys RPC call is sufficient. The RPC call will generate N Session keys for you and return their public keys, where N is the number of session keys configured in the runtime. The output of the RPC call can be used as input for the session::set_keys transaction.
curl -H 'Content-Type: application/json' --data '{ "jsonrpc":"2.0", "method":"author_rotateKeys", "id":1 }' localhost:9933
If the Session keys need to match a fixed seed, they can be set individually key by key. The RPC call expects the key seed and the key type. The key types supported by default in Substrate are listed here, but the user can declare any key type.
curl -H 'Content-Type: application/json' --data '{ "jsonrpc":"2.0", "method":"author_insertKey", "params":["KEY_TYPE", "SEED", "PUBLIC"],"id":1 }' localhost:9933
- KEY_TYPE - needs to be replaced with the 4-character key type identifier.
- SEED - the seed of the key.
- PUBLIC - the public key for the given key.
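As an illustration, the request body shown above can be assembled in Rust (a sketch using string formatting; in practice a JSON library would be more robust, and the parameter values passed in are placeholders):

```rust
// Build the JSON-RPC request body for author_insertKey. The caller
// supplies the 4-character key type, the seed and the public key.
pub fn insert_key_body(key_type: &str, seed: &str, public: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","method":"author_insertKey","params":["{}","{}","{}"],"id":1}}"#,
        key_type, seed, public
    )
}
```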
You can generate documentation for a Substrate Rust package and have it automatically open in your web browser using rustdoc with Cargo (see The Rustdoc Book) by running the following command:
cargo doc --package <spec> --open
Replacing <spec> with one of the following (e.g. cargo doc --package substrate --open):
- All Substrate Packages: substrate
- Substrate Core: substrate, substrate-cli, substrate-client, substrate-client-db, substrate-consensus-common, substrate-consensus-rhd, substrate-executor, substrate-finality-grandpa, substrate-keyring, substrate-keystore, substrate-network, substrate-network-libp2p, substrate-primitives, substrate-rpc, substrate-rpc-servers, substrate-serializer, substrate-service, substrate-service-test, substrate-state-db, substrate-state-machine, substrate-telemetry, substrate-test-client, substrate-test-runtime, substrate-transaction-graph, substrate-transaction-pool, substrate-trie
- Substrate Runtime: sr-api, sr-io, sr-primitives, sr-sandbox, sr-std, sr-version
- Substrate Runtime Module Library (SRML): srml-assets, srml-balances, srml-consensus, srml-contracts, srml-council, srml-democracy, srml-example, srml-executive, srml-metadata, srml-session, srml-staking, srml-support, srml-system, srml-timestamp, srml-treasury
- Node: node-cli, node-consensus, node-executor, node-network, node-primitives, node-runtime
- Subkey: subkey
Document source code for Substrate packages by annotating the source code with documentation comments.
Example (generic):
/// Summary
///
/// Description
///
/// # Panics
///
/// # Errors
///
/// # Safety
///
/// # Examples
///
/// Summary of Example 1
///
/// ```rust
/// // insert example 1 code here
/// ```
///
Important notes:
- Documentation comments must use annotations with a triple slash ///.
- Modules are documented using //!:
  //! Summary (of module)
  //!
  //! Description (of module)
- A special section header is indicated with a hash #.
- The Panics section requires an explanation if the function triggers a panic.
- The Errors section is for describing conditions under which a function or method returns Err(E) if it returns a Result<T, E>.
- The Safety section requires an explanation if the function is unsafe.
- The Examples section includes examples of using the function or method.
- Code block annotations for examples are included between triple graves, as shown above. Instead of including the programming language to use for syntax highlighting as the annotation after the triple graves, alternative annotations include ignore, text, should_panic, or no_run.
- The summary sentence is a short, high-level, single-sentence description of the item's functionality.
- The description paragraph is for details additional to the summary sentence.
- Missing documentation annotations may be used to identify where to generate warnings with #![warn(missing_docs)] or errors with #![deny(missing_docs)].
- Hide documentation for items with #[doc(hidden)].
The code block annotations in the # Examples section may be used as documentation tests and for extended examples.
Important notes:
- Rustdoc will automatically add a main() wrapper around the code block to test it.
- Documentation-as-tests examples are included when running cargo test.