diff --git a/docs/build/build-on-layer-1/builder-guides/_category_.json b/docs/build/build-on-layer-1/builder-guides/_category_.json
new file mode 100644
index 0000000..a7fdb64
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Builder Guides",
+ "position": 10
+}
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/_category_.json b/docs/build/build-on-layer-1/builder-guides/astar_features/_category_.json
new file mode 100644
index 0000000..9987864
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/astar_features/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Use Astar Features",
+ "position": 3
+}
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/astarBase.md b/docs/build/build-on-layer-1/builder-guides/astar_features/astarBase.md
new file mode 100644
index 0000000..907aa0e
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/astar_features/astarBase.md
@@ -0,0 +1,404 @@
+---
+sidebar_position: 2
+---
+
+import Figure from '/src/components/figure'
+
+# How to get started with AstarBase
+
+## Overview
+
+This guide will demonstrate how to get started with AstarBase, an on-chain EVM database for the Astar ecosystem. By the end of this guide, developers will be able to build a simple "Hello World" use case of their own.
+
+## What is AstarBase
+
+AstarBase is an on-chain EVM database that contains the mapping between users' EVM and native addresses. An EVM address (also called an H160 address) is what MetaMask uses, while a native address is also known as an SS58 address. The two are interchangeable through the mapping that AstarBase offers.
+
+The main goal of AstarBase is to enable more end-user use cases in the Astar ecosystem, such as making it easy for projects to reward their users.
+
+### Functions available in AstarBase
+
+AstarBase exposes three major functions.
+
+```solidity
+function isRegistered(address evmAddress)
+ external view
+ returns (bool);
+```
+This function checks whether the given address is registered in AstarBase.
+
+```solidity
+function checkStakerStatus(address evmAddress)
+ external view
+ returns (uint128);
+```
+This function checks whether the pair of addresses (SS58 & EVM) is an active staker in dApps Staking and returns the staked amount.
+
+```solidity
+function checkStakerStatusOnContract(address evmAddress, address stakingContract)
+ external view
+ returns (uint128);
+```
+This function checks whether the pair of addresses (SS58 & EVM) is an active staker in dApps Staking on the specified contract and returns the staked amount.
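+Conceptually, AstarBase behaves like a key-value store keyed by EVM address. As a rough illustration only (a toy in-memory model, not the real on-chain contract; the address and staked amount below are made up), the lookups can be sketched in plain JavaScript:
+
+```javascript
+// Toy model of the AstarBase lookups above. Illustration only:
+// the real data lives in the on-chain contract.
+const registrations = new Map([
+  // EVM (H160) address -> staked amount in dApps Staking
+  ["0x1111111111111111111111111111111111111111", 500n],
+]);
+
+function isRegistered(evmAddress) {
+  return registrations.has(evmAddress);
+}
+
+function checkStakerStatus(evmAddress) {
+  // Like the contract, returns 0 for unregistered or non-staking addresses.
+  return registrations.get(evmAddress) ?? 0n;
+}
+
+console.log(isRegistered("0x1111111111111111111111111111111111111111")); // true
+console.log(checkStakerStatus("0x2222222222222222222222222222222222222222")); // 0n
+```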
+
+## Create a simple dApp using AstarBase
+
+We will build a simple “Hello World” dApp with a basic frontend to show a practical use case of AstarBase.
+
+Our showcase checks whether a given user is registered in AstarBase. One application of this would be identifying loyal users to reward.
+
+### Step 1:
+First, we will create a simple frontend. In this example, we use React by running the commands below.
+
+```bash
+npx create-react-app my-app
+cd my-app
+npm start
+```
+
+### Step 2:
+We slightly modify the frontend in the App.js file so that when a user clicks a button, the console log shows whether a given user is registered in AstarBase.
+
+```jsx
+return (
+  <div className="App">
+    {/* "checkIsRegistered" is a hypothetical handler name; it wraps the contract call shown in Step 6 */}
+    <button onClick={checkIsRegistered}>Check AstarBase registration</button>
+  </div>
+);
+```
+
+### Step 3:
+
+We use a Shibuya address for this example. Add the necessary details, available [here](https://github.com/AstarNetwork/astarbase/blob/main/public/config/config.json), to the App.js file.
+
+```jsx
+const web3 = new Web3(new Web3.providers.HttpProvider("https://evm.shibuya.astar.network"));
+```
+
+### Step 4:
+The ABI is available [here](https://github.com/AstarNetwork/astarbase/blob/main/public/config/register_abi.json), so we can now add it to the App.js file. We put the ABI in the same file to keep things simple, but you can place it in a separate folder to keep your code cleaner.
+
+```jsx
+const abi = [
+ {
+ "inputs": [],
+ "stateMutability": "nonpayable",
+ "type": "constructor"
+ },
+ {
+ "anonymous": false,
+ "inputs": [
+ {
+ "indexed": true,
+ "internalType": "address",
+ "name": "previousOwner",
+ "type": "address"
+ },
+ {
+ "indexed": true,
+ "internalType": "address",
+ "name": "newOwner",
+ "type": "address"
+ }
+ ],
+ "name": "OwnershipTransferred",
+ "type": "event"
+ },
+ {
+ "inputs": [],
+ "name": "DAPPS_STAKING",
+ "outputs": [
+ {
+ "internalType": "contract DappsStaking",
+ "name": "",
+ "type": "address"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [],
+ "name": "SR25519Contract",
+ "outputs": [
+ {
+ "internalType": "contract SR25519",
+ "name": "",
+ "type": "address"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "address",
+ "name": "",
+ "type": "address"
+ }
+ ],
+ "name": "addressMap",
+ "outputs": [
+ {
+ "internalType": "bytes32",
+ "name": "",
+ "type": "bytes32"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [],
+ "name": "beneficiary",
+ "outputs": [
+ {
+ "internalType": "address",
+ "name": "",
+ "type": "address"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "address",
+ "name": "evmAddress",
+ "type": "address"
+ }
+ ],
+ "name": "checkStakerStatus",
+ "outputs": [
+ {
+ "internalType": "uint128",
+ "name": "",
+ "type": "uint128"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "address",
+ "name": "evmAddress",
+ "type": "address"
+ }
+ ],
+ "name": "isRegistered",
+ "outputs": [
+ {
+ "internalType": "bool",
+ "name": "",
+ "type": "bool"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [],
+ "name": "owner",
+ "outputs": [
+ {
+ "internalType": "address",
+ "name": "",
+ "type": "address"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "bool",
+ "name": "_state",
+ "type": "bool"
+ }
+ ],
+ "name": "pause",
+ "outputs": [],
+ "stateMutability": "nonpayable",
+ "type": "function"
+ },
+ {
+ "inputs": [],
+ "name": "paused",
+ "outputs": [
+ {
+ "internalType": "bool",
+ "name": "",
+ "type": "bool"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "bytes32",
+ "name": "ss58PublicKey",
+ "type": "bytes32"
+ },
+ {
+ "internalType": "bytes",
+ "name": "signedMsg",
+ "type": "bytes"
+ }
+ ],
+ "name": "register",
+ "outputs": [],
+ "stateMutability": "nonpayable",
+ "type": "function"
+ },
+ {
+ "inputs": [],
+ "name": "registeredCnt",
+ "outputs": [
+ {
+ "internalType": "uint256",
+ "name": "_value",
+ "type": "uint256"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [],
+ "name": "renounceOwnership",
+ "outputs": [],
+ "stateMutability": "nonpayable",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "address",
+ "name": "_newBeneficiary",
+ "type": "address"
+ }
+ ],
+ "name": "setBeneficiary",
+ "outputs": [],
+ "stateMutability": "nonpayable",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "uint256",
+ "name": "_newCost",
+ "type": "uint256"
+ }
+ ],
+ "name": "setUnregisterFee",
+ "outputs": [],
+ "stateMutability": "nonpayable",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "bytes32",
+ "name": "",
+ "type": "bytes32"
+ }
+ ],
+ "name": "ss58Map",
+ "outputs": [
+ {
+ "internalType": "address",
+ "name": "",
+ "type": "address"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "address",
+ "name": "evmAddress",
+ "type": "address"
+ }
+ ],
+ "name": "sudoUnRegister",
+ "outputs": [],
+ "stateMutability": "nonpayable",
+ "type": "function"
+ },
+ {
+ "inputs": [
+ {
+ "internalType": "address",
+ "name": "newOwner",
+ "type": "address"
+ }
+ ],
+ "name": "transferOwnership",
+ "outputs": [],
+ "stateMutability": "nonpayable",
+ "type": "function"
+ },
+ {
+ "inputs": [],
+ "name": "unRegister",
+ "outputs": [],
+ "stateMutability": "payable",
+ "type": "function"
+ },
+ {
+ "inputs": [],
+ "name": "unregisterFee",
+ "outputs": [
+ {
+ "internalType": "uint256",
+ "name": "",
+ "type": "uint256"
+ }
+ ],
+ "stateMutability": "view",
+ "type": "function"
+ },
+ {
+ "inputs": [],
+ "name": "withdraw",
+ "outputs": [],
+ "stateMutability": "payable",
+ "type": "function"
+ }
+ ];
+```
+
+### Step 5:
+Finally, we add an example contract address, available [here](https://github.com/AstarNetwork/astarbase/blob/main/contract/deployment-info.md). In this example, we use the Shibuya version, but you can use the Astar or Shiden version as well.
+
+```jsx
+const address = "0xF183f51D3E8dfb2513c15B046F848D4a68bd3F5D";
+```
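+Before wiring an address into a contract call, a quick shape check can catch copy-paste mistakes. The helper below is our own illustration (not part of AstarBase): an EVM (H160) address is 20 bytes, i.e. `0x` followed by 40 hex characters.
+
+```javascript
+// Hypothetical helper: check that a string is shaped like an EVM address.
+function looksLikeEvmAddress(addr) {
+  return /^0x[0-9a-fA-F]{40}$/.test(addr);
+}
+
+console.log(looksLikeEvmAddress("0xF183f51D3E8dfb2513c15B046F848D4a68bd3F5D")); // true
+console.log(looksLikeEvmAddress("0x1234")); // false
+```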
+
+### Step 6:
+We will combine everything we wrote in the previous steps. Replace #EVM_ADDRESS with your chosen address in EVM format.
+
+```jsx
+const smartContract = new web3.eth.Contract(abi, address);
+const registered = await smartContract.methods.isRegistered("#EVM_ADDRESS").call();
+console.log(registered);
+```
+
+In the end, this returns whether a given address is registered in AstarBase.
+That’s a wrap! Happy hacking!
+
+## Reference
+
+- [Official Guide for AstarBase](/docs/build/build-on-layer-1/builder-guides/astar_features/astarBase.md)
+- [Official documentation for creating a React app](https://reactjs.org/docs/create-a-new-react-app.html)
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 1.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 1.png
new file mode 100644
index 0000000..8dfb0d5
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 1.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 2.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 2.png
new file mode 100644
index 0000000..cb6b83e
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 2.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 3.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 3.png
new file mode 100644
index 0000000..f395778
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 3.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 4.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 4.png
new file mode 100644
index 0000000..d1004e9
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 4.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 5.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 5.png
new file mode 100644
index 0000000..ce561f6
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 5.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 6.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 6.png
new file mode 100644
index 0000000..308deec
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 6.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 7.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 7.png
new file mode 100644
index 0000000..59aff80
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 7.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 8.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 8.png
new file mode 100644
index 0000000..2e90c74
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled 8.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled.png
new file mode 100644
index 0000000..41efaed
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Remix-cookbook/Untitled.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/1.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/1.png
new file mode 100644
index 0000000..9698e33
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/1.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/2.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/2.png
new file mode 100644
index 0000000..fcf9c5a
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/2.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/3.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/3.png
new file mode 100644
index 0000000..942544e
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/3.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/4.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/4.png
new file mode 100644
index 0000000..e9467f7
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-Truffle-cookbook/4.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled 1.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled 1.png
new file mode 100644
index 0000000..5cc250f
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled 1.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled 2.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled 2.png
new file mode 100644
index 0000000..ec665c5
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled 2.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled 3.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled 3.png
new file mode 100644
index 0000000..c8b858a
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled 3.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled.png
new file mode 100644
index 0000000..99939e6
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/Untitled.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/img b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/img
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/astar_features/img-localchain-cookbook/img
@@ -0,0 +1 @@
+
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img/1.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img/1.png
new file mode 100644
index 0000000..ec5bea4
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img/1.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img/2.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img/2.png
new file mode 100644
index 0000000..5eb3d97
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img/2.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img/3.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img/3.png
new file mode 100644
index 0000000..b9a9506
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img/3.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img/4.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img/4.png
new file mode 100644
index 0000000..d54e865
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img/4.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img/5.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img/5.png
new file mode 100644
index 0000000..b1e8ee4
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img/5.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/img/6.png b/docs/build/build-on-layer-1/builder-guides/astar_features/img/6.png
new file mode 100644
index 0000000..4134739
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/astar_features/img/6.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/run_local_network.md b/docs/build/build-on-layer-1/builder-guides/astar_features/run_local_network.md
new file mode 100644
index 0000000..180113f
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/astar_features/run_local_network.md
@@ -0,0 +1,148 @@
+---
+sidebar_position: 1
+---
+
+# How to run a local Astar blockchain for contract testing
+
+## TL;DR
+
+Compared with using the Shibuya testnet, testing on a local Astar blockchain provides higher throughput, quicker responses, and more privacy, since the network's only node runs on your local device. Running a local Astar blockchain is essential for developing and testing relatively large projects.
+
+In this guide, we will walk you through the process of setting up a local Astar node, running the local blockchain, accessing the blockchain via your local explorer, and configuring the local blockchain in other developer tools.
+
+---
+
+## What is a local Astar blockchain
+
+A local Astar blockchain is a **single-node network** running on your local device. It simulates the on-chain environment of Astar Network and can be used for local testing without any network connection. You can set one up by downloading the latest Astar collator node code from [Github](https://github.com/AstarNetwork/Astar) and building from source, or by directly running the binary built for your environment.
+
+:::info
+Running a local blockchain is common for smart contract development and testing.
+:::
+
+## Why should I run a local Astar blockchain
+
+Compared to the Shibuya testnet, running a local Astar blockchain offers the following benefits:
+
+- Higher throughput and quicker response compared to using Shibuya testnet, which may save you a lot of testing time.
+- Privacy of testing data and development history since the only node is on your local device and only you have access to the network.
+- Self-customized release version and testing conditions. By using a local blockchain for testing and development, you will be able to choose the node release version and customize the testing conditions, e.g. rolling the network back by 10,000 blocks.
+
+---
+
+## Instructions
+### Download the latest Astar-collator binary file
+
+A binary file is an executable program already compiled for a specific environment. In this guide, we demonstrate how to run the local blockchain using binary files, since this is the most widely used approach.
+
+If you prefer building from source code with your local environment, follow the guide [here](https://github.com/AstarNetwork/Astar#building-from-source).
+
+Download the latest release of [the Astar collator](https://github.com/AstarNetwork/Astar/releases) for macOS or Ubuntu:
+
+![Untitled](img-localchain-cookbook/Untitled.png)
+:::tip
+Please make sure you are running macOS or Ubuntu with an appropriate version. For macOS, please use macOS 12.0 or later.
+:::
+:::info
+Please rename the binary file to `astar`, for consistency with the commands used in this tutorial.
+:::
+
+---
+
+### Add execution permission to the binary file
+
+- Change the directory to the folder containing the `astar` binary file
+
+ ```bash
+ cd local-astar-cookbook
+ ```
+
+- Add execution permission to the binary file.
+ **Note**: if you are using a Mac, you may need an extra step to configure the security settings:
+ - Go to Apple menu > System Settings > Privacy & Security.
+ - Under security, add the `astar` binary file that you just downloaded to the whitelist.
+ - Continue with the following command.
+
+ ```bash
+ chmod +x ./astar
+ ```
+
+- Now you can double-check the version of your Astar collator node:
+
+ ```bash
+ ./astar --version
+ ```
+
+
+---
+
+### Configure and run the local blockchain
+
+Run the local network with the following configurations:
+- `--port 30333`: use `port 30333` for P2P TCP connection
+- `--rpc-port 9944`: use `port 9944` for RPC, both WS(s) and HTTP(s)
+- `--rpc-cors all`: accept any origins for HTTP and WebSocket connections
+- `--alice`: enable `Alice` session keys
+- `--dev`: launch the network in development mode
+
+```bash
+./astar --port 30333 --rpc-port 9944 --rpc-cors all --alice --dev
+```
+
+You will be able to see the local Astar collator node info and new blocks after successfully running it.
+![Untitled](img-localchain-cookbook/Untitled%201.png)
+![Untitled](img-localchain-cookbook/Untitled%202.png)
+
+You can check the full list of subcommands and their explanations using the following command:
+
+```bash
+./astar help
+```
+
+
+---
+
+### Access the local blockchain via explorer
+
+- Go to the [local explorer](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/explorer)
+- You will be able to view recent blocks, account info, etc., just as in the on-chain environment of Astar Network
+
+![Untitled](img-localchain-cookbook/Untitled%203.png)
+
+
+---
+
+### Configure the local blockchain in other dev tools
+
+When using the local blockchain with other dev tools, including MetaMask, Hardhat, Remix, and Truffle, configure the network with the following info to connect to it:
+
+| Setting | Value |
+| --- | --- |
+| Network Name | Local Astar Testnet 0 |
+| New RPC URL | http://127.0.0.1:9944 |
+| Chain ID | 4369 |
+| Currency Symbol | ASTL |
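+Some wallets expect the chain ID in hexadecimal rather than decimal when a network is added programmatically (for example via MetaMask's `wallet_addEthereumChain` RPC call). Converting the decimal chain ID from the table above is a one-liner:
+
+```javascript
+// The local dev chain ID from the table above, in the hex form wallet RPCs use.
+const chainId = 4369;
+const hexChainId = "0x" + chainId.toString(16);
+console.log(hexChainId); // "0x1111"
+```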
+
+---
+
+## Appendix: useful subcommands for smart contract testing
+
+The following subcommands of the `astar` binary are useful when testing smart contracts on a local network:
+
+- `build-spec`: build a chain specification
+- `check-block`: validate blocks
+- `export-blocks`: export blocks
+- `export-genesis-state`: export the genesis state of the parachain
+- `export-genesis-wasm`: export the genesis wasm of the parachain
+- `export-state`: export the state of a given block into a chain spec
+- `help`: print this message or the help of the given subcommand(s)
+- `import-blocks`: import blocks
+- `key`: key management cli utilities
+- `purge-chain`: remove the whole chain
+- `revert`: revert the chain to a previous state
+- `sign`: sign a message, with a given (secret) key
+- `vanity`: generate a seed that provides a vanity address
+- `verify`: verify a signature for a message, provided on STDIN, with a given (public or secret) key
+
+## Reference
+
+- [Astar Github](https://github.com/AstarNetwork/Astar)
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/truffle.md b/docs/build/build-on-layer-1/builder-guides/astar_features/truffle.md
new file mode 100644
index 0000000..07d5d3e
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/astar_features/truffle.md
@@ -0,0 +1,158 @@
+---
+sidebar_position: 4
+---
+
+# How to use Truffle to deploy on Shibuya
+
+## TL;DR
+
+This cookbook gives you a basic idea of how to use Truffle and deploy a simple test smart contract on our Shibuya testnet.
+
+## What is Truffle?
+
+Truffle is a popular development framework for Ethereum-based blockchains. It offers a suite of tools that makes it easier to develop and deploy smart contracts on EVM (Ethereum Virtual Machine) blockchains. Some of the key features of Truffle include:
+
+- A suite of development and testing tools, including a code compiler, a testing framework, and a debugger.
+- Support for popular programming languages, including Solidity and JavaScript.
+- Integration with popular Ethereum wallets, such as MetaMask and Ledger.
+- Automated contract deployment and management tools.
+
+Overall, Truffle is designed to make it easier for developers to build and deploy decentralized applications (dApps) on EVM blockchains.
+
+## Builders Guide
+### Step 1: Project Setup
+
+Let’s set up a project folder first. Create a project directory and navigate into it:
+
+```bash
+mkdir truffleApp
+cd truffleApp
+```
+
+If you haven’t installed Truffle yet, please do so by running the command below:
+
+```bash
+npm install -g truffle
+```
+
+We initialize Truffle to create our project:
+
+```bash
+truffle init
+```
+
+Now we should see something like the below, confirming the project is initialized:
+
+
+![1](img-Truffle-cookbook/1.png)
+
+
+Make sure you install HDWalletProvider, which we will use later:
+
+```bash
+npm install @truffle/hdwallet-provider --save
+```
+
+---
+
+### Step 2: Start Coding
+
+Now we should see the following file structure:
+
+
+![2](img-Truffle-cookbook/2.png)
+
+
+From here, we create a smart contract file called **HelloShibuya.sol** inside the **contracts** directory:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.15;
+
+contract HelloShibuya {
+    string public greet = "Hello Shibuya!";
+}
+```
+
+We need to add a migration file called **1_deploy_contract.js** inside the **migrations** directory:
+
+```js
+var HelloShibuya = artifacts.require("HelloShibuya");
+
+module.exports = function (deployer) {
+ deployer.deploy(HelloShibuya);
+};
+```
+
+---
+
+### Step 3: Configure Settings
+
+Now we add the information for the Shibuya testnet in **truffle-config.js**.
+For the endpoint, use one of the Shibuya endpoints listed [here](/docs/build/build-on-layer-1/environment/endpoints.md).
+
+```js
+require('dotenv').config();
+const mnemonic = process.env.MNEMONIC;
+const HDWalletProvider = require('@truffle/hdwallet-provider');
+
+module.exports = {
+
+  networks: {
+    shibuya: {
+      provider: () => new HDWalletProvider(mnemonic, `https://shibuya.public.blastapi.io`),
+      network_id: 81,
+      confirmations: 10,
+      timeoutBlocks: 200,
+      skipDryRun: true,
+      from: "0x(your Shibuya account address)"
+    },
+  },
+};
+```
+
+Be aware that we need to declare the mnemonic used by **HDWalletProvider** in the **truffle-config.js** file to verify the account supplying funds during contract deployment. To set the mnemonic variable, install dotenv and add it as an environment variable in a **.env** file in the root directory.
+
+```bash
+npm i dotenv
+```
+
+```bash
+MNEMONIC="(Your secret recovery phrase)"
+```
+
+We can find the secret recovery phrase for our account in MetaMask under **Settings** > **Security & Privacy** > **Reveal Secret Recovery Phrase**.
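+A missing or malformed mnemonic only fails later, at deploy time, with a cryptic provider error, so a small sanity check near the top of **truffle-config.js** can save time. The helper below is our own suggestion, not part of Truffle; it only checks the word count (BIP-39 recovery phrases are normally 12 or 24 words).
+
+```javascript
+// Hypothetical sanity check: BIP-39 phrases are usually 12 or 24 words.
+function looksLikeMnemonic(phrase) {
+  if (typeof phrase !== "string") return false;
+  const words = phrase.trim().split(/\s+/);
+  return words.length === 12 || words.length === 24;
+}
+
+console.log(looksLikeMnemonic("one two three four five six seven eight nine ten eleven twelve")); // true
+console.log(looksLikeMnemonic("too short")); // false
+```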
+
+---
+
+### Step 4: Deployment
+
+Finally, we have everything ready and can compile the smart contract we made:
+
+```bash
+truffle compile
+```
+
+Then we deploy it to the Shibuya testnet:
+
+```bash
+truffle migrate --network shibuya
+```
+
+You should see something like the below, confirming our smart contract is deployed on the Shibuya testnet.
+
+
+![3](img-Truffle-cookbook/3.png)
+
+We can also confirm this by looking at the [Subscan](https://shibuya.subscan.io/) explorer.
+
+
+![4](img-Truffle-cookbook/4.png)
+
+
+If you have any questions, please feel free to ask us in our [official discord channel](https://discord.gg/GhTvWxsF6S).
+
+---
+
+## Reference
+
+- [Official Document for Truffle](https://trufflesuite.com/docs/)
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/use_hardhat.md b/docs/build/build-on-layer-1/builder-guides/astar_features/use_hardhat.md
new file mode 100644
index 0000000..fa0cb0a
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/astar_features/use_hardhat.md
@@ -0,0 +1,169 @@
+---
+sidebar_position: 3
+---
+
+# How to use Hardhat to deploy on Shibuya
+
+## TL;DR
+
+Hardhat is an Ethereum (EVM) development environment that helps developers manage and automate the recurring tasks inherent to building smart contracts and dApps, most importantly compiling and testing.
+
+Since Astar Network is a multi-VM smart contract hub supporting both WASM and EVM smart contracts, you can use Ethereum dev tools, including Hardhat, to interact directly with Astar EVM’s API and deploy Solidity smart contracts on Astar.
+
+In this cookbook, we will guide you on how to set up the environment for Astar EVM development, how to create and configure a Hardhat project for Astar EVM, and how to deploy a simple Solidity smart contract on Astar EVM via Hardhat.
+
+---
+
+## What is Astar EVM?
+
+As a multi-VM smart contract hub, Astar Network supports both WASM and EVM smart contracts, which means both Solidity smart contracts and WASM-based smart contracts can be deployed on Astar Network.
+
+For Solidity developers, this means Ethereum dev tools such as Hardhat, Remix, and MetaMask can be used to interact directly with Astar EVM’s API and deploy Solidity smart contracts on Astar EVM.
+
+## What is Hardhat?
+
+Hardhat is a development environment that helps developers test, compile, deploy, and debug smart contracts and dApps on the Ethereum Virtual Machine (EVM). It offers suitable tools for managing the development workflow and identifying why an application fails.
+
+---
+## Set up Node.js environment for Hardhat
+Hardhat is built on top of Node.js, the JavaScript runtime built on Chrome's V8 JavaScript engine. As the first step in setting up Hardhat, we need to set up a Node.js environment.
+
+---
+## Create a Hardhat project
+
+First, let’s create a directory for this tutorial with the following commands.
+
+```bash
+mkdir hardhat_test
+cd hardhat_test
+```
+
+
+Then, let’s initialize the npm environment.
+
+```bash
+npm init -y
+```
+
+After running the command above, you will see a return message like the following.
+
+```
+Wrote to /Users/suguruseo/Documents/Astar Work/hardhat/test-hardhat/hardhat_test/package.json:
+
+{
+ "name": "hardhat_test",
+ "version": "1.0.0",
+ "description": "",
+ "main": "index.js",
+ "scripts": {
+ "test": "echo \"Error: no test specified\" && exit 1"
+ },
+ "keywords": [],
+ "author": "",
+ "license": "ISC"
+}
+```
+
+Finally, we install Hardhat and initialize a Hardhat project:
+
+```bash
+npm install hardhat
+npx hardhat
+```
+
+Running `npx hardhat` with no existing config launches Hardhat's interactive project setup.
+
+![1](img/1.png)
+
+
+---
+### Set up a private key
+
+In this section, we will set up a private key.
+
+```bash
+touch private.json
+vim private.json
+```
+
+Then we add a private key used for test deployments only. PLEASE DO NOT USE THIS KEY IN PRODUCTION.
+
+```
+{"privateKey":"0xde9be858da4a475276426320d5e9262ecfc3ba460bfac56360bfa6c4c28b4ee0"}
+```
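
As a quick safety net, you can sanity-check that the key you pasted is well-formed before Hardhat tries to use it. This is an illustrative snippet, not part of Hardhat; in a real project you would `require('./private.json')` instead of inlining the key:

```js
// Check that a private key is 0x-prefixed and exactly 64 hex characters.
function isWellFormedPrivateKey(key) {
  return /^0x[0-9a-fA-F]{64}$/.test(key);
}

// In the project you would do: const { privateKey } = require('./private.json');
const privateKey =
  "0xde9be858da4a475276426320d5e9262ecfc3ba460bfac56360bfa6c4c28b4ee0";
console.log(isWellFormedPrivateKey(privateKey)); // → true
```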
+
+---
+### Add Shibuya Network details to Hardhat project configuration file
+
+Now, we need to add the network settings to the `hardhat.config.js` file, as shown below.
+
+```jsx
+require("@nomicfoundation/hardhat-toolbox");
+
+task("accounts", "Prints the list of accounts", async () => {
+ const accounts = await ethers.getSigners();
+
+ for (const account of accounts) {
+ console.log(account.address);
+ }
+});
+
+const { privateKey } = require('./private.json');
+
+/** @type import('hardhat/config').HardhatUserConfig */
+module.exports = {
+ solidity: "0.8.17",
+ networks: {
+ localhost: {
+ url:"http://localhost:8545",
+ chainId:31337,
+ accounts: [privateKey],
+ },
+ shibuya: {
+ url:"https://evm.shibuya.astar.network",
+ chainId:81,
+ accounts: [privateKey],
+ }
+ }
+};
+```
+
+---
+### Add Shibuya testnet to MetaMask
+
+Now, we can manually add the Shibuya testnet to MetaMask, as shown below.
+
+![2](img/2.png)
+
+---
+### Claim Shibuya testnet tokens from the Discord faucet
+
+Now, we need testnet tokens to pay the gas fees for deploying our smart contract.
+We can claim them from the Shibuya faucet on our [Discord](https://discord.gg/astarnetwork), as shown below.
+
+We need to type a command like the following, substituting our own Shibuya address.
+
+```
+/drip network: Your Shibuya Address
+```
+
+![3](img/3.png)
+
+We can confirm that we have received some Shibuya (SBY) tokens.
+
+![4](img/4.png)
+
+---
+### Deploy the smart contract on Shibuya
+
+Finally, we deploy our smart contract by running the command below.
+
+```bash
+npx hardhat run --network shibuya scripts/sample-script.js
+```
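
The `scripts/sample-script.js` used above comes from the Hardhat sample project and is not shown in this guide. A minimal sketch of such a deploy script is below — it assumes the sample `Greeter` contract that `npx hardhat` scaffolds, and uses ethers v5 syntax (newer toolbox versions use `waitForDeployment()` instead of `deployed()`); the `try`/`catch` guard simply lets the file load outside a Hardhat project:

```js
// scripts/sample-script.js — minimal deploy script sketch.
let hre;
try {
  hre = require("hardhat"); // Hardhat runtime environment
} catch (e) {
  hre = null; // not running inside a Hardhat project
}

async function main() {
  // Assumes a contract named "Greeter" exists under contracts/.
  const Greeter = await hre.ethers.getContractFactory("Greeter");
  const greeter = await Greeter.deploy("Hello, Shibuya!");
  await greeter.deployed();
  console.log("Greeter deployed to:", greeter.address);
}

// Only run the deployment when executed via `npx hardhat run`.
if (hre) {
  main().catch((error) => {
    console.error(error);
    process.exitCode = 1;
  });
}
```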
+
+That’s it! The smart contract has been deployed successfully.
+
+![5](img/5.png)
+
+You can also confirm that the contract was deployed successfully by checking [Blockscout](https://blockscout.com/shibuya/).
+
+![6](img/6.png)
+
+Happy Hacking!
+
+---
+## Reference
+
+- Official Document for Hardhat: [https://hardhat.org/hardhat-runner/docs/getting-started#overview](https://hardhat.org/hardhat-runner/docs/getting-started#overview)
diff --git a/docs/build/build-on-layer-1/builder-guides/astar_features/use_remix.md b/docs/build/build-on-layer-1/builder-guides/astar_features/use_remix.md
new file mode 100644
index 0000000..1650cb0
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/astar_features/use_remix.md
@@ -0,0 +1,135 @@
+# How to use Remix IDE to deploy an on-chain storage contract on Astar EVM
+
+## TL;DR
+
+Remix is a powerful open-source toolset for developing, deploying, debugging, and testing EVM-compatible smart contracts. Remix IDE is part of the Remix Project, which includes the Remix Plugin Engine and Remix Libraries: low-level tools for wider use.
+
+Since Astar Network is a multi-VM smart contract hub, we support both WASM and EVM, which means you can use Ethereum dev tools, including Remix, to interact with Astar EVM’s API directly and deploy Solidity-based smart contracts on Astar EVM.
+
+In this cookbook, we will guide you through creating a Solidity-based on-chain storage smart contract with the Remix Online IDE, compiling and deploying the contract to the Shibuya testnet, and interacting with the contract to write a value to the blockchain and retrieve it.
+
+---
+
+## What is Astar EVM?
+
+As a multi-VM smart contract hub, Astar Network supports both WASM and EVM, which means Solidity smart contracts and WASM-based smart contracts can be deployed on Astar Network.
+
+Solidity developers can use Ethereum dev tools, including Hardhat, Remix, and MetaMask, to interact directly with Astar EVM’s API and deploy Solidity smart contracts on Astar EVM.
+
+## What is Remix?
+
+Remix is a powerful open-source toolset for developing, deploying, debugging, and testing EVM-compatible smart contracts. Remix IDE is part of the Remix Project, which includes the Remix Plugin Engine and Remix Libraries: low-level tools for wider use.
+
+---
+
+## Create a Solidity contract with Remix IDE
+
+- Visit [https://remix.ethereum.org/](https://remix.ethereum.org/) for online Remix IDE
+ - or install the Remix IDE Desktop from [https://github.com/ethereum/remix-desktop/releases](https://github.com/ethereum/remix-desktop/releases).
+- Create a new workspace by clicking the “+” beside “Workspace” and use the “Blank” template.
+
+![Untitled](img-Remix-cookbook/Untitled.png)
+
+- Add a new file named “storage.sol” with the contract code provided below. This is a simple example contract with two methods, `store()` and `retrieve()`, for writing and reading a value in a variable deployed on-chain.
+
+ ```
+ // SPDX-License-Identifier: GPL-3.0
+
+ pragma solidity >=0.7.0 <0.9.0;
+
+ /**
+ * @title Storage
+ * @dev Store & retrieve value in a variable
+ * @custom:dev-run-script ./scripts/deploy_with_ethers.ts
+ */
+ contract Storage {
+
+ uint256 number;
+
+ /**
+ * @dev Store value in variable
+ * @param num value to store
+ */
+ function store(uint256 num) public {
+ number = num;
+ }
+
+ /**
+ * @dev Return value
+ * @return value of 'number'
+ */
+ function retrieve() public view returns (uint256){
+ return number;
+ }
+ }
+ ```
+
+
+![Untitled](img-Remix-cookbook/Untitled%201.png)
+
+---
+
+## Compile the Solidity contract for deployment
+
+Before smart contracts can be deployed, the Solidity code must be compiled to bytecode for the EVM (Ethereum Virtual Machine), which is what eventually gets deployed on the blockchain. The compilation also generates an ABI (Application Binary Interface), which describes the contract's functions so that clients know how to encode calls to them and decode their results.
+
+- Clicking the Solidity icon in the icon panel brings you to the Solidity Compiler.
+
+![Untitled](img-Remix-cookbook/Untitled%202.png)
+
+- Compile our Solidity storage contract by clicking “Compile storage.sol”.
+
+![Untitled](img-Remix-cookbook/Untitled%203.png)
+
+- After the compilation, you will be able to check the contract ABI and bytecode in the “ABI” and “Bytecode” sections under the “Compilation Details”. You will also find the “Storage.json” file in your workspace, which may be useful for your contract verification on block explorers.
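
For this Storage contract, the generated ABI boils down to two function entries. A trimmed-down sketch of what the compiler emits:

```js
// Abbreviated ABI entries for the Storage contract's two methods.
const storageAbi = [
  {
    inputs: [{ internalType: "uint256", name: "num", type: "uint256" }],
    name: "store",
    outputs: [],
    stateMutability: "nonpayable",
    type: "function",
  },
  {
    inputs: [],
    name: "retrieve",
    outputs: [{ internalType: "uint256", name: "", type: "uint256" }],
    stateMutability: "view",
    type: "function",
  },
];

console.log(storageAbi.map((entry) => entry.name)); // → [ 'store', 'retrieve' ]
```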
+
+---
+
+## Deploy the Solidity contract to Shibuya testnet
+
+- Before the deployment on the Shibuya testnet, which is the testnet of Astar Network and Shiden Network:
+ - Please ensure that you have added Shibuya Network to your MetaMask wallet with the following configuration [https://docs.astar.network/docs/environment/endpoints/](https://docs.astar.network/docs/environment/endpoints/).
+ - Network name: Shibuya Network
+ - New RPC URL: [https://evm.shibuya.astar.network](https://evm.shibuya.astar.network/)
+ - Chain ID: 81
+ - Please claim SBY testnet tokens from the Shibuya faucet following the guide here [https://docs.astar.network/docs/environment/faucet/](https://docs.astar.network/docs/environment/faucet/)
+- Click the EVM icon on the left sidebar (the fourth icon) and visit the “DEPLOY & RUN TRANSACTIONS” page.
+- Switch the “ENVIRONMENT” to “Injected Provider - MetaMask” and ensure you have the right wallet address connected in MetaMask.
+
+![Untitled](img-Remix-cookbook/Untitled%204.png)
+
+- Click “Deploy” and confirm the transaction in your MetaMask.
+
+![Untitled](img-Remix-cookbook/Untitled%205.png)
+
+- Now, your first Solidity contract on the Shibuya testnet is deployed! Please feel free to copy the deployed contract address from “Deployed Contracts” and view it in the block explorer. BlockScout for Shibuya: [https://blockscout.com/shibuya](https://blockscout.com/shibuya)
+
+![Untitled](img-Remix-cookbook/Untitled%206.png)
+
+---
+
+## Interact with the deployed Solidity contract via Remix
+
+You will also be able to interact with the contract that you just deployed on Shibuya via Remix IDE.
+
+- Scroll down the contract details under the “Deployed Contracts” section on the “DEPLOY & RUN TRANSACTIONS” page.
+
+![Untitled](img-Remix-cookbook/Untitled%207.png)
+
+- You will be able to call the methods in the deployed contract.
+ - `store()`: to store a value in a variable deployed on-chain.
+ - `retrieve()`: read-only, to retrieve a value in a variable deployed on-chain.
+
+![Untitled](img-Remix-cookbook/Untitled%208.png)
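
Under the hood, Remix turns a `store(42)` button click into ABI-encoded calldata: a 4-byte function selector followed by the argument left-padded to 32 bytes. A rough sketch — the `0x6057361d` selector is assumed here to be the first 4 bytes of `keccak256("store(uint256)")`; verify it with your own tooling:

```js
// Build the calldata sent when you call store(42) on the Storage contract.
function encodeStoreCall(num) {
  const selector = "6057361d"; // assumed selector for store(uint256); verify with keccak256
  const arg = num.toString(16).padStart(64, "0"); // uint256 is left-padded to 32 bytes
  return "0x" + selector + arg;
}

console.log(encodeStoreCall(42));
```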
+
+---
+
+## FAQ
+
+Please feel free to join our Discord [here](https://discord.gg/AstarNetwork) for technical support.
+
+## Reference
+
+[https://remix-ide.readthedocs.io/en/latest/index.html](https://remix-ide.readthedocs.io/en/latest/index.html)
+
+[https://docs.astar.network/](https://docs.astar.network/)
diff --git a/docs/build/build-on-layer-1/builder-guides/hacking/_category_.json b/docs/build/build-on-layer-1/builder-guides/hacking/_category_.json
new file mode 100644
index 0000000..99275dc
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/hacking/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Astar Hacker Guide",
+ "position": 12
+}
diff --git a/docs/build/build-on-layer-1/builder-guides/hacking/general.md b/docs/build/build-on-layer-1/builder-guides/hacking/general.md
new file mode 100644
index 0000000..b89decb
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/hacking/general.md
@@ -0,0 +1,43 @@
+---
+sidebar_position: 1
+---
+
+import Figure from '/src/components/figure'
+
+# General
+
+
+
+Please read the linked chapter and try to answer the questions. If you can't find the answer, go to the [Astar Discord server](https://discord.gg/invite/AstarNetwork) and ask the question in the general channel under the Developer Support category.
+
+## Introduction
+### Polkadot Relay [Chapter](/docs/build/build-on-layer-1/introduction/polkadot_relay.md)
+* What would be the most valuable features that a Relay Chain provides to all connected parachains?
+* Is Kusama a parachain?
+* Is Astar a parachain on the Polkadot Relay Chain?
+* Does Astar use Substrate pallets as building blocks?
+* What is the pallet/module name which enables execution of Wasm smart contracts in a Substrate node?
+
+
+### Interact with the Node [Chapter](/docs/build/build-on-layer-1/introduction/node_interact.md)
+Connect to Astar Network using the Polkadot-JS UI:
+* How many blocks has Astar Network produced so far?
+* What is the value set for the constant called `blocksPerEra` in the `dappsStaking` pallet?
+
+### Accounts [Chapter](/docs/build/build-on-layer-1/introduction/create_account.md)
+* Did you safely store your seed phrase? How many seed words were used to create your key?
+* Go to the Subscan [Account Format Transfer](https://astar.subscan.io/tools/format_transform) tool and input your account to check which prefix number is used on Astar and Shiden.
+* Can you share your public key?
+* Please note that you can use this account on Polkadot, Kusama and all parachains, but the address format will differ depending on the network prefix used. Connect to Polkadot and then to Astar Network in the Polkadot-JS UI to observe how the address changes for the same account.
+
+### Astar Network Family [Chapter](/docs/build/build-on-layer-1/introduction/astar_family.md)
+* Is Shiden a parachain on Kusama Relay Chain?
+* Is Shiden a parachain on Astar Relay Chain?
+* Is Kusama a parachain on Polkadot?
+* Using the native account you created in the Accounts chapter, go to the [Astar portal](https://portal.astar.network/), connect to the Shibuya testnet and claim faucet tokens. You will later need these tokens to pay the gas fee and deploy contracts on Shibuya.
+* Where can you sell SBY tokens? What is the value of SBY (the Shibuya network token)?
+* Can you test cross chain messaging with Zombienet?
+
+## Setup Your Environment
+### RPC Endpoints [Chapter](/docs/build/build-on-layer-1/environment/endpoints.md)
+* Check RPC address which you will use to connect front-end to your smart contract on Shibuya network.
diff --git a/docs/build/build-on-layer-1/builder-guides/hacking/hack_evm.md b/docs/build/build-on-layer-1/builder-guides/hacking/hack_evm.md
new file mode 100644
index 0000000..5197ab7
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/hacking/hack_evm.md
@@ -0,0 +1,35 @@
+---
+sidebar_position: 3
+---
+
+import Figure from '/src/components/figure'
+
+# Hack EVM Smart Contracts
+
+
+
+Read the linked chapters or use the tutorials to be able to answer the following questions:
+
+## Setup MetaMask and Remix
+* Did you install and connect your MetaMask to Shibuya? Which `Chain Id` did you use to set up Shibuya Network in MetaMask?
+* Connect to browser IDE Remix using this [tutorial](/docs/build/build-on-layer-1/builder-guides/astar_features/use_remix.md)
+* Does your environment look like this:
+
+
+
+* Can you explain what `Custom (81) network` means?
+* Deploy and test smart contract from Remix tutorial.
+* What is the contract's address?
+* Can you find ABI for the contract?
+
+## Start using Solidity
+
+
+Since its inception, Solidity has become the mainstream language for smart contract development. There are many good tutorials for learning Solidity; one popular starting point is [Crypto Zombies](https://cryptozombies.io/).
+
+## Setup Hardhat and Truffle
+Truffle and Hardhat are the preferred tools for developing, deploying, and testing smart contracts. For this guide, we will use Hardhat.
+Set up your Hardhat environment using [How to use Hardhat to deploy on Shibuya](/docs/build/build-on-layer-1/builder-guides/astar_features/use_hardhat.md).
+
+
+What is [next](/docs/build/build-on-layer-1/builder-guides/hacking/next.md)?
diff --git a/docs/build/build-on-layer-1/builder-guides/hacking/hack_wasm.md b/docs/build/build-on-layer-1/builder-guides/hacking/hack_wasm.md
new file mode 100644
index 0000000..1861c8e
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/hacking/hack_wasm.md
@@ -0,0 +1,43 @@
+---
+sidebar_position: 2
+---
+
+import Figure from '/src/components/figure'
+
+# Hack Wasm Smart Contracts
+
+
+
+Read the linked chapters or use the tutorials to be able to answer the following questions:
+
+## Setup ink! Environment [Chapter](/docs/build/build-on-layer-1/environment/ink_environment.md)
+
+* Which cargo version are you using?
+* Run `rustup show` command.
+* Run `cargo contract -V`. Is your cargo contract version higher than 1.5.0?
+* Which Rust toolchain do you need to use to be able to compile ink! smart contracts (nightly or stable)? How do you manage this choice?
+
+## Test Tokens [Chapter](/docs/build/build-on-layer-1/environment/faucet.md)
+* Did you claim Shibuya tokens? How many SBY tokens did the faucet provide to you?
+* Can you unit test an ink! smart contract without running a test node like Swanky-node?
+
+## Run [Swanky](https://github.com/AstarNetwork/swanky-node) Node
+* Start your Swanky node and connect the [polkadot-JS UI](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/explorer) to it. Please note that the Swanky node produces no blocks unless there is interaction with it.
+
+:::note
+
+Please note that the current version of polkadot-JS is broken for contracts because of [lack of support for Weights V2](https://github.com/polkadot-js/apps/issues/8364). Until that gets resolved please use our [custom built polkadot-JS UI](https://polkadotjs-apps.web.app/#/explorer) or [Contracts-UI](https://contracts-ui.substrate.io/).
+
+:::
+
+## From Zero to ink Hero [Tutorials](/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/flipper.md)
+Depending on your confidence, use any of these tutorials. If you are just starting, the Flipper contract is the way to go.
+* Your task is to deploy the contract from the tutorial to Shibuya Network.
+ * After you build the contract, note where the `.contract` and `metadata.json` files are created.
+ * Deploy Contract using [Contracts-UI](https://contracts-ui.substrate.io/).
+ * What is the contract address?
+ * Do you have any method that requires payment to be executed?
+ * Use the Polkadot-JS UI to load the same contract you just deployed using Contracts-UI.
+
+
+What is [next](/docs/build/build-on-layer-1/builder-guides/hacking/next.md)?
diff --git a/docs/build/build-on-layer-1/builder-guides/hacking/img/custom_net.png b/docs/build/build-on-layer-1/builder-guides/hacking/img/custom_net.png
new file mode 100644
index 0000000..c34ebee
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/hacking/img/custom_net.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/hacking/img/zombie.png b/docs/build/build-on-layer-1/builder-guides/hacking/img/zombie.png
new file mode 100644
index 0000000..de3a580
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/hacking/img/zombie.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/hacking/index.md b/docs/build/build-on-layer-1/builder-guides/hacking/index.md
new file mode 100644
index 0000000..6c1d96f
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/hacking/index.md
@@ -0,0 +1,27 @@
+# Astar Hacker Guide
+
+
+
+Welcome to web3!
+
+## What is this Guide all about?
+This guide will navigate you through the Astar Documentation to jump start your development journey. Although the subject material is complex, we are here to support you along the way!
+
+The Astar Hacker Guide can be used for:
+* General Dev onboarding on Astar.
+* Preparing participants for Astar centric hackathons.
+* Onboarding new team members for teams building on Astar.
+* Onboarding new Astar team members.
+
+
+## Your Developer Background
+To follow this Hacker Guide you should have a basic understanding of programming. The programming languages used throughout this guide are Rust, Solidity and JavaScript. Previous knowledge of them is not mandatory, but it will help to have some basics under your hacker's belt.
+
+## How to use it
+This guide is divided into two tracks, plus the mandatory General chapter.
+
+0. ***General*** - must-know basics: nodes, interactions, accounts, environment
+1. ***Wasm smart contracts*** - build, deploy, test, interact
+2. ***EVM smart contracts*** - build, deploy, test and interact
+
+Each section has assignments and questions which will help you to understand what is expected from you.
diff --git a/docs/build/build-on-layer-1/builder-guides/hacking/next.md b/docs/build/build-on-layer-1/builder-guides/hacking/next.md
new file mode 100644
index 0000000..80b26d7
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/hacking/next.md
@@ -0,0 +1,57 @@
+---
+sidebar_position: 4
+---
+
+
+# Next step
+Here comes the fun part: time to build your own dApp.
+Below you will find a list of ideas to implement in either smart contract environment.
+
+
+## Enter Community
+
+- Join Astar Discord and Post GM.
+- Follow Twitter.
+- Create an account on Stack Exchange.
+- Create an account, Post Hi on Forum (A thread just for this purpose).
+- Subscribe to Astar Newsletter.
+
+## Ideas to build
+These ideas can be implemented as Wasm or EVM smart contracts, though they are mainly intended for ink! developers.
+
+### Pool Together
+Explore this [project](https://app.pooltogether.com/) and build your own version in ink!
+
+### Voting
+Use ink! v4 with Swanky-node to develop a smart contract which allows people to vote. The rules are:
+
+* Contract owner initializes a set of candidates (2-10).
+* Lets anyone vote for the candidates.
+* Each voter is limited to only one vote (per address).
+* Displays the vote totals received by each candidate.
+
+### Tamagotchi
+Use ink! v4 with Swanky-node to create a virtual pet smart contract, allowing users to create, interact with, and trade virtual pets securely and transparently on the blockchain.
+* Create Tamagotchi: The smart contract should allow users to create a Tamagotchi object with attributes such as hunger, happiness, and energy levels.
+
+* Interact with Tamagotchi: Users should be able to interact with the Tamagotchi object by calling functions to modify its attributes, such as "feed", "play", and "rest".
+* Implement Rules: The smart contract should enforce rules and restrictions to prevent users from overfeeding, neglecting, or exploiting the Tamagotchi object.
+* Track Lifespan: The smart contract should track the Tamagotchi object's lifespan and trigger events such as death, rebirth, or evolution based on its age, level, and behavior.
+* Support Multiple Tamagotchis: The smart contract should support multiple Tamagotchi objects, each with its own set of attributes and state, and allow users to own, trade, or exchange them.
+
+### Charity Raffle
+
+Use ink! v4 with Swanky-node to develop a smart contract which allows people to enter a charity raffle. The rules are:
+
+* A user can send in anywhere between 0.01 and 0.1 tokens.
+* A user can only play once.
+* A user is added to the pool with one submission, regardless of the amount of money that was sent in.
+* There can be a maximum of 2 winners.
+* The raffle must go on for 15 minutes before a draw can be initiated, but the 15 minute countdown only starts once there are at least 5 players in the pool.
+* Anyone can call the draw function at any time, but it will only draw a winner when the 15 minute timer has expired.
+* The draw function can be called twice for a maximum of two winners.
+* The winners get nothing (it’s a raffle for a real-world item, like a t-shirt, so ignore the on-chain effects of a win), but they need to be clearly exposed by the contract, i.e. the list of winners has to be a public value that dapps and users can read from the contract.
+* The collected money from the pot is automatically sent to a pre-defined address when the second winner is drawn.
+
+Happy coding!
+
diff --git a/docs/build/build-on-layer-1/builder-guides/index.md b/docs/build/build-on-layer-1/builder-guides/index.md
new file mode 100644
index 0000000..f93ba0d
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/index.md
@@ -0,0 +1,16 @@
+---
+title: Guides
+---
+
+import Figure from '/src/components/figure'
+
+# Builder Guides
+
+
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/_category_.json b/docs/build/build-on-layer-1/builder-guides/integration_toolings/_category_.json
new file mode 100644
index 0000000..8dc8bfd
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/integration_toolings/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Integration and Toolings",
+ "position": 1
+}
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/add-wallets-to-portal.md b/docs/build/build-on-layer-1/builder-guides/integration_toolings/add-wallets-to-portal.md
new file mode 100644
index 0000000..e58f94b
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/integration_toolings/add-wallets-to-portal.md
@@ -0,0 +1,96 @@
+---
+sidebar_position: 2
+---
+
+# Add wallets into Astar Portal
+
+## Overview
+
+Users can connect to Astar portal using both EVM and Substrate based wallets. Below are the basic steps and important links needed to integrate a new wallet into the [Astar portal](https://portal.astar.network/astar/assets).
+
+![22](img/22.png)
+
+## How to integrate
+
+Developers can create a PR against [our portal repository](https://github.com/AstarNetwork/astar-apps) to add a new wallet to the portal. We'll walk you through the steps below.
+
+### Define the wallet variables
+
+The extension name `enum` value comes from:
+
+```js
+const extensions = await getInjectedExtensions();
+console.log('extensions', extensions); // the name is at extensions[index].name
+```
+
+1. Add the `extension name` at the [SupportWallet](https://github.com/AstarNetwork/astar-apps/blob/ecb067e9683eb5224fac96c5bf9fa9ce4c123a7d/src/config/wallets.ts#L8) enum.
+2. Add the `SupportWallet.[new_value]` to the [WalletModalOption](https://github.com/AstarNetwork/astar-apps/blob/ecb067e9683eb5224fac96c5bf9fa9ce4c123a7d/src/config/wallets.ts#L23) array.
+3. Add the `SupportWallet.[new_value]` to the [SubstrateWallets](https://github.com/AstarNetwork/astar-apps/blob/ecb067e9683eb5224fac96c5bf9fa9ce4c123a7d/src/config/wallets.ts#L48) array only if it is a Substrate wallet.
+
+### Add wallet information
+
+For Substrate wallets, add information to the [supportWalletObj](https://github.com/AstarNetwork/astar-apps/blob/ecb067e9683eb5224fac96c5bf9fa9ce4c123a7d/src/config/wallets.ts#L64) object variable.
+
+e.g.
+
+```js
+export const supportWalletObj = {
+ [SupportWallet.TalismanNative]: {
+ img: require('/src/assets/img/logo-talisman.svg'),
+ name: 'Talisman (Native)',
+ source: SupportWallet.TalismanNative,
+ walletUrl: 'https://app.talisman.xyz/',
+ guideUrl: 'https://app.talisman.xyz/',
+ isSupportBrowserExtension: true,
+ isSupportMobileApp: false,
+ },
+};
+```
+
+For Ethereum wallets, add information to the [supportEvmWalletObj](https://github.com/AstarNetwork/astar-apps/blob/ecb067e9683eb5224fac96c5bf9fa9ce4c123a7d/src/config/wallets.ts#L130) object variable.
+
+e.g.
+
+```js
+export const supportEvmWalletObj = {
+ [SupportWallet.TalismanEvm]: {
+ img: require('/src/assets/img/logo-talisman.svg'),
+ name: 'Talisman (EVM)',
+ source: SupportWallet.TalismanEvm,
+ walletUrl: 'https://app.talisman.xyz/',
+ guideUrl: 'https://app.talisman.xyz/',
+ isSupportBrowserExtension: true,
+ isSupportMobileApp: false,
+ ethExtension: 'talismanEth',
+ },
+};
+```
+
+### Add a visual asset representing your wallet
+
+Add a small `.svg` or `.png` to the [assets](https://github.com/AstarNetwork/astar-apps/tree/main/src/assets/img) directory.
+
+## Requirements for creating a PR
+
+1. Developers must test sending transactions from our portal. Perform basic tests using the guide below:
+
+ 1. Substrate wallets (such as [Polkadot.js](https://polkadot.js.org/))
+ 1. Native token transfer
+ 2. XCM assets transfer
+ 3. XCM transfer
+ 1. Deposit
+ 2. Withdrawal
+ 4. dApp staking transfer
+ 1. Stake
+ 2. Withdrawal
+ 3. Nomination transfer
+ 2. EVM wallets (such as [MetaMask](https://metamask.io/))
+ 1. Native token transfer
+ 2. ERC20 token transfer
+ 3. XC20(XCM assets) token transfer
+ 4. XCM transfer
+ 1. Withdrawal
+
+2. Submit the [Subscan](https://astar.subscan.io/) or [Blockscout](https://blockscout.com/astar/) links (both Astar and Shiden networks) for transaction details of the items listed above.
+3. Submit screen recordings of the connect, transaction, and account-switch interactions.
+4. Deploy the forked app and submit the staging URL ([ref](../integration_toolings/deploy-astar-portal.md)).
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/cookbook_1.md b/docs/build/build-on-layer-1/builder-guides/integration_toolings/cookbook_1.md
new file mode 100644
index 0000000..e4618b6
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/integration_toolings/cookbook_1.md
@@ -0,0 +1,241 @@
+---
+sidebar_position: 1
+---
+
+# Analyzing on-chain data using Covalent API + Python
+
+## TL;DR
+
+This cookbook will go through how to extract and analyze on-chain data from Astar Network using Python and the Covalent API. It is especially useful for non-devs who are not familiar with setting up indexers to query on-chain data. All steps can be done completely free, without using a terminal or setting up a local development environment.
+
+## What is Covalent?
+
+Covalent leverages big-data technologies to create meaning from hundreds of billions of data points, delivering actionable insights to investors and allowing developers to allocate resources to higher-utility goals within their organization. Instead of painstakingly sourcing data from a small handful of chains, Covalent aggregates information from across dozens of sources including nodes, chains, and data feeds. The Covalent API then provides end users with individualized data by wallet, including current and historical investment performance across all types of digital assets. Most importantly, Covalent returns this data in a rapid and consistent manner, incorporating all relevant data within one API interface.
+
+## Analyzing ArthSwap pool balance
+
+As an example in this cookbook, we will analyze the change in the balance of the ceUSDC/ceUSDT pool on ArthSwap. We will be using Python; for non-devs who are not familiar with setting up a local environment to run Python, we recommend using Jupyter Notebook.
+
+Make sure to sign up for Covalent to get the API key needed to run the code. (You can register [here](https://www.covalenthq.com/))
+
+### Step 1: Extract data
+
+Before we do any data transformation and analytics, our first step is to get the historical portfolio data of the ceUSDC/ceUSDT pool contract. To get this information, we need to send the following request (see the reference section of this cookbook for more info on the API format):
+
+```
+GET /v1/{chain_id}/address/{address}/portfolio_v2/
+```
+
+In this request, the parameter `chain_id` is the chain ID of the blockchain being queried. In this cookbook, we will use `chain_id = 592` (Astar Network) and the ceUSDC/ceUSDT pool contract address `0xD72A602C714ae36D990dc835eA5F96Ef87657D5e` as an example. The following code uses Python to extract the data.
+
+```python
+import requests
+
+API_KEY = 'YOUR_API_KEY'  # replace with your Covalent API key
+base_url = 'https://api.covalenthq.com/v1'
+blockchain_chain_id = '592'
+address = "0xD72A602C714ae36D990dc835eA5F96Ef87657D5e"
+
+def get_wallet_portfolio(chain_id, address):
+ endpoint = f'/{chain_id}/address/{address}/portfolio_v2/?key={API_KEY}'
+ url = base_url + endpoint
+ result = requests.get(url).json()
+ return(result)
+
+portfolio_data = get_wallet_portfolio(blockchain_chain_id, address)
+print(portfolio_data)
+```
+
+Below is a sample output:
+
+`{'data': {'address': '0xd72a602c714ae36d990dc835ea5f96ef87657d5e', 'updated_at': '2022-09-20T07:17:27.930341337Z', 'next_update_at': '2022-09-20T07:22:27.930341567Z', 'quote_currency': 'USD', 'chain_id': 592, 'items': [{'contract_decimals': 6, 'contract_name': 'USD Coin', 'contract_ticker_symbol': 'USDC', 'contract_address': '0x6a2d262d56735dba19dd70682b39f6be9a931d98', 'supports_erc': None, 'logo_url': 'https://logos.covalenthq.com/tokens/592/0x6a2d262d56735dba19dd70682b39f6be9a931d98.png', 'holdings': [{'timestamp': '2022-09-20T00:00:00Z', 'quote_rate': 0.9932833, 'open': {'balance': '391683183282', 'quote': 389052.34}, 'high': {'balance': '392123445379', 'quote': 389489.66}, 'low': {'balance': '316424219770', 'quote': 314298.88}, 'close': {'balance': '317469504720', 'quote': 315337.16}}, {'timestamp': '2022-09-19T00:00:00Z', 'quote_rate': 1.0022721, 'open': {'balance': '391991979278', 'quote': 392882.62}, 'high': {'balance': '392739045673', 'quote': 393631.4}, 'low': {'balance': '389667428685', 'quote': 390552.8}, 'close': {'balance': '391683183282', 'quote': 392573.16}},` ...
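
The nested structure of this response is what the next step relies on. A trimmed-down stand-in (hypothetical values, not real data) shows where the fields used later live:

```python
# Navigate a trimmed-down portfolio_v2-style response (stand-in values).
sample = {
    "data": {
        "items": [
            {
                "contract_ticker_symbol": "USDC",
                "holdings": [
                    {
                        "timestamp": "2022-09-20T00:00:00Z",
                        "open": {"balance": "391683183282", "quote": 389052.34},
                    },
                ],
            }
        ]
    }
}

holdings = sample["data"]["items"][0]["holdings"]
print(holdings[0]["timestamp"][5:10])       # the MM-DD slice used in step 2
print(int(holdings[0]["open"]["balance"]))  # raw balance; USDC uses 6 decimals
```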
+
+### Step 2: Transform the data into lists
+
+After data extraction is done in step 1, we will transform that data into three lists so it can be easily handled using Pandas, a data analytics library for Python. The code below creates a few functions that transform our data into lists.
+
+```python
+import requests
+import json
+
+API_KEY = 'YOUR_API_KEY'  # replace with your Covalent API key
+base_url = 'https://api.covalenthq.com/v1'
+blockchain_chain_id = '592'
+address = "0xD72A602C714ae36D990dc835eA5F96Ef87657D5e"
+
+def get_wallet_portfolio(chain_id, address):
+ endpoint = f'/{chain_id}/address/{address}/portfolio_v2/?key={API_KEY}'
+ url = base_url + endpoint
+ result = requests.get(url).json()
+ return(result)
+
+def get_timestamp_list(sample_data):
+ timestamp = []
+ for tmp in reversed(sample_data):
+ timestamp.append(tmp["timestamp"][5:10])
+ return (timestamp)
+
+def get_token_balance_list(data):
+ token_balance_list = []
+ for tmp_data in reversed(data):
+ balance = tmp_data["open"]["balance"]
+ token_balance_list.append(int(balance) // 1000000)
+ return (token_balance_list)
+
+portfolio_data = get_wallet_portfolio(blockchain_chain_id, address)
+timestamp_list = get_timestamp_list(portfolio_data["data"]["items"][0]["holdings"])
+usdc_token_balance_list = get_token_balance_list(portfolio_data["data"]["items"][0]["holdings"])
+usdt_token_balance_list = get_token_balance_list(portfolio_data["data"]["items"][1]["holdings"])
+print(timestamp_list)
+print(usdc_token_balance_list)
+print(usdt_token_balance_list)
+```
+
+The output will look as follows. The first list is a series of timestamps, the second is the daily liquidity of USDC (in USD), and the third is the daily liquidity of USDT (in USD).
+
+```python
+['08-21', '08-22', '08-23', '08-24', '08-25', '08-26', '08-27', '08-28', '08-29', '08-30', '08-31', '09-01', '09-02', '09-03', '09-04', '09-05', '09-06', '09-07', '09-08', '09-09', '09-10', '09-11', '09-12', '09-13', '09-14', '09-15', '09-16', '09-17', '09-18', '09-19', '09-20']
+[317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469, 317469]
+[317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368, 317368]
+```
+
+### Step 3: Transform the data into a pandas DataFrame
+
+Now, let's transform the lists created in Step 2 into a pandas DataFrame so that they can be turned into a graph in the next step.
+
+```python
+import pandas as pd
+import requests
+import json
+
+API_KEY = 'ckey_76799bb987a14e179ea6031d15c'
+base_url = 'https://api.covalenthq.com/v1'
+blockchain_chain_id = '592'
+address = "0xD72A602C714ae36D990dc835eA5F96Ef87657D5e"
+
+def get_wallet_portfolio(chain_id, address):
+ endpoint = f'/{chain_id}/address/{address}/portfolio_v2/?key={API_KEY}'
+ url = base_url + endpoint
+ result = requests.get(url).json()
+ return(result)
+
+def get_timestamp_list(sample_data):
+ timestamp = []
+ for tmp in reversed(sample_data):
+ timestamp.append(tmp["timestamp"][5:10])
+ return (timestamp)
+
+def get_token_balance_list(data):
+ token_balance_list = []
+ for tmp_data in reversed(data):
+ balance = tmp_data["open"]["balance"]
+ token_balance_list.append(int(balance) // 1000000)
+ return (token_balance_list)
+
+portfolio_data = get_wallet_portfolio(blockchain_chain_id, address)
+timestamp_list = get_timestamp_list(portfolio_data["data"]["items"][0]["holdings"])
+usdc_token_balance_list = get_token_balance_list(portfolio_data["data"]["items"][0]["holdings"])
+usdt_token_balance_list = get_token_balance_list(portfolio_data["data"]["items"][1]["holdings"])
+
+lp_df = pd.DataFrame(data = [usdc_token_balance_list, usdt_token_balance_list], index = ["USDC", "USDT"], columns = timestamp_list)
+print(lp_df.T)
+```
+
+The output will look as follows. You can see that the lists have been combined into a DataFrame.
+
+```python
+ USDC USDT
+08-21 446081 451625
+08-22 453840 459288
+08-23 455964 461331
+08-24 455846 461451
+08-25 456262 461089
+08-26 455285 461550
+08-27 457687 463863
+08-28 456071 462506
+08-29 460596 465996
+08-30 449226 454343
+08-31 429668 435999
+09-01 430336 435230
+09-02 331040 335945
+09-03 321951 327345
+09-04 221460 227266
+09-05 226810 231804
+09-06 237230 242222
+09-07 302571 308771
+09-08 293992 299795
+09-09 292354 297289
+09-10 292838 297973
+09-11 296315 301463
+09-12 296068 301855
+09-13 296641 301435
+09-14 408155 413254
+09-15 289567 294152
+09-16 393641 398622
+09-17 391511 395897
+09-18 392412 396156
+09-19 391991 396653
+09-20 391683 392573
+```
+
+### Step 4: Visualizing the data
+
+In this final step, we will use our dataframe to visualize the liquidity of USDC and USDT in the pool for each day.
+
+```python
+%matplotlib inline
+import pandas as pd
+import matplotlib as mpl
+import matplotlib.pyplot as plt
+import requests
+import json
+
+API_KEY = 'ckey_76799bb987a14e179ea6031d15c'
+base_url = 'https://api.covalenthq.com/v1'
+blockchain_chain_id = '592'
+address = "0xD72A602C714ae36D990dc835eA5F96Ef87657D5e"
+
+def get_wallet_portfolio(chain_id, address):
+ endpoint = f'/{chain_id}/address/{address}/portfolio_v2/?key={API_KEY}'
+ url = base_url + endpoint
+ result = requests.get(url).json()
+ return(result)
+
+def get_timestamp_list(sample_data):
+ timestamp = []
+ for tmp in reversed(sample_data):
+ timestamp.append(tmp["timestamp"][5:10])
+ return (timestamp)
+
+def get_token_balance_list(data):
+ token_balance_list = []
+ for tmp_data in reversed(data):
+ balance = tmp_data["open"]["balance"]
+ token_balance_list.append(int(balance) // 1000000)
+ return (token_balance_list)
+
+portfolio_data = get_wallet_portfolio(blockchain_chain_id, address)
+timestamp_list = get_timestamp_list(portfolio_data["data"]["items"][0]["holdings"])
+usdc_token_balance_list = get_token_balance_list(portfolio_data["data"]["items"][0]["holdings"])
+usdt_token_balance_list = get_token_balance_list(portfolio_data["data"]["items"][1]["holdings"])
+
+lp_df = pd.DataFrame(data = [usdc_token_balance_list, usdt_token_balance_list], index = ["USDC", "USDT"], columns = timestamp_list)
+lp_df.T.plot()
+```
+
+The output will look as follows:
+
+![1](img/1.png)
+
+That's it!
+
+This guide demonstrated how easily you can visualize the historical balance of the ceUSDC/ceUSDT pool on ArthSwap using Covalent and Python. A graph like this can be a useful reference tool for your project. For example, anyone can use the graph in this example to see that the liquidity of both USDT and USDC on 9/20 was around $400K, with no need to go digging for the underlying on-chain data.
+
+This is just a simple example. Covalent exposes many more API endpoints, and there are endless ways to use that data to create insightful graphs and other reference resources.
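+To illustrate one such extension, the sketch below computes a per-day USDC/USDT balance ratio with pandas. It uses hypothetical balance lists in place of a live Covalent response, so the numbers are placeholders:

```python
import pandas as pd

# Hypothetical daily balances standing in for the Covalent portfolio data
timestamp_list = ["09-18", "09-19", "09-20"]
usdc_token_balance_list = [392412, 391991, 391683]
usdt_token_balance_list = [396156, 396653, 392573]

# Same DataFrame shape as in the guide: one row per day, one column per token
lp_df = pd.DataFrame(
    data=[usdc_token_balance_list, usdt_token_balance_list],
    index=["USDC", "USDT"],
    columns=timestamp_list,
).T

# A ratio close to 1.0 means the pool is balanced between the two stablecoins
lp_df["ratio"] = lp_df["USDC"] / lp_df["USDT"]
print(lp_df)
```

A sustained drift of the ratio away from 1.0 would make a pool imbalance visible at a glance.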
+
+## Reference
+
+- Covalent API resource
+ - [https://www.covalenthq.com/docs/api/#/0/0/USD/1](https://www.covalenthq.com/docs/api/#/0/0/USD/1)
+- Covalent docs
+ - [https://www.covalenthq.com/docs/](https://www.covalenthq.com/docs/)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/deploy-astar-portal.md b/docs/build/build-on-layer-1/builder-guides/integration_toolings/deploy-astar-portal.md
new file mode 100644
index 0000000..8215e4d
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/integration_toolings/deploy-astar-portal.md
@@ -0,0 +1,46 @@
+---
+sidebar_position: 3
+---
+
+# Deploy Astar Portal on Vercel
+
+## Overview
+
+Whenever you make a PR to [Astar Portal](https://github.com/AstarNetwork/astar-apps) from a forked repo, it is recommended to submit a staging URL. Here's what you need to know about deploying the forked Astar Portal repo on Vercel.
+
+## Deploying with Vercel
+
+Deploying Astar Portal (built with the [Quasar Framework](https://quasar.dev/)) on [Vercel](https://vercel.com) is straightforward. First, install the [Vercel CLI](https://vercel.com/docs/cli) and log in by running:
+
+```bash
+$ vercel login
+```
+
+![vercel1](img/vercel1.png)
+
+Then build Astar Portal by running `$ yarn build`.
+After the build finishes, change into your deploy root (for example, `dist/spa`) and run:
+
+```bash
+$ cd dist/spa
+# from /dist/spa (or your distDir)
+$ vercel
+```
+
+The Vercel CLI should now display information about your deployment, such as the staging URL.
+
+That’s it! You’re done!
+
+![vercel2](img/vercel2.png)
+
+## Obtain the deployed URL
+
+After you've finished deployment (see steps above), you can open your [Vercel dashboard](https://vercel.com/dashboard) to obtain the deployed URL.
+
+![vercel3](img/vercel3.jpg)
+
+![vercel4](img/vercel4.png)
+
+## References
+
+- [Deploying Quasar application with Vercel](https://quasar.dev/quasar-cli-vite/developing-spa/deploying#deploying-with-vercel)
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/1.png b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/1.png
new file mode 100644
index 0000000..a99c75a
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/1.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/22.png b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/22.png
new file mode 100644
index 0000000..2f2a374
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/22.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/python0.png b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/python0.png
new file mode 100644
index 0000000..a6fd430
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/python0.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/sidecar-diagram.jpg b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/sidecar-diagram.jpg
new file mode 100644
index 0000000..cdf2566
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/sidecar-diagram.jpg differ
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel1.png b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel1.png
new file mode 100644
index 0000000..16fe958
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel1.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel2.png b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel2.png
new file mode 100644
index 0000000..c1e6f65
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel2.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel3.jpg b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel3.jpg
new file mode 100644
index 0000000..6dfe039
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel3.jpg differ
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel4.png b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel4.png
new file mode 100644
index 0000000..71342e2
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/integration_toolings/img/vercel4.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/use-python.md b/docs/build/build-on-layer-1/builder-guides/integration_toolings/use-python.md
new file mode 100644
index 0000000..7c2918b
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/integration_toolings/use-python.md
@@ -0,0 +1,147 @@
+# Query dApp Staking extrinsics and participating addresses with Python
+
+## TL;DR
+
+As a Substrate-based multi-VM blockchain, Astar nodes have all the standard Polkadot/Substrate features. The [Python Substrate Interface](https://github.com/polkascan/py-substrate-interface) library allows developers to query Substrate-runtime-level metadata from an Astar node and interact with the node's Polkadot/Substrate features, including querying and composing extrinsics, through a native Python interface.
+
+In this guide, we will cover:
+
+- How to install Python Substrate Interface
+- How to create an API provider instance
+- How to query blocks and extrinsics, using an example of querying dApp staking participants’ addresses.
+
+---
+
+## What is Substrate?
+
+[Substrate](https://substrate.io/) is an open-source software development kit (SDK) that allows teams to quickly build highly customized blockchains. It comes with native support for connecting to Polkadot and Kusama right out of the box.
+
+All Polkadot and Kusama parachains and relay chains are built with Substrate; this includes the Astar and Shiden networks. Thus, Astar nodes have all the major Polkadot/Substrate features.
+
+## What is Substrate Python Interface?
+
+[Substrate Python Interface](https://github.com/polkascan/py-substrate-interface) is a Python library that specializes in interfacing with a Substrate node; querying storage, composing extrinsics, SCALE encoding/decoding, and providing additional convenience methods to deal with the features and metadata of the Substrate runtime.
+
+For interface function reference, please read [https://polkascan.github.io/py-substrate-interface/](https://polkascan.github.io/py-substrate-interface/).
+
+---
+
+## Instructions
+### 1. Install Substrate Python Interface
+
+- Before installing the Substrate Python Interface, please run the following command to check whether you have the Python package installer [`pip`](https://pypi.org/project/pip/) installed:
+
+ ```bash
+ pip --version
+ ```
+
+- If not, please follow the guide at [https://pip.pypa.io/en/stable/installation/](https://pip.pypa.io/en/stable/installation/) to install `pip`.
+- After making sure `pip` is installed, you can install the Python Substrate Interface library by running the following command in your project directory:
+
+ ```bash
+ pip install substrate-interface
+ ```
+
+
+---
+
+### 2. Construct an API provider Instance
+
+In order to query and interact with an Astar node, you first need to construct a `SubstrateInterface` API provider instance using the WebSocket endpoint of the Astar network that you wish to interact with.
+
+You can find the list of supported endpoints from our [network RPC endpoint list](/docs/build/build-on-layer-1/environment/endpoints.md).
+
+```python
+# Import Python Substrate Interface
+from substrateinterface import SubstrateInterface
+
+# Construct the API provider
+ws_provider = SubstrateInterface(
+ url="wss://rpc.astar.network",
+)
+```
+
+---
+
+### 3. Retrieve blocks and extrinsics using py-substrate-interface
+
+- For demonstration purposes, we will use JupyterLab in this guide. Feel free to download and install JupyterLab by following the tutorial [here](https://docs.jupyter.org/en/latest/install.html).
+- To retrieve blocks and the extrinsics in them, you can use the `get_block` method defined in `py-substrate-interface`, which returns a Python dictionary containing the extrinsics and metadata of a Substrate block.
+
+ ```python
+ # Import Python Substrate Interface
+ from substrateinterface import SubstrateInterface
+
+ # Construct the API provider
+ ws_provider = SubstrateInterface(
+ url="wss://rpc.astar.network",
+ )
+
+ # Retrieve the latest block
+ block = ws_provider.get_block()
+
+ # Retrieve the latest finalized block
+ block = ws_provider.get_block_header(finalized_only = True)
+
+ # Retrieve a block given its Substrate block hash
+ block_hash = "0xdd5d76dbea4cab627be320f363c6362adb1e3a5ed9bbe1b0ba4a0ac0bb028399"
+ block = ws_provider.get_block(block_hash=block_hash)
+
+ # Retrieve a block given its Substrate block number
+ block_number = 2700136
+ block = ws_provider.get_block(block_number=block_number)
+ ```
+
+- Below is an example output from querying the extrinsics and metadata of block #0 on Astar Network.
+
+ ```python
+ {'header': {'parentHash': '0x0000000000000000000000000000000000000000000000000000000000000000', 'number': 0, 'stateRoot': '0xc9451593261d67c47e14c5cbefeeffff5b5a1707cf81800becfc79e6df354da9', 'extrinsicsRoot': '0x03170a2e7597b7b7e3d84c05391d139a62b157e78786d8c082f29dcf4c111314', 'digest': {'logs': []}, 'hash': '0x9eb76c5184c4ab8679d2d5d819fdf90b9c001403e9e17da2e14b6d8aec4029c6'}, 'extrinsics': []}
+ ```
+
+- You can find the reference to more methods in the Substrate Python Interface [here](https://polkascan.github.io/py-substrate-interface/#substrateinterface.SubstrateInterface.get_block).
+
+---
+
+### 4. Collect the addresses participating in dApp staking
+
+In order to collect the addresses that participated in dApp staking during a certain period of time, we need to iterate through the Substrate blocks of Astar Network and iterate through the extrinsics inside each block to filter out the `bond_and_stake` calls of `dapp-staking-pallet`.
+
+- An example code:
+
+ ```python
+ # Import Python Substrate Interface
+ from substrateinterface import SubstrateInterface
+
+ # Construct the API provider
+ ws_provider = SubstrateInterface(
+ url="wss://rpc.astar.network"
+ )
+
+ # Define the starting and ending block
+ start_block_number = 2536100
+ end_block_number = 2536200
+
+ # Iterate through the blocks and extrinsics
+ for block_number in range(start_block_number, end_block_number):
+ block = ws_provider.get_block(block_number=block_number)
+ for extrinsic in block['extrinsics']:
+ # Filter out the bond_and_stake calls
+ if extrinsic['call']['call_function'].name == 'bond_and_stake':
+ print(extrinsic['address'].value)
+ print(block_number)
+ ```
+
+- Below is an example output: the addresses that participated in dApp staking between block #2536100 and block #2536200, along with the block number containing each extrinsic.
+
+![Untitled](img/python0.png)
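
If you want the result as data rather than console output, the filtering step can be factored into a small pure function. The sketch below operates on plain dictionaries that mirror, in simplified form, the structure returned by `get_block` (in the real library, the call name is accessed via `extrinsic['call']['call_function'].name`), so it runs without a node connection:

```python
def collect_stakers(blocks):
    """Return the unique addresses that submitted bond_and_stake extrinsics.

    `blocks` is an iterable of block dicts shaped like a simplified version
    of py-substrate-interface's get_block output.
    """
    stakers = set()
    for block in blocks:
        for extrinsic in block["extrinsics"]:
            if extrinsic["call"]["call_function"] == "bond_and_stake":
                stakers.add(extrinsic["address"])
    return stakers

# Hypothetical decoded blocks, simplified to plain dicts for illustration
sample_blocks = [
    {"extrinsics": [
        {"call": {"call_function": "bond_and_stake"}, "address": "X1"},
        {"call": {"call_function": "set"}, "address": None},
    ]},
    {"extrinsics": [
        {"call": {"call_function": "bond_and_stake"}, "address": "X1"},
        {"call": {"call_function": "bond_and_stake"}, "address": "X2"},
    ]},
]
print(sorted(collect_stakers(sample_blocks)))  # -> ['X1', 'X2']
```

Keeping the filtering logic separate from the node connection also makes it easy to unit-test without an RPC endpoint.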
+
+---
+
+## FAQ
+
+For technical support, please contact our team of ambassadors and developers on [Discord](https://discord.gg/AstarNetwork). We're happy to help.
+
+## Reference
+- [Python Substrate Interface Github](https://github.com/polkascan/py-substrate-interface)
+- [Python Substrate Interface Docs](https://polkascan.github.io/py-substrate-interface)
+- [Python Substrate Metadata Docs](https://polkascan.github.io/py-substrate-metadata-docs/)
diff --git a/docs/build/build-on-layer-1/builder-guides/integration_toolings/using-sidecar.md b/docs/build/build-on-layer-1/builder-guides/integration_toolings/using-sidecar.md
new file mode 100644
index 0000000..30da306
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/integration_toolings/using-sidecar.md
@@ -0,0 +1,267 @@
+---
+sidebar_position: 4
+---
+
+# Deploying Astar Blockchain HTTP Server using API Sidecar
+
+## Overview
+
+The Substrate API Sidecar is a REST service that makes it easy to interact with Astar and Shiden.
+
+![diagram](img/sidecar-diagram.jpg)
+
+Instead of having to use the Substrate RPC directly and rely on libraries like `Astar.js` or `Polkadot.js`, you can set up a Substrate API server and interact with the blockchain. For example, you can read block history, listen to events, or submit a transaction, all through a REST API server.
+
+The source code for the Substrate Sidecar API can be found in the [substrate-api-sidecar repository](https://github.com/paritytech/substrate-api-sidecar).
+Please refer to the README of the repository for more information.
+
+Below we will quickly walk through setting up a Substrate API Sidecar.
+
+## Quick Start
+
+Install the Sidecar API globally:
+
+```bash
+npm install -g @substrate/api-sidecar
+# OR
+yarn global add @substrate/api-sidecar
+```
+
+Make sure that you are running a local Astar collator that the service can connect to.
+
+Run the service from any directory on your machine:
+
+```bash
+substrate-api-sidecar
+```
+
+If everything works well, the terminal should display something like this:
+
+```bash
+SAS:
+ 📦 LOG:
+ ✅ LEVEL: "info"
+ ✅ JSON: false
+ ✅ FILTER_RPC: false
+ ✅ STRIP_ANSI: false
+ 📦 SUBSTRATE:
+ ✅ URL: "ws://127.0.0.1:9944"
+ ✅ TYPES_BUNDLE: undefined
+ ✅ TYPES_CHAIN: undefined
+ ✅ TYPES_SPEC: undefined
+ ✅ TYPES: undefined
+ 📦 EXPRESS:
+ ✅ BIND_HOST: "127.0.0.1"
+ ✅ PORT: 8080
+2023-01-03 16:17:59 info: Version: 14.2.2
+2023-01-03 16:17:59 warn: API/INIT: RPC methods not decorated: transaction_unstable_submitAndWatch, transaction_unstable_unwatch
+2023-01-03 16:17:59 info: Connected to chain Development on the astar-local client at ws://127.0.0.1:9944
+2023-01-03 16:17:59 info: Listening on http://127.0.0.1:8080/
+2023-01-03 16:17:59 info: Check the root endpoint (http://127.0.0.1:8080/) to see the available endpoints for the current node
+```
+
+Now, you can interact with the blockchain by sending requests to `http://127.0.0.1:8080/` followed by the endpoint.
+
+For example, `http://127.0.0.1:8080/blocks/1?finalized=true&eventDocs=true&extrinsicDocs=true` will send a request to get the first block's information with full documentation.
+The result will look something like the following:
+
+```json
+{
+ "number": "1",
+ "hash": "0xef26181b1317e8fb4263ba071190dcdb17698087aab478a7afd2539b737058eb",
+ "parentHash": "0xd27d60bd31570f15f00fc58ed59c9435845b53a6187e1862a9b1b22cc5991f81",
+ "stateRoot": "0x5597ff6dab02e2a1b2d85fae095d2ee18f6b3a50006f56281c48b1efc881bf1d",
+ "extrinsicsRoot": "0x4dfbaf7d7c5c43417d15c257078d4ad0032ce1fc5d0fc9d34b30b5028a803abc",
+ "logs": [
+ {
+ "type": "PreRuntime",
+ "index": "6",
+ "value": [
+ "0x61757261",
+ "0x6a23da3100000000"
+ ]
+ },
+ {
+ "type": "Consensus",
+ "index": "4",
+ "value": [
+ "0x66726f6e",
+ "0x01661a8c628cd7872ca3477e0f59ef0e0db29fb97f659b7fafb8f394cb9cf1c6ba00"
+ ]
+ },
+ {
+ "type": "Seal",
+ "index": "5",
+ "value": [
+ "0x61757261",
+ "0x4e7ecec532276495c0f31e06c7fde96e5e3f4e38fab5a3bd7a6a4762b06c234ea18c1f10cf5357106be6f32087f354de417ab90779b331f3bd26dfa043976589"
+ ]
+ }
+ ],
+ "onInitialize": {
+ "events": [
+ {
+ "method": {
+ "pallet": "dappsStaking",
+ "method": "NewDappStakingEra"
+ },
+ "data": [
+ "1"
+ ],
+ "docs": "New dapps staking era. Distribute era rewards to contracts."
+ }
+ ]
+ },
+ "extrinsics": [
+ {
+ "method": {
+ "pallet": "timestamp",
+ "method": "set"
+ },
+ "signature": null,
+ "nonce": null,
+ "args": {
+ "now": "1672758996005"
+ },
+ "tip": null,
+ "hash": "0x87a5b2129c9e1ea472fa3115a0f760a4f49c53f758445860c372e07c8e216fbf",
+ "info": {},
+ "era": {
+ "immortalEra": "0x00"
+ },
+ "events": [
+ {
+ "method": {
+ "pallet": "balances",
+ "method": "Deposit"
+ },
+ "data": [
+ "YQnbw3oWxBnCUarnbePrjFcrSgVPP2jqTZYzWcccmN8fXhd",
+ "1332000000000000000"
+ ],
+ "docs": "Some amount was deposited (e.g. for transaction fees)."
+ },
+ {
+ "method": {
+ "pallet": "balances",
+ "method": "Deposit"
+ },
+ "data": [
+ "YQnbw3oWxBk2zTouRxQyxnD2dDCFsGrRGQRaCeDLy7KKMdJ",
+ "1332000000000000000"
+ ],
+ "docs": "Some amount was deposited (e.g. for transaction fees)."
+ },
+ {
+ "method": {
+ "pallet": "system",
+ "method": "NewAccount"
+ },
+ "data": [
+ "YQnbw3oWxBk2zTouRxQyxnD2dDCFsGrRGQRaCeDLy7KKMdJ"
+ ],
+ "docs": "A new account was created."
+ },
+ {
+ "method": {
+ "pallet": "balances",
+ "method": "Endowed"
+ },
+ "data": [
+ "YQnbw3oWxBk2zTouRxQyxnD2dDCFsGrRGQRaCeDLy7KKMdJ",
+ "1332000000000000000"
+ ],
+ "docs": "An account was created with some free balance."
+ },
+ {
+ "method": {
+ "pallet": "system",
+ "method": "ExtrinsicSuccess"
+ },
+ "data": [
+ {
+ "weight": {
+ "refTime": "260558000",
+ "proofSize": "0"
+ },
+ "class": "Mandatory",
+ "paysFee": "Yes"
+ }
+ ],
+ "docs": "An extrinsic completed successfully."
+ }
+ ],
+ "success": true,
+ "paysFee": false,
+ "docs": "Set the current time.\n\nThis call should be invoked exactly once per block. It will panic at the finalization\nphase, if this call hasn't been invoked by that time.\n\nThe timestamp should be greater than the previous one by the amount specified by\n`MinimumPeriod`.\n\nThe dispatch origin for this call must be `Inherent`.\n\n# \n- `O(1)` (Note that implementations of `OnTimestampSet` must also be `O(1)`)\n- 1 storage read and 1 storage mutation (codec `O(1)`). (because of `DidUpdate::take` in\n `on_finalize`)\n- 1 event handler `on_timestamp_set`. Must be `O(1)`.\n# "
+ }
+ ],
+ "onFinalize": {
+ "events": []
+ },
+ "finalized": true
+}
+```
+
+You can find the full endpoint documentation from [this link](https://paritytech.github.io/substrate-api-sidecar/dist/).
+
+For transaction signing for the `/transaction` endpoint, please refer to the Polkadot documentation regarding [transaction construction](https://wiki.polkadot.network/docs/build-transaction-construction).
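
To consume the REST service from code, here is a minimal Python sketch using only the standard library. It assumes a Sidecar instance listening on the default `http://127.0.0.1:8080`; the helper names are ours, not part of any Sidecar client library:

```python
import json
import urllib.request

SIDECAR_URL = "http://127.0.0.1:8080"  # default Sidecar bind address

def sidecar_url(path, **params):
    """Build a full request URL for a Sidecar endpoint such as /blocks/{id}."""
    query = "&".join(f"{key}={value}" for key, value in params.items())
    return f"{SIDECAR_URL}{path}" + (f"?{query}" if query else "")

def get_json(url):
    """Fetch a Sidecar endpoint and decode the JSON body."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

# With a running Sidecar instance you could then fetch, for example:
# block = get_json(sidecar_url("/blocks/1", finalized="true"))
# print(block["number"], block["hash"])
```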
+
+## Connecting to a Remote Node
+
+By default, the Sidecar API connects to a local node (`ws://127.0.0.1:9944`),
+but you can configure the service to connect to a remote node endpoint.
+
+First, start by cloning the [Sidecar API repository](https://github.com/paritytech/substrate-api-sidecar) to your system.
+
+Move to the root of the project folder and use the following command to create the configuration file:
+
+```bash
+touch .env.astar
+```
+
+Open the created `.env.astar` file with your text editor of choice, and add the following information:
+
+```env
+SAS_SUBSTRATE_URL=[RPC Endpoint]
+```
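
For example, pointing the service at a public Astar endpoint could look like the following (`SAS_SUBSTRATE_URL` is from the step above; `SAS_EXPRESS_PORT` matches the `PORT` entry in the startup log's `EXPRESS` section, so verify the exact variable name against the Sidecar README):

```env
SAS_SUBSTRATE_URL=wss://astar.public.blastapi.io
SAS_EXPRESS_PORT=8080
```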
+
+Now run the following commands:
+
+```bash
+# Install the project dependencies
+yarn
+
+# Build the project
+yarn build
+
+# Start the API server locally
+NODE_ENV=astar yarn start
+```
+
+If it worked well, you should see the following console output:
+
+```bash
+SAS:
+ 📦 LOG:
+ ✅ LEVEL: "info"
+ ✅ JSON: false
+ ✅ FILTER_RPC: false
+ ✅ STRIP_ANSI: false
+ 📦 SUBSTRATE:
+ ✅ URL: "wss://astar.public.blastapi.io"
+ ✅ TYPES_BUNDLE: undefined
+ ✅ TYPES_CHAIN: undefined
+ ✅ TYPES_SPEC: undefined
+ ✅ TYPES: undefined
+ 📦 EXPRESS:
+ ✅ BIND_HOST: "127.0.0.1"
+ ✅ PORT: 8080
+2023-01-03 17:57:35 info: Version: 14.2.2
+2023-01-03 17:57:36 info: Connected to chain Astar on the astar client at wss://astar.public.blastapi.io
+2023-01-03 17:57:36 info: Listening on http://127.0.0.1:8080/
+2023-01-03 17:57:36 info: Check the root endpoint (http://127.0.0.1:8080/) to see the available endpoints for the current node
+```
+
+Of course, you can also configure the Express server (Sidecar API) or explicitly define the chain type bundles or specs.
+For more information, please refer to the README of the Substrate Sidecar API repository.
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/_category_.json b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/_category_.json
new file mode 100644
index 0000000..f59247a
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "How to leverage other Parachains",
+ "position": 0
+}
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled 1.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled 1.png
new file mode 100644
index 0000000..e97d5b8
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled 1.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled 2.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled 2.png
new file mode 100644
index 0000000..26cb41a
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled 2.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled 3.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled 3.png
new file mode 100644
index 0000000..6eaa123
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled 3.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled.png
new file mode 100644
index 0000000..e279056
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/Untitled.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/img b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/img
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/img-zombienet-cookbook/img
@@ -0,0 +1 @@
+
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/interact_with_xc20.md b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/interact_with_xc20.md
new file mode 100644
index 0000000..6deca0b
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/interact_with_xc20.md
@@ -0,0 +1,235 @@
+# How to create and interact with a mintable XC20 asset via Solidity smart contract
+
+## TL;DR
+
+XC20 is an asset standard that enables users and developers to interact with assets through a familiar [ERC20 interface](https://github.com/PureStake/moonbeam/blob/master/precompiles/assets-erc20/ERC20.sol) via a precompile contract (Ethereum API), while retaining XCM cross-chain compatibility as native Substrate assets. Since ERC20 assets cannot be transferred via XCM in the Polkadot/Kusama ecosystem, you will need XC20 if you want to build cross-chain-compatible assets usable in the EVM.
+
+There are two types of XC20 assets: **mintable XC20** and **external XC20**. Mintable XC20 assets can be issued on Astar Network based on the owner's implementation logic. External XC20 assets originate from other parachains or the relay chain, are transferred to Astar Network via XCM, and are issued by the sovereign account.
+
+In this guide, we will cover:
+- How to create mintable XC20 assets.
+- How to send them to the EVM.
+- How to interact with XC20 assets via a Solidity smart contract.
+
+---
+
+## Overview
+### What is an XC20
+
+XC20 is an asset standard introduced by PureStake Technologies, which combines the power of Substrate assets (native cross-chain interoperability) but allows users and developers to interact with them through a familiar [ERC20 interface](https://github.com/PureStake/moonbeam/blob/master/precompiles/assets-erc20/ERC20.sol) via a precompile contract (Ethereum API). With XC20, developers will be able to create assets that are both EVM-usable and cross-chain compatible via XCM.
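
As a concrete illustration of the precompile mapping, the sketch below derives the EVM address at which an XC20 asset's ERC20 interface is exposed from its Substrate asset ID. It assumes the Moonbeam-style address scheme (a 4-byte `0xFFFFFFFF` prefix followed by the asset ID as a 128-bit value); double-check the prefix against the official precompile documentation before relying on it:

```python
def xc20_precompile_address(asset_id: int) -> str:
    """Derive the 20-byte EVM precompile address for an XC20 asset.

    Assumes the Moonbeam-style mapping: a 4-byte 0xFFFFFFFF prefix
    followed by the asset ID encoded as a big-endian u128.
    """
    if not 0 <= asset_id < 2**128:
        raise ValueError("asset ID must fit in a u128")
    # 8 hex chars for the prefix + 32 hex chars for the u128 = 40 chars (20 bytes)
    return "0x" + "FFFFFFFF" + f"{asset_id:032X}"

print(xc20_precompile_address(1))
# -> 0xFFFFFFFF00000000000000000000000000000001
```

Calling the standard ERC20 methods at this address from a Solidity contract is then enough to mint-agnostically read balances or transfer the asset.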
+
+### What are mintable XC20 and external XC20
+
+There are two types of XC20 assets: mintable XC20 and external XC20. Mintable XC20 assets can be issued on Astar Network based on the owner's issuance logic. External XC20 assets originate from other parachains or the relay chain, are transferred to Astar Network via XCM, and are issued by the sovereign account.
+
+### What is XCM
+
+**Cross-Consensus Message Format (XCM)** aims to be a language to communicate ideas between consensus systems. One of Polkadot's promises is interoperability, and XCM is the vehicle through which it will deliver this promise. Simply, it is a standard that allows protocol developers to define the data and origins that their chains can send and receive from, including cross-chain asset transfer between parachains.
+
+---
+
+## Instructions
+### Register an XC20 asset
+
+Currently, the best way to create an XC20 asset is via [Polkadot.js](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.shibuya.astar.network#/explorer). In this guide, we will create an XC20 asset on Shibuya (Astar's testnet) as a demo.
+
+- Please visit [Polkadot.js](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.shibuya.astar.network#/explorer)
+- Go to `Network` → `Asset` → `Create`
+ - **Note**: please make sure to have at least 100 ASTR in the wallet when creating an XC20 asset on Astar Network.
+- Please set the following parameters for your asset
+ - `creator account`: the account that will create this asset and set up the initial metadata.
+ - `asset name`: the descriptive name for this asset, e.g. Cookbook Token.
+ - `asset symbol`: the ticker for this asset, e.g. CKT.
+ - `asset decimals`: the number of decimals for this token. Max allowed via the UI is set to 20.
+ - `minimum balance`: the minimum balance for the asset. This is specified in the units and decimals as requested.
+ - **Note**: the `minimum balance` is the same as the `Existential Deposit (ED)` of your asset. The ED exists so that accounts with very small or empty balances do not "bloat" the state of the blockchain, in order to maintain high performance.
+ - **Note**: We suggest setting the `minimum balance` to `1 Pico`, which will only require 0.000000000001 unit of the asset.
+ - `asset id`: the selected id for the asset. This should not match an already-existing asset id.
+
+![Untitled](mintable-xc20-cookbook/Untitled.png)
+
+- Set the managing addresses for the XC20 asset:
+ - `creator`: the account responsible for creating the asset.
+ - `issuer`: the designated account capable of issuing or minting tokens.
+ - `admin`: the designated account capable of burning tokens and unfreezing accounts and assets.
+ - `freezer`: the designated account capable of freezing accounts and assets.
+
+![Untitled](mintable-xc20-cookbook/Untitled%201.png)
+
+---
+
+### Mint the registered XC20 asset
+
+To mint the initial supply of the registered XC20 asset, we need to open Polkadot.js with the issuer address.
+
+- Go to `Network` → `Assets` → Find the asset that you just created
+- Set the minting amount and recipient address:
+ - `mint to address`: the recipient account for this minting operation.
+ - `amount to issue`: the amount of assets to issue to the account.
+
+![Untitled](mintable-xc20-cookbook/Untitled%202.png)
+
+---
+
+### Transfer the asset parameters to a multi-sig account (suggested)
+
+The owner of the XC20 asset has extensive privileges. To ensure the security of mintable XC20 assets, we suggest transferring the owner, issuer, freezer, and admin roles to a multi-sig account after the creation and initial mint. See [this guide](https://docs.astar.network/docs/user-guides/create-multisig) for creating a multi-sig wallet.
+
+- Go to `Developer` → `Extrinsics`
+- Choose `assets` extrinsics and `transferOwnership` method
+- Enter the `asset ID` of the new asset (you may find it under `Network` → `Assets`)
+- Choose `Id` for `target` and enter the address
+
+![Untitled](mintable-xc20-cookbook/Untitled%203.png)
+
+- Go to `Developer` → `Extrinsics`
+- Choose `assets` extrinsics and `setTeam` method
+- Enter the `asset ID` of the new asset (you may find it under `Network` → `Assets`)
+- Choose `Id` for each of `issuer`, `admin`, and `freezer`, and enter the corresponding addresses
+
+![Untitled](mintable-xc20-cookbook/Untitled%204.png)
+
+---
+
+### Send the asset to EVM
+
+Since Astar Network is building a multi-VM smart contract hub, we support both EVM and WASM with two different account systems, H160 and SS58 respectively.
+
+In order to send the asset from the Substrate-native SS58 address (address A) to an H160 address (address B), we need to convert the H160 address to its mapped Substrate-native SS58 representation and send the asset directly from address A to that mapped address via [Polkadot.js](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.shibuya.astar.network#/accounts).
+
+Here are the full steps:
+
+- Convert the destination H160 address to its mapped Substrate-native SS58 address by using our [address converter](https://hoonsubin.github.io/evm-substrate-address-converter/).
+
+ ![Untitled](mintable-xc20-cookbook/Untitled%205.png)
+
+- Visit [Polkadot.js](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.shibuya.astar.network#/accounts).
+- Go to `Developer` → `Extrinsics`
+- Choose `assets` extrinsics and `transfer` method
+- Enter the `asset ID` of the new asset (you may find it under `Network` → `Assets`)
+- Choose `id` for `target`
+- Enter the mapped Substrate-native SS58 address for `AccountId`
+- Enter the amount that you hope to transfer for `Balance`
+  - Please note that if the asset has 18 decimals, `1000000000000000000` refers to 1 unit of the asset.
+ - Call data for reference: `0x2405950300d6e68e3d545d18f2298dda31e70b63283a3ecc664f7bead5f174f6cc2f5df65d1b000000a1edccce1bc2d3`
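+
+As a quick sketch of the decimals math (a hypothetical helper, not part of Polkadot.js), the `Balance` value can be derived like this:
+
+```javascript
+// Hypothetical helper: convert a human-readable amount to base units,
+// given the asset's decimals (18 for our Cookbook Token).
+function toBaseUnits(amount, decimals) {
+  return BigInt(amount) * 10n ** BigInt(decimals);
+}
+
+console.log(toBaseUnits(1, 18).toString()); // "1000000000000000000", i.e. 1 unit
+```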
+
+![Untitled](mintable-xc20-cookbook/Untitled%206.png)
+
+---
+
+### Confirm receiving the asset on EVM
+
+In order to confirm receiving the asset on EVM, we need to add the specific asset to the Metamask wallet, which requires the asset address on EVM. To generate the asset address on EVM, we need to use the asset ID with the following steps:
+
+- Convert the `asset ID` from decimal to hexadecimal
+- Left-pad the hexadecimal value to 32 digits and add the prefix `0xffffffff`
+  - for example, our Cookbook Token, CKT, has an `asset ID` of `229`, which is `E5` in hexadecimal. Following the steps above, we get the converted address `0xffffffff000000000000000000000000000000E5`.
+- More information can be found in the following guide: [Send XC20 Assets to EVM](/docs/build/build-on-layer-1/builder-guides/leverage_parachains/interact_with_xc20.md#send-the-asset-to-evm)
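+
+The conversion above can be sketched in JavaScript (a hypothetical helper; the layout is a 4-byte `0xffffffff` prefix followed by the asset ID as a 16-byte value, giving a 20-byte address):
+
+```javascript
+// Hypothetical helper: derive the EVM address of an XC20 asset
+// from its decimal asset ID.
+function xc20Address(assetId) {
+  const hex = BigInt(assetId).toString(16).padStart(32, "0");
+  return "0xffffffff" + hex; // 8 hex chars prefix + 32 hex chars id
+}
+
+console.log(xc20Address(229)); // 0xffffffff000000000000000000000000000000e5
+```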
+
+![Untitled](mintable-xc20-cookbook/Untitled%207.png)
+
+---
+
+### Interact with XC20 assets via Solidity smart contract
+
+In the following section, we will demonstrate how to interact with the Cookbook Token that we created via Solidity smart contract.
+
+:::note
+💡 In order for an account to receive some of the **XC20** asset, it has to hold some native token. This can be bypassed if `isSufficient` is set to `true`.
+:::
+
+The Solidity Interface of Mintable XC20 on Astar includes IERC20 and IERC20Plus interfaces, which are declared in [ERC20.sol](https://github.com/AstarNetwork/astar-frame/blob/polkadot-v0.9.33/precompiles/assets-erc20/ERC20.sol), and are as follows:
+
+```solidity
+interface IERC20 {
+ function name() external view returns (string memory);
+ function symbol() external view returns (string memory);
+ function decimals() external view returns (uint8);
+ function totalSupply() external view returns (uint256);
+ function balanceOf(address who) external view returns (uint256);
+ function allowance(address owner, address spender)
+ external view returns (uint256);
+ function transfer(address to, uint256 value) external returns (bool);
+ function approve(address spender, uint256 value)
+ external returns (bool);
+ function transferFrom(address from, address to, uint256 value)
+ external returns (bool);
+ event Transfer(
+ address indexed from,
+ address indexed to,
+ uint256 value
+ );
+ event Approval(
+ address indexed owner,
+ address indexed spender,
+ uint256 value
+ );
+}
+
+interface IERC20Plus is IERC20 {
+ /**
+ * @dev Returns minimum balance an account must have to exist
+ */
+ function minimumBalance() external view returns (uint256);
+
+ /**
+ * @dev Mints the specified amount of asset for the beneficiary.
+ * This operation will increase the total supply.
+ * Only usable by asset admin.
+ */
+ function mint(address beneficiary, uint256 amount) external returns (bool);
+
+ /**
+     * @dev Burns up to the specified amount of asset from the target.
+     * This operation will decrease the total supply.
+ * Only usable by asset admin.
+ */
+ function burn(address who, uint256 amount) external returns (bool);
+}
+```
+
+In this guide, we are building a simple faucet to demonstrate how to interact with the XC20 assets via a Solidity smart contract. You can find the code below:
+
+```solidity
+pragma solidity >=0.8.3;
+
+import './IERC20.sol';
+
+contract CookbookFaucet {
+
+ uint256 public amountAllowed = 1000000000000000000;
+ address public tokenContract = 0xFFFFfFFf000000000000000000000000000000E5;
+ mapping(address => bool) public requestedAddress;
+
+ event SendToken(address indexed Receiver, uint256 indexed Amount);
+
+ function requestTokens() external {
+ require(requestedAddress[msg.sender] == false, "You have already claimed!");
+ IERC20 cktToken = IERC20(tokenContract);
+ require(cktToken.balanceOf(address(this)) >= amountAllowed, "Faucet is empty!");
+
+ cktToken.transfer(msg.sender, amountAllowed);
+ requestedAddress[msg.sender] = true;
+
+ emit SendToken(msg.sender, amountAllowed);
+ }
+}
+```
+
+For the next step, we use [Remix](https://remix.ethereum.org/) to deploy our code on Shibuya testnet. You can find the tutorial about using Remix for Astar deployment [here](../astar_features/use_remix.md).
+
+![Untitled](mintable-xc20-cookbook/Untitled%208.png)
+
+After sending the initial funding to the faucet contract via MetaMask, you can successfully request tokens from the faucet now!
+
+![Untitled](mintable-xc20-cookbook/Untitled%209.png)
+
+---
+
+## FAQ
+
+Please feel free to join our [Discord](https://discord.gg/astarnetwork) for technical support.
+
+## Reference
+
+- [ERC20.sol](https://github.com/AstarNetwork/astar-frame/blob/polkadot-v0.9.33/precompiles/assets-erc20/ERC20.sol)
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mint-nfts-crust.md b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mint-nfts-crust.md
new file mode 100644
index 0000000..3551777
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mint-nfts-crust.md
@@ -0,0 +1,341 @@
+# Harnessing Crust Network for NFT Minting: A Developer's Guide
+
+![](https://hackmd.io/_uploads/r12FVSxHn.jpg)
+
+The world of Non-Fungible Tokens (NFTs) has opened up intriguing possibilities for creators and collectors alike. However, stepping into this space might seem daunting, especially when it involves navigating the waters of blockchain technology. In this article, we provide an easy-to-understand, step-by-step guide for new coders interested in creating and managing NFTs via the Astar Network EVM and Crust Network while observing the underlying XCM protocol communication.
+
+- [x] [Demo NFT Minter Website](https://evm-nft-contract-poc-ui.vercel.app/#/)
+- [x] [Demo NFT Minter GUI Repo](https://github.com/AstarNetwork/evm-nft-contract-poc-ui)
+- [x] [Demo NFT Minter Contract Repo](https://github.com/AstarNetwork/evm-nft-contract-poc)
+
+
+## Step 1: Getting Started with an EVM Wallet
+Before diving into NFT creation, you'll need to set up a Web3 wallet. This wallet serves as an interface between traditional web browsers and the Ethereum blockchain, a popular home for NFTs. Wallets such as Metamask, Talisman, and Subwallet are widely used and supported.
+- Begin by connecting your Web3 wallet.
+
+```jsx
+async connect() {
+ const connect = await this.onboard.connectWallet();
+ this.wallet = await this.onboard.connectedWallet;
+
+ if (this.wallet) {
+ this.provider = new ethers.providers.Web3Provider(this.wallet.provider);
+ this.signer = this.provider.getSigner();
+ this.address = await this.signer.getAddress();
+ this.set();
+ }
+
+ return { connect };
+}
+```
+
+While there is a little more to it than that, take a look at [this article](https://astar.network/blog/one-small-piece-of-code-one-giant-leap-for-web3-37760/) for a detailed view of the Onboard wallet provider.
+
+You might have wondered what the `this.set()` call in the snippet above was for. It's one of those things that Onboard simplifies: it sets the chain, Shiden in our example, and it simplifies interaction with the wallet by making suggestions to the user.
+
+```jsx
+set() {
+ this.onboard.setChain({ wallet: "MetaMask", chainId: "0x150" });
+},
+```
+
+## Step 2: Dipping into Digital Signatures
+Now that you're connected, the next step involves signing a message. Why? Crust's IPFS API needs your EVM address's signature for authorized use. Think of it as your unique digital fingerprint, confirming your identity on the blockchain.
+- Go ahead and hit the "Sign" button to sign the message.
+
+**Pro tip**: there are many different ways to sign a message depending on the framework. Here are two I have used for this project, first in Vue.js, then in Node.js.
+
+```jsx
+async sign() {
+ this.sig = await this.signer.signMessage(this.address);
+}
+```
+
+```js
+async function sign(address) {
+ return hre.network.provider.send(
+ "eth_sign",
+ [address, ethers.utils.hexlify(ethers.utils.toUtf8Bytes(address))]
+ )
+}
+```
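+
+Once you have the signature, it is combined with your address into an HTTP Basic auth header for Crust's IPFS gateway, as you will see in the upload code in the next step. A minimal sketch (the address and signature below are illustrative placeholders):
+
+```javascript
+// Crust's gateway expects Basic auth of the form base64("eth-<address>:<signature>").
+function crustAuthHeader(address, sig) {
+  const raw = `eth-${address}:${sig}`;
+  return "Basic " + Buffer.from(raw).toString("base64");
+}
+
+console.log(crustAuthHeader("0xabc", "0xdef")); // Basic ZXRoLTB4YWJjOjB4ZGVm
+```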
+
+## Step 3: Uploading to IPFS - Your First Big Step
+With your signature in place, it's time to upload your image and metadata file to the IPFS network. This decentralized network ensures that your data remains accessible and secure.
+- To upload your files, simply select the "IPFS" button.
+
+In the code below, you can see the signature being used to build the auth header, which is then passed when creating the IPFS client. Files are added with `ipfs.add`, and their status is fetched with `ipfs.files.stat`.
+
+And just like that, we already know what our `tokenURI` will be so let's pin that!
+
+```jsx
+async ipfs() {
+ const tokenId = await this.getNextTokenId();
+ const now = Date.now();
+
+ const metadata = {
+ name: `ShidenCrust Singulars #${tokenId}`,
+ description:
+ "This is the POC collection of NFTs on Shiden with the metadata and image stored on Crust",
+ image: "",
+ edition: tokenId,
+ date: now,
+ creator: "Greg Luneau from Astar",
+ attributes: [
+ { trait_type: "Smart Contract Chain", value: "Shiden.Network" },
+ { trait_type: "Decentralized Cloud Storage", value: "Crust.Network" },
+ { trait_type: "Virtual Machine", value: "EVM" },
+ ],
+ };
+
+ const authHeaderRaw = `eth-${this.address}:${this.sig}`;
+ const authHeader = Buffer.from(authHeaderRaw).toString("base64");
+ const ipfsW3GW = ["https://crustipfs.xyz", "https://gw.crustfiles.app"];
+
+  // 1. Create IPFS instance
+ const ipfs = create({
+ url: `${ipfsW3GW[1]}/api/v0`,
+ headers: {
+ authorization: `Basic ${authHeader}`,
+ },
+ });
+
+ // 2. Add files to ipfs
+ const options = {
+ wrapWithDirectory: true,
+ };
+
+ const imageFileDetails = {
+ path: tokenId + ".png",
+ content: await this.image(),
+ };
+
+ const cidImage = await ipfs.add(imageFileDetails, options);
+ metadata.image = `ipfs://${cidImage.cid.toString()}/${
+ imageFileDetails.path
+ }`;
+
+ this.files.push({
+ cid: cidImage.cid.toString(),
+ size: cidImage.size,
+ });
+
+ // 3. Get file status from ipfs
+ const fileStatImage = await ipfs.files.stat(
+ `/ipfs/${cidImage.cid.toString()}/${imageFileDetails.path}`
+ );
+
+ const metadataFileDetails = {
+ path: tokenId + ".json",
+ content: JSON.stringify(metadata),
+ };
+
+ const cidMetadata = await ipfs.add(metadataFileDetails, options);
+
+ this.files.push({
+ cid: cidMetadata.cid.toString(),
+ size: cidMetadata.size,
+ });
+
+  // 4. Get metadata file status from ipfs
+ this.metadatafileStat = await ipfs.files.stat(
+ `/ipfs/${cidMetadata.cid.toString()}/${metadataFileDetails.path}`
+ );
+
+ this.tokenURI = `https://crustipfs.live/ipfs/${cidMetadata.cid.toString()}/${
+ metadataFileDetails.path
+ }`;
+}
+```
+
+There is a little helper function that should not be overlooked. It's a good example of a basic interaction with a smart contract; in this instance, we want to know the latest `tokenId` that was minted.
+
+```jsx
+async getNextTokenId() {
+ const abi = ["function currentTokenId() view returns (uint256)"];
+ const provider = new ethers.providers.Web3Provider(this.wallet.provider);
+ const signer = provider.getSigner();
+ const contract = new ethers.Contract(this.contractAddress, abi, signer);
+
+ let currentTokenId = await contract.currentTokenId();
+ return currentTokenId.add(1).toNumber();
+}
+```
+
+## Step 4: Pinning - Securing Your Data
+Once your files are on the IPFS network, you'll need to pin them. This process anchors your data to the network, ensuring it remains accessible over time. Pinning involves a payment, once for the image and once for the metadata file, and includes an XCM transfer to the Crust Network.
+- To pin your files, click on the "Pin" button.
+
+The code below first fetches your SDN balance, so you can warn the user if there is not enough for the transactions. It then queries the price of storing each file on the Crust Network with `getPrice`, and `placeOrder` places the order and makes the payment.
+
+
+```jsx
+async pin() {
+ // Define StorageOrder contract ABI
+ const StorageOrderABI = [
+ "function getPrice(uint size) public view returns (uint price)",
+ "function placeOrder(string cid, uint64 size) public payable",
+ "function placeOrderWithNode(string cid, uint size, address nodeAddress) public payable",
+ "event Order(address customer, address merchant, string cid, uint size, uint price)",
+ ];
+
+ // Define StorageOrder contract address for Shiden network
+ const StorageOrderAddress = "0x10f15729aEFB5165a90be683DC598070F91367F0";
+
+ // Get signer and provider
+ const provider = new ethers.providers.Web3Provider(this.wallet.provider);
+ const signer = provider.getSigner();
+
+ // Get balance
+ this.balance = await provider.getBalance(this.address);
+ console.log("balance:", ethers.utils.formatEther(this.balance), "SDN");
+
+ // Get prices and place orders for each file
+ for (const file of this.files) {
+ const storageOrder = new ethers.Contract(
+ StorageOrderAddress,
+ StorageOrderABI,
+ signer
+ );
+ const price = await storageOrder.getPrice(file.size);
+ console.log(
+ `Price for file CID ${file.cid} with size ${
+ file.size
+ }: ${ethers.utils.formatEther(price)} SDN`
+ );
+
+ console.log("file.cid, file.size, price", file.cid, file.size, price);
+
+ const txResponse = await storageOrder.placeOrder(file.cid, file.size, {
+ value: price,
+ });
+ const txReceipt = await txResponse.wait();
+ console.log(
+ `File CID ${file.cid} with size ${file.size} pinned successfully!`
+ );
+ console.log(`Transaction hash: ${txReceipt.transactionHash}`);
+ }
+}
+```
+
+If you want to know more about the specific parameters of the Crust Network checkout [their wiki](https://wiki.crust.network/docs/en/buildGettingStarted).
+
+### XCM
+> That's great, but you said it was an XCM transaction?
+
+It is; it's also well hidden. If we peer into the smart contract Crust deployed on Shiden, we can see two `xcmtransactor` function calls: one to transfer SDN to pay for all the pinning fees, and a second to perform the pinning as a remote transaction call.
+
+```solidity
+ function placeOrder(string memory cid, uint64 size) public payable {
+ require(sizeLimit >= size, "Size exceeds the limit");
+
+ uint price = getPrice(size);
+ require(msg.value >= price, "No enough SDN to place order");
+
+ uint256 parachainId = 2012;
+ // Transfer the SDN through XCMP
+ address[] memory assetId = new address[](1);
+ assetId[0] = SDN_ADDRESS;
+ uint256[] memory assetAmount = new uint256[](1);
+ assetAmount[0] = preSendAmount;
+ uint256 feeIndex = 0;
+ xcmtransactor.assets_reserve_transfer(assetId, assetAmount, corrAddress, false, parachainId, feeIndex);
+
+ // Place cross chain storage order
+ uint256 feeAmount = preSendAmount / 10;
+ uint64 overallWeight = 8000000000;
+ // cid: HiMoonbaseSC, size: 1024
+ bytes memory callData = buildCallBytes(cid, size);
+ xcmtransactor.remote_transact(
+ parachainId,
+ false,
+ SDN_ADDRESS,
+ feeAmount,
+ callData,
+ overallWeight
+ );
+ }
+```
+
+
+## Step 5: Minting Your First NFT
+With your files securely pinned, you're ready to mint your NFT. This is where the magic happens - your digital asset becomes a unique, blockchain-verified NFT!
+- To mint your NFT, simply hit the "Mint" button.
+
+This is where we mint our marvelous new ShidenCrust Singular NFT using the `tokenURI` we made earlier. The code below also shows how to retrieve the official `tokenId` from the transaction receipt, and how to retrieve the `tokenURI` for a given `tokenId`.
+
+```jsx
+async mint() {
+ // Get signer and provider
+ const provider = new ethers.providers.Web3Provider(this.wallet.provider);
+ const signer = provider.getSigner();
+
+ const contract = new ethers.Contract(
+ this.contractAddress,
+ this.FactoryNFT.abi,
+ signer
+ );
+
+ // Mint the NFT
+ const txResponse = await contract.mintNFT(this.tokenURI);
+ const txReceipt = await txResponse.wait();
+ const [transferEvent] = txReceipt.events;
+ const { tokenId } = transferEvent.args;
+ console.log("NFT minted successfully!");
+ console.log(`NFT tokenId: ${tokenId}`);
+
+ const tokenURIonchain = await contract.tokenURI(tokenId);
+ console.log("tokenURI", tokenURIonchain);
+ this.tofuURL = `https://tofunft.com/nft/shiden/${this.contractAddress}/${tokenId}`;
+}
+```
+
+## Step 6: Admiring Your NFT
+You've done it - you've created your first NFT! To view your NFT, head to the tofuNFT.com marketplace. Do keep in mind that it might take a minute or two for your NFT to appear.
+- To view your NFT, click on the "View" button.
+
+Well, that's it! The link to a viewer for the NFT was built in the last line of the mint function above. It's that simple.
+
+![](https://hackmd.io/_uploads/B1PwYBlBn.png)
+
+## But wait, there is more!
+
+Let's go back in time to know how this mighty contract was deployed!
+## Step 0: Deploying the NFT Factory
+
+The NFT Factory is a smart contract that serves as the foundation for our NFT creation process.
+
+Deploying an NFT Factory may seem intimidating if you're new to the world of blockchain and smart contracts, but don't worry, I've broken it down into manageable steps in [this github repo](https://github.com/AstarNetwork/evm-nft-contract-poc) for reference.
+
+The script below deploys the contract. Don't forget to save the deployed address (that's what was inside `this.contractAddress`); you'll need it for the interactions we've done above.
+
+```js
+// deploy.js
+
+async function main() {
+ // Load the contract and the provider
+ const [signer] = await ethers.getSigners();
+ console.log("Deploying contract with account:", signer.address);
+
+ const FactoryNFT = await ethers.getContractFactory("FactoryNFT");
+ const factoryNFT = await FactoryNFT.deploy(); //deploying the contract
+
+ await factoryNFT.deployed(); // waiting for the contract to be deployed
+
+ console.log("FactoryNFT deployed to:", factoryNFT.address); // Returning the contract address
+}
+
+main()
+ .then(() => process.exit(0))
+ .catch((error) => {
+ console.error(error);
+ process.exit(1);
+ });
+
+```
+
+---
+## And there you have it
+a step-by-step beginner's guide to creating and managing NFTs using the Astar Network EVM and Crust Network with XCM. As you become more comfortable with these steps, you'll be well on your way to exploring the exciting and innovative world of NFTs. Welcome to the frontier of digital creation!
+
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 1.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 1.png
new file mode 100644
index 0000000..ac45be5
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 1.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 2.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 2.png
new file mode 100644
index 0000000..a3fd539
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 2.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 3.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 3.png
new file mode 100644
index 0000000..82735cb
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 3.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 4.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 4.png
new file mode 100644
index 0000000..d0c9f00
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 4.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 5.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 5.png
new file mode 100644
index 0000000..3fe6fd2
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 5.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 6.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 6.png
new file mode 100644
index 0000000..636c660
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 6.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 7.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 7.png
new file mode 100644
index 0000000..6280fc9
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 7.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 8.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 8.png
new file mode 100644
index 0000000..c680218
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 8.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 9.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 9.png
new file mode 100644
index 0000000..a3d4670
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled 9.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled.png b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled.png
new file mode 100644
index 0000000..7becac2
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/Untitled.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/img0 b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/img0
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mintable-xc20-cookbook/img0
@@ -0,0 +1 @@
+
diff --git a/docs/build/build-on-layer-1/builder-guides/leverage_parachains/zombienet.md b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/zombienet.md
new file mode 100644
index 0000000..d57a971
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/leverage_parachains/zombienet.md
@@ -0,0 +1,220 @@
+# How to set up a Zombienet for XCM testing
+
+## TL;DR
+
+Zombienet is a testing framework for Substrate-based blockchains, providing a simple CLI tool that allows users to spawn any Substrate-based blockchains including Astar, with the Polkadot relaychain. The assertions used in the tests can include on-chain storage, metrics, logs, and custom javascript scripts that interact with the chain.
+
+To test XCM-related features, there are mainly two options: Rococo or a local Zombienet. But some parachains may not deploy testnets to Rococo, and some XCM-related testing (XC-20 asset registration, HRMP channel opening, etc.) may require `sudo` access to the testnet, which is in the testnet operator's hands. Thus, the ideal choice for XCM testing is a local Zombienet.
+
+In this guide, we will show you how to set up Zombienet, how to spawn and configure a testnet with the latest releases of the Polkadot relaychain and Astar, and how to test XCM-related features with a local Zombienet.
+
+---
+
+## What is Zombienet?
+
+Zombienet is a testing framework for Substrate-based blockchains, providing a simple CLI tool that allows users to spawn any Substrate-based blockchains including Astar, and the Polkadot relaychain. The assertions used in the tests can include on-chain storage, metrics, logs, and custom javascript scripts that interact with the chain.
+
+In this guide, we are setting up a local testnet environment with Polkadot relaychains with our parachains connected.
+
+## What is XCM?
+
+**Cross-Consensus Message Format (XCM)** aims to be a language to communicate ideas between consensus systems. One of Polkadot's promises is interoperability, and XCM is the vehicle through which it will deliver this promise. Simply, it is a standard that allows protocol developers to define the data and origins that their chains can send and receive from, including cross-chain asset transfer between parachains.
+
+---
+
+## Set up Zombienet CLI
+
+In this section, we will set up Zombienet CLI using a binary. You can also set up Zombienet with docker, kubernetes, and more using the guide [here](https://github.com/paritytech/zombienet#requirements-by-provider).
+
+- Let’s create a folder in the root directory to hold the binaries
+
+    ```bash
+    mkdir cookbook-zombienet
+    cd cookbook-zombienet
+    ```
+
+- Go to [Zombienet](https://github.com/paritytech/zombienet/releases) and download the binary built for your local environment.
+  - Please don't forget to replace the release version number in the command line with the latest release.
+ - In this cookbook, we are using [Zombienet v1.3.22](https://github.com/paritytech/zombienet/releases/download/v1.3.22/zombienet-macos)
+- Move the binary to our guide folder.
+
+    ```bash
+    mv ~/downloads/zombienet-macos ~/cookbook-zombienet
+    ```
+
+- Add execution permission to the Zombienet CLI binary file.
+ **Note**: if you are using a Mac, you may need an extra step to configure the security settings:
+ - Go to Apple menu > System Settings > Privacy & Security.
+  - Under security, add the `zombienet-macos` binary file that you just downloaded to the whitelist.
+ - Continue with the following command.
+
+    ```bash
+    chmod +x zombienet-macos
+    ```
+
+- Confirm if the binary is executable in your local environment.
+
+    ```bash
+    ./zombienet-macos --help
+    ```
+
+- When the Zombienet CLI is installed correctly, you should see the following info:
+
+    ```text
+    Usage: zombienet [options] [command]
+
+    Options:
+      -c, --spawn-concurrency  Number of concurrent spawning process to launch, default is 1
+      -p, --provider           Override provider to use (choices: "podman", "kubernetes", "native")
+      -m, --monitor            Start as monitor, do not auto cleanup network
+      -d, --dir                Directory path for placing the network files instead of random temp one (e.g. -d /home/user/my-zombienet)
+      -f, --force              Force override all prompt commands
+      -h, --help               display help for command
+    ```
+
+
+---
+
+### Build the `polkadot` binary file
+
+In order to spawn a testnet with a relaychain and two parachains, we need the corresponding binary files for Polkadot relaychain and Astar Network. In this section, we will build the `polkadot` binary file.
+
+- First, let’s clone the `polkadot` source code
+
+    ```bash
+    git clone https://github.com/paritytech/polkadot
+    ```
+
+- Make sure you have the latest Rust and the support tools installed so that you can compile the `polkadot` source code smoothly.
+
+    ```bash
+    rustup update
+    brew install protobuf
+    ```
+
+- Checkout the latest release (v0.9.34 for now), compile the `polkadot` source code, and build the `polkadot` binary file.
+
+    ```bash
+    cd polkadot
+    git checkout release-v0.9.34
+    cargo build --release
+    ```
+
+- After the compilation, you will have a `polkadot` binary file. Rename the cloned `polkadot` source folder and move the `polkadot` binary to our guide folder.
+
+    ```bash
+    mv ~/cookbook-zombienet/polkadot ~/cookbook-zombienet/polkadot-built
+    mv ~/cookbook-zombienet/polkadot-built/target/release/polkadot ~/cookbook-zombienet
+    ```
+
+---
+
+### Download `astar-collator` binary file
+
+- Download the latest release of the [astar-collator](https://github.com/AstarNetwork/Astar/releases) for macOS or Ubuntu from https://github.com/AstarNetwork/Astar/releases
+
+:::note
+Please make sure you are running macOS or Ubuntu with an appropriate version. For macOS, please use macOS 12.0 or later.
+:::
+
+- Move the binary file to our cookbook folder.
+
+    ```bash
+    mv ~/downloads/astar-collator ~/cookbook-zombienet
+    ```
+
+- Add execution permission to the binary file
+ **Note**: if you are using a Mac, you may need an extra step to configure the security settings:
+ - Go to Apple menu > System Settings > Privacy & Security.
+  - Under security, add the `astar-collator` binary file that you just downloaded to the whitelist.
+ - Continue with the following command.
+
+  ```sh
+ chmod +x ./astar-collator
+ ```
+
+---
+
+### Download the configuration file for Zombienet
+
+In order to spawn the Zombienet, we need a configuration file. We have one ready [here](https://github.com/AstarNetwork/Astar/tree/master/third-party/zombienet), configured for two parachains named `shiden-dev` and `shibuya-dev`, and a relaychain named `rococo-local`.
+
+- Download the configuration file from [here](https://github.com/AstarNetwork/Astar/blob/master/third-party/zombienet/multi_parachains.toml).
+
+  ```sh
+ curl -o multi_parachains.toml https://raw.githubusercontent.com/AstarNetwork/Astar/master/third-party/zombienet/multi_parachains.toml
+ ```
+
+---
+
+### Start the Zombienet with the configuration file
+
+- Start the Zombienet with the configuration file
+
+  ```sh
+ ./zombienet-macos -p native spawn multi_parachains.toml
+ ```
+
+- After starting the Zombienet successfully, you will be able to see the local testnet endpoint and explorer link as shown below:
+
+ ![Untitled](img-zombienet-cookbook/Untitled.png)
+
+---
+
+## Set up cross-chain assets on two parachains
+
+The HRMP channels between `shiden-dev` and `shibuya-dev` are opened as configured in the `multi_parachains.toml` configuration file.
+
+To proceed to the next step of XCM testing, we only need to register the native asset of each chain on the other, so it can be used to pay for XCM execution on the remote chain.
+
+- Go to [Polkadot.JS Explorer](https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:51931#/explorer) (or the link specified in `Direct Link` of `collator1` )
+  - Click `Developer → Extrinsics → Decode` and input the following call data to register `xcSDN` on `shibuya-dev`. Make sure to submit the extrinsic via `Alice`'s account, which has enough `SBY` balance.
+
+  ```text
+ 0x63000b02102401910100d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d0102093d002410910114786353444e14786353444e12003600010101005d1f91013601010101005d1f02286bee
+ ```
+
+- Go to [Polkadot.JS Explorer](https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:51934#/explorer) (or the link specified in `Direct Link` of `collator2` )
+  - Click `Developer → Extrinsics → Decode` and input the following call data to register `xcSBY` on `shiden-dev`. Make sure to submit the extrinsic via `Alice`'s account, which has enough `SDN` balance.
+
+  ```text
+ 0x63000b02102401210300d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d0102093d00241021031478635342591478635342591200360001010100411f2103360101010100411f02286bee
+ ```
+
+---
+
+## Execute a simple remote call from Shiden to Shibuya
+
+In this section, we will create an XCM remote call that will send an instruction from `shiden-dev` to execute `System::remark_with_event` on `shibuya-dev`.
+
+We will explain how to create a remote execution call and how the derived account works in another guide.
+
+- Send some `SBY` to Alice’s derived account on **Shibuya** - `5Cvcv8RvSsp6go2pQ8FRXcGLAzNp5eyC8Je7KLHz5zFwuUyT` to pay for the gas fee of executing `System::remark_with_event`.
+ - The remote call won’t be executed via Alice's account on Shibuya directly, but with a new derived account. Thus, we need to send `SBY` to the derived account.
+
+ ![Untitled](img-zombienet-cookbook/Untitled%201.png)
+
+- Initiate the remote call by inputting the following call data in **Shiden’s** `Developer → Extrinsics → Decode`.
+
+  ```text
+ 0x330001010100411f021400040000000013000064a7b3b6e00d130000000013000064a7b3b6e00d00060102286bee140a0808abba140d010004000101002611a3b92e2351f8b6c98b7b0654dc1daab45b2619ea357a848d4fe2b5ae1863
+ ```
+
+- After 2 blocks, you will be able to observe the executed `System::remark_with_event` in **Shibuya’s** explorer under the recent blocks.
+
+ ![Untitled](img-zombienet-cookbook/Untitled%202.png)
+
+ ![Untitled](img-zombienet-cookbook/Untitled%203.png)
+
+
+---
+
+## FAQ
+
+Please join our [Discord](https://discord.com/invite/Z3nC9U4) for technical support.
+
+## Reference
+
+- [Zombienet](https://github.com/paritytech/zombienet)
+- [Astar Documentation](https://docs.astar.network/docs/xcm/integration/zombienet-testing)
+- [Bruno Galvao](https://hackmd.io/@brunopgalvao/S1Ilj5zA5)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/_category_.json b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/_category_.json
new file mode 100644
index 0000000..9c43dcb
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "XVM and Wasm",
+ "position": 2
+}
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/01.png b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/01.png
new file mode 100644
index 0000000..b60703f
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/01.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/02.png b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/02.png
new file mode 100644
index 0000000..546c24e
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/02.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/03.png b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/03.png
new file mode 100644
index 0000000..7f8e2fa
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/03.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/04.png b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/04.png
new file mode 100644
index 0000000..e5df9e2
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/04.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/05.png b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/05.png
new file mode 100644
index 0000000..ed11ddf
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/05.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/06.png b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/06.png
new file mode 100644
index 0000000..94e9a0c
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/06.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/07.png b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/07.png
new file mode 100644
index 0000000..883de65
Binary files /dev/null and b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/img/07.png differ
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/manage_psp22_asset.md b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/manage_psp22_asset.md
new file mode 100644
index 0000000..18f5726
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/manage_psp22_asset.md
@@ -0,0 +1,87 @@
+---
+sidebar_position: 2
+---
+
+# Create and manage a PSP22 asset on Shibuya
+## TL;DR
+This guide will help you create and manage your PSP22 assets.
+
+---
+
+## What is a PSP22 asset?
+The PSP22 Fungible Token standard was inspired by ERC20. It targets every parachain that integrates `pallet-contracts` to enable Wasm smart contracts. Because it is defined at the ABI level, any language that compiles to Wasm (not only ink!) can use it. PSP22 is to Polkadot what ERC20 is to Ethereum.
+
+## Create a PSP22 contract
+In this guide, we will use [OpenBrush](https://openbrush.io/) and their contract studio to build our PSP22 contract. OpenBrush contract studio is the fastest and easiest way to create your smart contract. It allows you to add extensions that will fit your needs for your asset.
+
+![01](img/01.png)
+
+### Extensions:
+
+- **Metadata**: this allows you to enter the metadata of your asset during deployment.
+- **Mintable**: allows you to create an `amount` of tokens and assign them to the `account`, increasing the total supply.
+- **Burnable**: allows token holders to destroy both their own tokens and those they have an allowance for.
+- **Wrapper**: allows you to wrap your PSP22 token in a PSP22Wrapper token which can be used, for example, in governance.
+- **FlashMint**: allows the user to perform a flash loan on the token by minting the borrowed amount and then burning it along with fees for the loan.
+- **Pausable**: allows you to pause all token operations.
+- **Capped**: allows you to enforce a supply cap, analogous to ERC20Capped.
+
+Not available in the contract studio, but another useful utility is the [TokenTimelock](https://docs.openbrush.io/smart-contracts/psp22/utils/token-timelock): it locks users' `PSP22` tokens until a specified time, after which they can withdraw them.
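+
+The behavior that a few of these extensions add on top of a plain balance ledger can be sketched in ordinary Rust. This is not OpenBrush or ink! code; it only illustrates the Mintable, Burnable, and Capped semantics described above, with hypothetical names:
+
+```rust
+use std::collections::HashMap;
+
+// Illustration only: a minimal ledger with Mintable, Burnable, and Capped behavior.
+struct Token {
+    total_supply: u128,
+    cap: u128,
+    balances: HashMap<String, u128>,
+}
+
+impl Token {
+    fn new(cap: u128) -> Self {
+        Token { total_supply: 0, cap, balances: HashMap::new() }
+    }
+
+    // Mintable: create `amount` tokens for `account`, increasing total supply,
+    // but (Capped) never beyond the configured cap.
+    fn mint(&mut self, account: &str, amount: u128) -> Result<(), &'static str> {
+        if self.total_supply + amount > self.cap {
+            return Err("cap exceeded");
+        }
+        *self.balances.entry(account.to_string()).or_insert(0) += amount;
+        self.total_supply += amount;
+        Ok(())
+    }
+
+    // Burnable: holders destroy their own tokens, decreasing total supply.
+    fn burn(&mut self, account: &str, amount: u128) -> Result<(), &'static str> {
+        let balance = self.balances.entry(account.to_string()).or_insert(0);
+        if *balance < amount {
+            return Err("insufficient balance");
+        }
+        *balance -= amount;
+        self.total_supply -= amount;
+        Ok(())
+    }
+}
+
+fn main() {
+    let mut token = Token::new(1_000);
+    token.mint("alice", 600).unwrap();
+    assert!(token.mint("bob", 500).is_err()); // 600 + 500 would exceed the cap
+    token.burn("alice", 100).unwrap();
+    assert_eq!(token.total_supply, 500);
+}
+```
+
+In the real OpenBrush contract these checks are generated for you when you tick the corresponding extension in the contract studio.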
+
+---
+
+## Compile your PSP22 contract
+When you decide on your PSP22 contract, you can download all files needed to compile by clicking on the ‘Download’ button on the top right. After downloading, unzip the files.
+:::caution
+Make sure your environment is set to compile ink! smart contract. If your environment is not set, follow the guide [here](https://docs.astar.network/docs/builder-guides/xvm_wasm/setup_your_ink_environment).
+:::
+### Step 1
+You can now open your Terminal and navigate to the folder with the downloaded files.
+
+![02](img/02.png)
+
+### Step 2
+Next is to compile your smart contract by using this line:
+
+```sh
+cargo +nightly contract build
+```
+When compiling is finished, you should see the following screen, where three files are created: a `.contract` bundle, a `.wasm` binary, and a JSON metadata file.
+
+![03](img/03.png)
+You can find the files in your folder under `target > ink`.
+
+---
+
+## Deploy your PSP22 contract on Shibuya
+The Astar ecosystem has three networks: Astar, our mainnet, connected to Polkadot; Shiden, our canary network; and Shibuya, our testnet. Deploying and using your contract is the same on all our networks.
+
+### Step 1
+Go to our testnet [Shibuya](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.shibuya.astar.network#/accounts). In this guide, we will use [Polkadot.JS](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.shibuya.astar.network#/accounts), but you can also use the contract UI or our Swanky all-in-one tool.
+
+:::caution
+Make sure you have an account on Shibuya with testnet tokens. You can get your testnet tokens through our faucet.
+:::
+Navigate to the contract dashboard `Developer > Contracts`:
+
+![04](img/04.png)
+
+### Step 2
+We will now upload our contract and set the initial state. The PSP22 contract used in this guide has the metadata extension added. If you didn’t add this, you will not see the same screen.
+
+![05](img/05.png)
+
+By adding the metadata extension, we can now set all the information for your asset. To finish, click on ‘Deploy’ and ‘Sign’ your message.
+
+![06](img/06.png)
+
+When deployed, your new contract will be visible with your other contracts.
+
+![07](img/07.png)
+
+---
+
+## Reference
+
+- [Ink! official documentation](https://use.ink/)
+- [OpenBrush](https://openbrush.io/)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/builder-guides/xvm_wasm/pseudo_random.md b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/pseudo_random.md
new file mode 100644
index 0000000..4f88900
--- /dev/null
+++ b/docs/build/build-on-layer-1/builder-guides/xvm_wasm/pseudo_random.md
@@ -0,0 +1,70 @@
+---
+sidebar_position: 3
+---
+
+# How to Generate Pseudo-Random Numbers in Ink! Smart Contract
+Generating random numbers is an essential feature of many decentralized applications, but generating truly random numbers in a trustless, decentralized environment is a challenging problem. In this article, we will show you how to implement a simple pseudo-random function in your Ink! smart contract and generate pseudo-random numbers within a specified range.
+
+## **Implementation**
+
+First, create a new Ink! smart contract and modify the **`PseudoRandom`** struct to include the **`salt`** variable. The **`salt`** will be incremented by 1 each time the **`get_pseudo_random`** function is called.
+
+```rust
+#[ink(storage)]
+pub struct PseudoRandom {
+ salt: u64,
+}
+```
+
+Then, update the **`get_pseudo_random`** function to take an input parameter for the maximum value in the range, and to return a number between 0 and the maximum value in the range using the following code:
+
+```rust
+use ink::env::hash;
+use ink::prelude::vec::Vec;
+
+#[ink(message)]
+pub fn get_pseudo_random(&mut self, max_value: u8) -> u8 {
+ let seed = self.env().block_timestamp();
+ let mut input: Vec<u8> = Vec::new();
+ input.extend_from_slice(&seed.to_be_bytes());
+ input.extend_from_slice(&self.salt.to_be_bytes());
+ let mut output = <hash::Keccak256 as hash::HashOutput>::Type::default();
+ ink::env::hash_bytes::<hash::Keccak256>(&input, &mut output);
+ self.salt += 1;
+ let number = output[0] % (max_value + 1);
+ number
+}
+```
+
+This function generates a hash value that is based on the block timestamp and the incremented **`salt`** value. The **`max_value`** parameter is used to determine the maximum value in the range. The modulo operator **`% (max_value + 1)`** is then used to return a number between 0 and the maximum value in the range.
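+
+The final modulo step can be checked in isolation. The sketch below reproduces the `output[0] % (max_value + 1)` mapping in plain Rust, outside of ink! (the `u16` arithmetic is only there to avoid overflow in the standalone sketch when `max_value` is 255):
+
+```rust
+// Reduce one hash byte (0..=255) into the range 0..=max_value,
+// mirroring `output[0] % (max_value + 1)` from the contract above.
+fn to_range(hash_byte: u8, max_value: u8) -> u8 {
+    ((hash_byte as u16) % (max_value as u16 + 1)) as u8
+}
+
+fn main() {
+    assert_eq!(to_range(255, 99), 55); // 255 % 100 = 55
+    assert_eq!(to_range(42, 9), 2);    // 42 % 10 = 2
+    // Every possible byte lands inside the requested range.
+    for byte in 0..=255u16 {
+        assert!(to_range(byte as u8, 99) <= 99);
+    }
+}
+```
+
+Note that plain modulo introduces a slight bias toward smaller numbers whenever 256 is not a multiple of `max_value + 1`; for non-critical use cases this is usually acceptable.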
+
+## **Usage**
+
+To generate a pseudo-random number within a specified range, simply call the **`get_pseudo_random`** function with the maximum value in the range as the input parameter. For example, to generate a number between 0 and 99, you would call the function with a **`max_value`** of 99:
+
+```rust
+let mut my_contract = PseudoRandom::new();
+let random_number = my_contract.get_pseudo_random(99);
+```
+
+### **Example Unit Test**
+
+To ensure that the **`get_pseudo_random`** function works as expected, you can write a unit test that calls the function with different **`max_value`** parameters and checks that the generated random numbers are within the expected range. Here's an example unit test that you can add to your Ink! smart contract:
+
+```rust
+#[test]
+fn test_get_pseudo_random() {
+ let mut contract = PseudoRandom::new();
+ for max_value in 1..=100 {
+ let result = contract.get_pseudo_random(max_value);
+ assert!(result <= max_value);
+ }
+}
+```
+
+## **Conclusion**
+
+By implementing a pseudo-random function in your Ink! smart contract, you can generate pseudo-random numbers within a specified range in a decentralized and trustless environment. However, it is important to note that the **`get_pseudo_random`** function does not provide the same level of security and trust as a true verifiable random function (VRF).
+
+While the function uses the block timestamp and a salt value to generate a hash value, which is then used to generate a pseudo-random number, it may still be possible for an attacker to predict the output of the function. Additionally, this implementation may not be suitable for applications that require high levels of security, such as gambling or financial applications.
+
+If you require a truly verifiable and secure random function for your smart contract, you may want to consider using an external oracle solution or a specialized random number generator that is specifically designed for use in smart contracts.
diff --git a/docs/build/build-on-layer-1/environment/_category_.json b/docs/build/build-on-layer-1/environment/_category_.json
new file mode 100644
index 0000000..0f43ced
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Build Environment",
+ "position": 2
+}
diff --git a/docs/build/build-on-layer-1/environment/chopsticks.md b/docs/build/build-on-layer-1/environment/chopsticks.md
new file mode 100644
index 0000000..db82856
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/chopsticks.md
@@ -0,0 +1,137 @@
+---
+sidebar_position: 7
+---
+
+# Chopsticks & E2E Tests
+
+:::note
+Create parallel realities of our Substrate networks.
+:::
+
+Forking a live blockchain at any block with the corresponding memory and state offers several benefits for developers, users, and the blockchain ecosystem as a whole. This capability allows for more accurate testing of transactions and upgrades, which can lead to better decision-making and more robust blockchain networks. Key benefits include:
+
+1. Improved security: By forking a live blockchain, developers can test proposed changes, identify vulnerabilities, and address potential security threats in a controlled environment, reducing the risk of attacks on the main network.
+
+2. Enhanced performance: Testing transactions and upgrades on a forked version of the blockchain enables developers to analyze the performance implications of their changes, such as transaction throughput and latency.
+
+3. Reduced downtime: Forking a live blockchain for testing purposes can help minimize network downtime during upgrades or other maintenance activities, as it allows developers to thoroughly test and validate changes before implementation on the main network.
+
+4. Increased user confidence: Users can feel more confident in the stability and reliability of the blockchain network when they know that proposed changes have been rigorously tested in an environment that closely mirrors the live network.
+
+5. Facilitated innovation: Forking a live blockchain provides a space for experimentation, allowing developers to test new features, protocols, and consensus algorithms without disrupting the main network.
+
+6. Streamlined consensus-building: Forking a live blockchain enables developers to present real-world test results to stakeholders, which can help build consensus for proposed changes.
+
+7. Simplified debugging: Debugging transactions and smart contracts on a forked version of the blockchain allows developers to isolate and address issues more easily, ensuring that only well-vetted code is introduced to the main network.
+
+## Chopsticks
+
+Setting up a parallel reality of our networks is easy with Chopsticks.
+
+This documentation focuses on the use of chopsticks with the Astar networks.
+For more details on the options and settings, refer to the [Chopsticks repository readme file](https://github.com/AcalaNetwork/chopsticks)
+
+### Dev mode
+
+You can run a forked version of the network at any block using this command.
+
+```sh
+npx @acala-network/chopsticks@latest -c astar
+```
+
+or
+
+```sh
+npx @acala-network/chopsticks@latest -c shiden
+```
+
+and simply examine the local setup by connecting Polkadot.JS (PJS) to it.
+
+
+### XCM mode
+
+You can also run continuous modes for XCM development with HRMP channels.
+
+Just Astar connected to Shiden:
+
+```sh
+npx @acala-network/chopsticks@latest xcm -p astar -p shiden
+```
+
+Or with a relaychain like so:
+
+```sh
+npx @acala-network/chopsticks@latest xcm -r polkadot -p astar -p statemint
+```
+
+#### Specific block
+
+You can specify a block number on the CLI when using the `-c` format, but not the XCM CLI format.
+
+```sh
+npx @acala-network/chopsticks@latest -c astar -b 3500000
+```
+
+To use a specific block number in the XCM mode, you'll need to download the `.yml` file and modify the block number within.
+
+```yml
+endpoint: wss://astar.api.onfinality.io/public-ws
+mock-signature-host: true
+block: 3600000
+...
+```
+
+Then just start your XCM setup as usual.
+
+```sh
+npx @acala-network/chopsticks@latest xcm -p astar.yml
+```
+
+#### Creating blocks
+
+To create new blocks, you'll need to connect to the WebSocket port of that node. You can use `wscat` for this; install it if your system does not have it, for example with `sudo apt-get install wscat`.
+
+```sh
+wscat -c ws://127.0.0.1:8000 -x '{ "jsonrpc": "2.0", "id": 1, "method": "dev_newBlock", "params": [{"count": 100}] }'
+```
+
+You can also do it with the WsProvider:
+
+```js
+const { WsProvider } = require("@polkadot/api");
+
+async function main() {
+ const provider = new WsProvider("ws://localhost:8000");
+ await provider.isReady;
+ await provider.send("dev_newBlock", [{ count: 10 }]);
+ console.log("10 new blocks created");
+}
+
+main()
+ .catch(console.error)
+ .finally(() => process.exit());
+```
+
+## Config Settings
+
+In the short form of the parachain/relaychain CLI parameter, the configs are pulled from the GitHub repo on demand. For example, here is [Astar's](https://github.com/AcalaNetwork/chopsticks/blob/master/configs/astar.yml) config. You can always download it locally and modify its content to suit your needs. Note that the configs already fund Alice with:
+- 100k of ASTR, DOT & USDT on Polkadot, and 100k of SDN, KSM & USDT on Kusama
+
+## E2E Tests
+
+End-to-end (E2E) tests offer numerous benefits: they enable developers to accurately assess the security, performance, and scalability of proposed changes in a controlled environment that closely mirrors the live network. This approach fosters innovation, simplifies debugging, and streamlines consensus-building among stakeholders. Ultimately, it contributes to more stable, reliable, and efficient blockchain networks, increasing user confidence and promoting long-term success.
+
+These tests use Chopsticks to do end-to-end testing and validate results against previous test runs.
+
+```sh
+git clone git@github.com:AcalaNetwork/e2e-tests.git
+cd e2e-tests
+yarn
+yarn test ./tests/xcm-transfer/kusama-relay.test.ts
+```
+
+or, for more verbose logging when developing tests, use the playground:
+
+```sh
+yarn vitest --inspect playground --single-thread
+```
diff --git a/docs/build/build-on-layer-1/environment/dev-container.md b/docs/build/build-on-layer-1/environment/dev-container.md
new file mode 100644
index 0000000..73cf465
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/dev-container.md
@@ -0,0 +1,8 @@
+# Swanky Dev Container
+
+
+Together with the `swanky-cli` tool, you can have the entire environment preinstalled and preconfigured inside a Docker container.
+
+To use this setup, you need Visual Studio Code and a running Docker engine.
+
+Detailed instructions on how to configure and utilize the Dev Container can be found on the [swanky-dev-container Github](https://github.com/AstarNetwork/swanky-dev-container).
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/environment/endpoints.md b/docs/build/build-on-layer-1/environment/endpoints.md
new file mode 100644
index 0000000..e93301d
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/endpoints.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 2
+---
+
+# Network RPC endpoints
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+## What is an RPC Node?
+* RPC nodes can be queried for information and used to initiate transactions.
+* The purpose of RPC nodes is to allow decentralized applications and clients to communicate with a blockchain network.
+* RPC nodes listen for requests and respond with the necessary data, or execute the requested transaction.
+* Common uses of RPC nodes include querying the blockchain for information, sending transactions, and executing smart contract functions.
+* RPC nodes are essential for decentralized applications to function and interact with a blockchain network, enabling decentralized exchange and other use cases.
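+
+As a concrete illustration, an RPC request is just a small JSON payload sent over HTTPS or WebSocket to one of the endpoints listed below. This sketch uses the standard Ethereum `eth_chainId` JSON-RPC method; sending it to Astar's public EVM endpoint would return the chain ID shown in the table below:
+
+```javascript
+// A standard Ethereum JSON-RPC request body. POSTing it to
+// https://evm.astar.network would return Astar's chain ID.
+const payload = {
+  jsonrpc: "2.0",
+  id: 1,
+  method: "eth_chainId",
+  params: [],
+};
+console.log(JSON.stringify(payload));
+
+// Astar's chain ID is 592, which a node returns hex-encoded as "0x250".
+console.log(parseInt("0x250", 16)); // 592
+```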
+
+## Public Endpoints
+
+:::info
+The free endpoints below are dedicated to end users; they can be used to interact with dApps or deploy/call smart contracts.
+They limit the rate of API calls, so they are not suitable for high-demand use cases, such as a dApp UI constantly scraping blockchain data, or an indexer.
+:::
+:::tip
+To meet the demands of a production dApp you can run an [archive node](/docs/build/build-on-layer-1/nodes/archive-node/index.md) **or** get your own API key from one of our [infrastructure partners](/docs/build/build-on-layer-1/integrations/node-providers/index.md).
+:::
+
+
+
+
+| | Public endpoint Astar |
+| --- | --- |
+| Network | Astar |
+| Parent chain | Polkadot |
+| ParachainID | 2006 |
+| HTTPS | Astar Team: https://evm.astar.network |
+| | Alchemy: Get started [here](https://www.alchemy.com/astar) |
+| | BlastAPI: https://astar.public.blastapi.io |
+| | Dwellir: https://astar-rpc.dwellir.com |
+| | OnFinality: https://astar.api.onfinality.io/public |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| | Automata 1RPC: https://1rpc.io/astr, get started [here](https://www.1rpc.io) |
+| Websocket | Astar Team: wss://rpc.astar.network |
+| | Alchemy: Get started [here](https://www.alchemy.com/astar) |
+| | BlastAPI: wss://astar.public.blastapi.io |
+| | Dwellir: wss://astar-rpc.dwellir.com |
+| | OnFinality: wss://astar.api.onfinality.io/public-ws |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| | Automata 1RPC: wss://1rpc.io/astr, get started [here](https://www.1rpc.io) |
+| chainID | 592 |
+| Symbol | ASTR |
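+
+The values in this table map directly onto the EIP-3085 `wallet_addEthereumChain` payload a dApp would use to add Astar to a browser wallet such as MetaMask (a sketch; the RPC URL and chain parameters come from the table above):
+
+```javascript
+// EIP-3085 parameters for adding Astar to a browser wallet.
+// chainId must be hex-encoded: 592 -> "0x250".
+const astarChain = {
+  chainId: "0x" + (592).toString(16),
+  chainName: "Astar",
+  nativeCurrency: { name: "Astar", symbol: "ASTR", decimals: 18 },
+  rpcUrls: ["https://evm.astar.network"],
+};
+console.log(astarChain.chainId); // "0x250"
+
+// In a dApp (browser only) it would be sent like this:
+// await window.ethereum.request({ method: "wallet_addEthereumChain", params: [astarChain] });
+```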
+
+
+
+
+
+| | Public endpoint Shiden |
+| --- | --- |
+| Network | Shiden |
+| Parent chain | Kusama |
+| ParachainID | 2007 |
+| HTTPS | Astar Team: https://evm.shiden.astar.network |
+| | BlastAPI: https://shiden.public.blastapi.io |
+| | Dwellir: https://shiden-rpc.dwellir.com |
+| | OnFinality: https://shiden.api.onfinality.io/public |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| Websocket | Astar Team: wss://rpc.shiden.astar.network |
+| | BlastAPI: wss://shiden.public.blastapi.io |
+| | Dwellir: wss://shiden-rpc.dwellir.com |
+| | OnFinality: wss://shiden.api.onfinality.io/public-ws |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| chainID | 336 |
+| Symbol | SDN |
+
+
+
+
+
+
+
+| | Public endpoint Shibuya |
+| --- | --- |
+| Network | Shibuya (parachain testnet) |
+| Parent chain | Tokyo relay chain (hosted by Astar Team) |
+| ParachainID | 1000 |
+| HTTPS | Astar Team: https://evm.shibuya.astar.network (only EVM/Ethereum RPC available) |
+| | BlastAPI: https://shibuya.public.blastapi.io |
+| | Dwellir: https://shibuya-rpc.dwellir.com |
+| Websocket | Astar Team: wss://rpc.shibuya.astar.network |
+| | BlastAPI: wss://shibuya.public.blastapi.io |
+| | Dwellir: wss://shibuya-rpc.dwellir.com |
+| chainID | 81 |
+| Symbol | SBY |
+
+
+
+
+
+| | Public endpoint zKatana |
+| --- | --- |
+| Network | zKatana (zkEVM testnet) |
+| Parent chain | Sepolia |
+| ChainID | 1261120 |
+| HTTPS | Startale Labs: https://rpc.startale.com/zkatana |
+| | Gelato: https://rpc.zkatana.gelato.digital |
+| Websocket | Gelato: wss://ws.zkatana.gelato.digital |
+| Symbol | ETH |
+
+
+
diff --git a/docs/build/build-on-layer-1/environment/faucet.md b/docs/build/build-on-layer-1/environment/faucet.md
new file mode 100644
index 0000000..db043de
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/faucet.md
@@ -0,0 +1,63 @@
+---
+sidebar_position: 3
+---
+
+# Test Tokens
+
+A faucet is the site/place where you can get test tokens. Faucets are available for all Shibuya accounts and empty Astar and Shiden accounts. Use them to make sure your wallet has enough assets to cover the cost of deployment and pay transaction gas.
+
+Let's look at three ways to get SBY for your Shibuya account.
+
+:::info
+This guide will also work for ASTR and SDN assets on Astar and Shiden networks.
+:::
+
+## Astar Portal
+
+To access the faucet visit [the portal](https://portal.astar.network/assets), and click on the `Faucet` button.
+
+![1](img/1.png)
+
+Then, click the `I'm not a robot` checkbox, and click **Confirm**.
+
+![2](img/2.png)
+
+## Astar Discord Server
+
+Once you join the [Discord Server](https://discord.gg/AstarNetwork), you will be able to see the **#shibuya-faucet** channel.
+
+![3](img/3.png)
+
+In the **#shibuya-faucet** channel, please type `/drip`, and click on **network**.
+
+![4](img/4.png)
+
+Select the network.
+
+![5](img/5.png)
+
+Click on **address** and paste your address.
+
+![6](img/6.png)
+![7](img/7.png)
+![8](img/8.png)
+
+If your inputs are valid, you will receive SBY tokens from the faucet.
+
+![9](img/9.png)
+
+
+## Chaindrop Portal
+
+The faucet is currently only available for Shibuya.
+To access the faucet visit the [chaindrop portal](https://chaindrop.org/?chainid=81&token=0xeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee), and select Shibuya from the token list.
+
+![1](img/chaindrop_1.png)
+
+Paste your wallet address in the `beneficiary` field.
+Click the `I'm not a robot` checkbox, to validate the Captcha.
+
+![2](img/chaindrop_2.png)
+
+Click `Send Me` to receive SBY in your wallet; the amount received will be displayed.
+Click on the `success` link to view your transaction on the explorer.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/environment/img/1.png b/docs/build/build-on-layer-1/environment/img/1.png
new file mode 100644
index 0000000..dfc2421
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/1.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/1n.png b/docs/build/build-on-layer-1/environment/img/1n.png
new file mode 100644
index 0000000..17d4ab6
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/1n.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/2.png b/docs/build/build-on-layer-1/environment/img/2.png
new file mode 100644
index 0000000..27d2aa1
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/2.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/2n.png b/docs/build/build-on-layer-1/environment/img/2n.png
new file mode 100644
index 0000000..500bd03
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/2n.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/3.png b/docs/build/build-on-layer-1/environment/img/3.png
new file mode 100644
index 0000000..1b92534
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/3.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/3n.png b/docs/build/build-on-layer-1/environment/img/3n.png
new file mode 100644
index 0000000..4902162
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/3n.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/4.png b/docs/build/build-on-layer-1/environment/img/4.png
new file mode 100644
index 0000000..31265df
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/4.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/5.png b/docs/build/build-on-layer-1/environment/img/5.png
new file mode 100644
index 0000000..ff2a070
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/5.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/6.png b/docs/build/build-on-layer-1/environment/img/6.png
new file mode 100644
index 0000000..d18b670
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/6.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/7.png b/docs/build/build-on-layer-1/environment/img/7.png
new file mode 100644
index 0000000..c4a7678
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/7.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/8.png b/docs/build/build-on-layer-1/environment/img/8.png
new file mode 100644
index 0000000..7fcba7f
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/8.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/9.png b/docs/build/build-on-layer-1/environment/img/9.png
new file mode 100644
index 0000000..17ec4df
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/9.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/chaindrop_1.png b/docs/build/build-on-layer-1/environment/img/chaindrop_1.png
new file mode 100644
index 0000000..6d0eb30
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/chaindrop_1.png differ
diff --git a/docs/build/build-on-layer-1/environment/img/chaindrop_2.png b/docs/build/build-on-layer-1/environment/img/chaindrop_2.png
new file mode 100644
index 0000000..fb96671
Binary files /dev/null and b/docs/build/build-on-layer-1/environment/img/chaindrop_2.png differ
diff --git a/docs/build/build-on-layer-1/environment/index.md b/docs/build/build-on-layer-1/environment/index.md
new file mode 100644
index 0000000..ecf2d61
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/index.md
@@ -0,0 +1,17 @@
+import Figure from "/src/components/figure"
+
+# Set up the Development Environment
+
+
+Knowledge about how to set up various environments is not required before you get started, however, it may be helpful to review the following sections to learn more about the purpose of each Environment, and their specific requirements.
+
+For example, to build and test Wasm smart contracts, an ink! environment with a Swanky node is appropriate. On Layer 2, a different kind of environment is required.
+
+When you are ready to deploy a smart contract to production, you can use the information contained within this section to configure an RPC endpoint.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
diff --git a/docs/build/build-on-layer-1/environment/ink_environment.md b/docs/build/build-on-layer-1/environment/ink_environment.md
new file mode 100644
index 0000000..1013459
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/ink_environment.md
@@ -0,0 +1,151 @@
+---
+sidebar_position: 1
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Ink! Environment
+
+## Overview
+
+This guide is designed for those who are new to ink! or Wasm smart contracts in the Astar ecosystem. Before you begin, ensure your environment supports Rust.
+
+---
+
+## What is ink!
+
+Ink! is a Rust eDSL developed by Parity that specifically targets smart contract development for Substrate's `pallet-contracts`. Ink! does not reinvent a programming language; rather, it adapts a subset of Rust to serve smart contract developers specifically. If this isn't reason enough on its own to convince you to learn more about ink!, you can find many more reasons [here](https://use.ink/why-rust-for-smart-contracts).
+
+A frequently asked question when discussing Wasm is: Why use WebAssembly for smart contracts in the first place? You can find all the answers [here](https://use.ink/why-webassembly-for-smart-contracts).
+
+## Ink! Environment Setup
+
+### Rust and Cargo
+
+Rust and Cargo are prerequisites for compiling Wasm smart contracts. The easiest way to obtain Cargo is by installing the current stable release of [Rust](https://www.rust-lang.org/) using [rustup](https://rustup.rs/). Installing Rust using `rustup` will also install `cargo`. On Linux and macOS systems, you can do that with the following:
+
+```bash
+curl https://sh.rustup.rs -sSf | sh
+# Configure your current shell
+source ~/.cargo/env
+```
+
+This will download a script and start the installation. If you are using Windows, visit the [Rust website](https://www.rust-lang.org/tools/install) and follow the instructions to install Rust. Then configure rustup to use the latest stable release, and add the nightly toolchain and the Wasm target:
+
+```bash
+rustup default stable
+rustup update
+rustup update nightly
+rustup component add rust-src
+rustup component add rust-src --toolchain nightly
+rustup target add wasm32-unknown-unknown --toolchain nightly
+```
+
+:::caution
+Due to a bug in `cargo-contract`, building contracts with **rust nightly 1.70.0 or higher will fail**.
+It is advised to use rustc v1.69.0 or older until the issue is resolved on the `cargo-contract` side.
+For a better dev experience, it is advised to create a [rust-toolchain file](https://rust-lang.github.io/rustup/overrides.html#the-toolchain-file)
+in the root of your project directory with the following values:
+
+```toml
+[toolchain]
+channel = "1.69.0"
+components = [ "rustfmt", "rust-src" ]
+targets = [ "wasm32-unknown-unknown" ]
+profile = "minimal"
+```
+
+See more [here](https://github.com/paritytech/cargo-contract/issues/1058)
+:::
+
+### Ink! [CLI](https://use.ink/getting-started/setup#ink-cli)
+
+The first and most important tool we will be installing is [cargo-contract](https://github.com/paritytech/cargo-contract), a CLI tool for setting up and managing WebAssembly smart contracts written with ink!
+
+As a prerequisite, when using older versions of ink! you may need to install the [binaryen](https://github.com/WebAssembly/binaryen) package, which is used to optimize the WebAssembly bytecode of the contract.
+
+It is available in many package managers, for example [Debian/Ubuntu](https://tracker.debian.org/pkg/binaryen), [Homebrew](https://formulae.brew.sh/formula/binaryen), and [Arch Linux](https://archlinux.org/packages/community/x86_64/binaryen/).
+
+
+
+
+- Using `apt-get`
+
+```sh
+apt-get update
+apt-get -y install binaryen
+```
+
+- Using `apt`
+
+```sh
+apt update
+apt -y install binaryen
+```
+
+
+
+
+
+```sh
+pacman -S binaryen
+```
+
+
+
+
+
+```sh
+brew install binaryen
+```
+
+
+
+
+
+Find binary releases at https://github.com/WebAssembly/binaryen/releases
+
+
+
+
+
+---
+
+Two other dependencies need to be installed in order to lint ink! contracts, for example to warn users about using APIs in a way that could lead to security issues:
+
+```bash
+cargo install cargo-dylint dylint-link
+```
+
+After you've installed the package, execute the following:
+
+```bash
+cargo install cargo-contract --force --locked
+```
+
+Use `--force` to ensure you are updated to the most recent `cargo-contract` version.
+
+If you need to install an older version of `cargo-contract`, add your desired version to the following command:
+
+```bash
+cargo install cargo-contract --force --version 1.5.1
+```
+
+You can then use `cargo contract --help` to start exploring all available commands.
+
+---
+
+## Dev container
+
+The above process can be automated by utilizing a preinstalled and preconfigured dev container.
+
+Detailed instructions about how to use and configure a dev container can be found in the [swanky-dev-container GitHub repository](https://github.com/AstarNetwork/swanky-dev-container).
+
+## References
+
+- [Ink! GitHub repo](https://github.com/paritytech/ink)
+- [Ink! Intro repo](https://paritytech.github.io/ink/)
+- [Ink! Official Documentation](https://use.ink)
+- [Ink! Rust doc](https://paritytech.github.io/ink/ink_lang/)
+- [swanky-dev-container](https://github.com/AstarNetwork/swanky-dev-container)
diff --git a/docs/build/build-on-layer-1/environment/light-client.md b/docs/build/build-on-layer-1/environment/light-client.md
new file mode 100644
index 0000000..85f6692
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/light-client.md
@@ -0,0 +1,115 @@
+---
+sidebar_position: 3
+---
+
+# Light Client node with no RPC
+
+:::note
+Integrate with the Astar networks using a light client.
+:::
+
+This documentation will guide you on how to connect to the Astar Network using a light client (ScProvider) with the Polkadot.js API and Substrate Connect.
+
+Prerequisites:
+- Node.js (https://nodejs.org/en/download/)
+- npm (https://www.npmjs.com/get-npm)
+
+## Overview
+Astar Network is a scalable smart contract platform on Polkadot that supports Ethereum compatibility. Connecting to the Astar Network using a light client (ScProvider) allows you to interact with the network without having to sync the entire blockchain.
+
+1. Initialize a new Node.js project
+Create a new directory for your project and navigate to it in your terminal or command prompt. Initialize a new Node.js project by running:
+
+```bash
+mkdir astar-light-client
+cd astar-light-client
+npm init -y
+```
+
+2. Install required packages
+Install the required packages by running:
+
+```bash
+npm install @polkadot/api @substrate/connect
+```
+
+3. Create the chain specification file
+Create a new directory named `chain-specs` and download the latest Astar chain specification file, `astar.json`, into it from the Astar GitHub repository:
+
+```bash
+mkdir chain-specs
+wget https://raw.githubusercontent.com/AstarNetwork/astar-apps/main/src/config/api/polkadot/chain-specs/astar.json -P chain-specs
+```
+
+4. Create a script to connect to the Astar Network
+Create a new file named `connect-astar.js` in your project directory and paste the following script into it:
+
+```javascript
+const { ApiPromise, ScProvider } = require("@polkadot/api");
+const Sc = require("@substrate/connect");
+
+async function queryInfo(api) {
+ const assetMetadata = await api.query.assets.metadata.entries();
+
+ assetMetadata.map((asset) => {
+ let h = asset[1].toHuman();
+ console.log(JSON.stringify(h));
+ });
+}
+
+async function setup() {
+ const jsonParachainSpecAstar = require("./chain-specs/astar.json");
+ const astarSpec = JSON.stringify(jsonParachainSpecAstar);
+
+ const relayProvider = new ScProvider(Sc, Sc.WellKnownChain.polkadot);
+ const provider = new ScProvider(Sc, astarSpec, relayProvider);
+
+ await provider.connect();
+ const api = await ApiPromise.create({ provider });
+
+ console.log("Connected to Astar Network using ScProvider (light client)");
+ await queryInfo(api);
+ process.exit();
+}
+
+setup();
+```
+
+This script sets up a connection to the Astar Network using a light client (ScProvider) by leveraging Substrate Connect and the Polkadot.js API.
+
+5. Run the script
+Now, you can run the script to connect to the Astar Network:
+
+```bash
+node connect-astar.js
+```
+
+If the connection is successful, the script will output the following message:
+
+```txt
+Connected to Astar Network using ScProvider (light client)
+{"deposit":"86,000,000,000","name":"MochiMochiCoin","symbol":"MMC","decimals":"2","isFrozen":false}
+...
+{"deposit":"0","name":"Liquid DOT","symbol":"LDOT","decimals":"10","isFrozen":false}
+```
+
+Congratulations! You have successfully connected to the Astar Network using a light client (ScProvider) with the Polkadot.js API and Substrate Connect. You can now use the `api` instance to interact with the Astar Network, query data, or submit transactions.
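+
+Since `toHuman()` renders every value as a string inside a JSON line, the printed metadata is easy to post-process. A small sketch (the two entries below are copied from the sample output above):
+
+```javascript
+// Sketch: post-process the asset-metadata lines printed by connect-astar.js.
+// The two entries below are copied from the sample output above; note that
+// toHuman() renders all values, including numbers, as strings.
+const lines = [
+  '{"deposit":"86,000,000,000","name":"MochiMochiCoin","symbol":"MMC","decimals":"2","isFrozen":false}',
+  '{"deposit":"0","name":"Liquid DOT","symbol":"LDOT","decimals":"10","isFrozen":false}',
+];
+
+const assets = lines.map((line) => JSON.parse(line));
+const ldot = assets.find((asset) => asset.symbol === "LDOT");
+console.log(`${ldot.name} has ${ldot.decimals} decimals`); // Liquid DOT has 10 decimals
+```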
+
+## Run it in Replit
+
+```mdx-code-block
+import Iframe from 'react-iframe';
+
+
+```
diff --git a/docs/build/build-on-layer-1/environment/local-network.md b/docs/build/build-on-layer-1/environment/local-network.md
new file mode 100644
index 0000000..97648a8
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/local-network.md
@@ -0,0 +1,80 @@
+---
+sidebar_position: 5
+---
+
+# Running Local Network
+
+Now, let's spin up a local network on a standalone node.
+
+## Get the Latest Binary
+
+You can obtain the latest binary in one of the following ways:
+
+- Download the latest binary from Github.
+- Build it from source.
+
+If you would like to download the binary, visit the [Release page of the Astar GitHub repository](https://github.com/AstarNetwork/Astar/releases). There, you can find pre-built binaries for macOS and Ubuntu, as well as Docker images. If you prefer to build from source, [this readme](https://github.com/AstarNetwork/Astar#building-from-source) can guide you through the process.
+
+After you obtain the binary, you can rename the file to `astar`, and add execution permission by running the following command:
+
+```sh
+chmod +x ./astar
+```
+
+You should then be able to execute the binary. To see whether you can run the node, let's check the binary version.
+
+```sh
+./astar --version
+# astar-collator xxx
+```
+
+## Run the Local Network
+
+You are now ready to run the local network, using the following command:
+
+```sh
+./astar --port 30333 --rpc-port 9944 --rpc-cors all --alice --dev
+```
+
+What this command means:
+
+- Use port 30333 for P2P TCP connection
+- Use port 9944 for WebSocket/HTTP connections
+- Accept any origin for HTTP and WebSocket connections
+- Enable Alice session keys
+- Launch network in development mode
+
+You can see the full list of the command options using the `help` subcommand:
+
+```sh
+./astar help
+# astar-collator xxx
+#
+# Stake Technologies
+# Astar parachain collator
+# ...
+```
+
+When you have successfully launched the local network, you will see the following messages in your terminal:
+
+![1](img/1n.png)
+
+OK! Let's explore your local network now.
+
+## Access Your Local Network via Polkadot.js Apps Portal
+
+Visit the following URL:
+
+
+
+There, you will see the following screen:
+
+![2](img/2n.png)
+
+This represents your local network. In this local network, some native tokens have already been issued to a few accounts. Let's visit the [Account page](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/accounts) from the Accounts Tab.
+
+![3](img/3n.png)
+
+Here, you can see that ALICE and BOB have around 1,000 tokens. In the following section, you will deploy your smart contract and interact with it by paying the transaction fees using these tokens.
+
+In the Polkadot explorer, it's only possible to interact with the Substrate RPC, so to interact with the Ethereum RPC, you will have to use MetaMask.
diff --git a/docs/build/build-on-layer-1/environment/zombienet-testing.md b/docs/build/build-on-layer-1/environment/zombienet-testing.md
new file mode 100644
index 0000000..968a0d1
--- /dev/null
+++ b/docs/build/build-on-layer-1/environment/zombienet-testing.md
@@ -0,0 +1,107 @@
+---
+sidebar_position: 6
+---
+
+# Zombienet
+
+:::note
+This page is intended for developers & builders.
+:::
+
+## The Problem
+
+Finding the right environment for testing XCM features, whether standalone or combined with smart contracts, can be difficult, since it requires at least two parachains with open HRMP channels and a Relay Chain. The parachains need to have the features we rely on in testing.
+
+At the moment, only production networks, like `Astar` or `Shiden`, have access to a great number of HRMP channels with various parachains. Testing in production isn't a good idea, though, and should be avoided. A possible alternative is to use `Rococo` as the Relay Chain together with its connected parachains, if they are available; however, not all projects deploy a testnet on Rococo.
+
+## Zombienet
+
+The approach that gives users the most control is to set up a local test network, with both the Relay Chain and parachain(s). This is simple to do using the [zombienet](https://github.com/paritytech/zombienet) tool provided by Parity. Simply put, users can download arbitrary Relay Chain and parachain binaries (or use images) to set up a configurable local test network. Users have access to all privileged actions on the Relay Chain and on the parachains, which makes testing much easier.
+
+For example, users can download the `polkadot` binary together with `astar-collator` and `cumulus-parachain` to spin up a test network with `polkadot` as the Relay Chain, `astar` as one parachain, and `statemint` as the second parachain.
+
+`Zombienet` documentation can be found [here](https://paritytech.github.io/zombienet/), including installation instructions, CLI usage, a guide with examples, and more. Users are advised to consult this document to get a better understanding of the tool.
+
+## Shibuya - Shiden Test Network
+
+### Overview
+
+A quick reminder - `Shibuya` is a test network with no market value, used by the `Astar & Shiden` team to test features before deploying them into production. It uses a custom **Rococo**-based Relay Chain. `Shiden` is a production canary-type network connected to `Kusama`. These two parachains aren't aware of one another and do not communicate in live networks.
+
+However, using `zombienet`, users can set up a local test network where one parachain is `Shibuya` and the other is `Shiden`, with **HRMP** channels opened between them. This is incredibly useful for testing & integration, because it gives users the option of cross-chain communication between two smart-contract oriented parachains which support Wasm smart contracts, amongst many other features.
+
+The following instructions will explain how to set up & configure a local _Shibuya - Shiden test network_.
+
+### Basic Setup Instructions
+
+For users who already know what they are doing, please check [this](https://github.com/AstarNetwork/Astar/tree/master/third-party/zombienet) folder in the `Astar` repository for _ready-to-use_ `zombienet` configurations.
+
+1. For the sake of simplicity, prepare a folder called `zombienet` into which **ALL** binaries and config files will be placed.
+
+2. Download the `zombienet` binary appropriate for your operating system (or install it in any way you prefer).
+
+3. Download the `polkadot` and `astar-collator` binaries. They can be found as part of the releases in the official [polkadot](https://github.com/paritytech/polkadot/releases) and [Astar](https://github.com/AstarNetwork/Astar/releases) repositories.
+
+4. Use the configuration file `multi_parachains.toml` which can be found [here](https://github.com/AstarNetwork/Astar/tree/master/third-party/zombienet). Make sure to check the file to get a sense of which parameters are being used.
+
+5. Start the network. This is a command example: `./zombienet-linux -p native spawn multi_parachains.toml`. However, you can start the network in your preferred way (e.g. using `podman` or `kubernetes`).
+
+6. Lots of useful information will be printed, like the commands used to generate chain specifications and the commands used to start the network (useful for extracting RPC ports). At the end, a table with direct links to each node's `polkadot-js` page will be created. Use this to interact with the chain from your browser.
+
+7. After a minute or two, block production on both `Shibuya` and `Shiden` should start. This will usually happen after the **Relay Chain** reaches block 11, which triggers a new session. HRMP channels will automatically be configured between the parachains (check the configuration file).
+
+8. The test network is running and users can interact with the Relay Chain and both parachains. It is now possible to deploy EVM and Wasm smart contracts, send XCM instructions, and do everything else that is possible on live chains. In addition, users have direct access to the `Alice` account, which has `sudo` privileges.
+
+> We will provide an automated way of performing these setup actions in the future.
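+
+For orientation, a `zombienet` network definition follows the general shape sketched below. This is an illustrative fragment only (binary paths, chain names, and parachain IDs are placeholders); use the `multi_parachains.toml` file linked above for actual testing:
+
+```toml
+[relaychain]
+default_command = "./polkadot"
+chain = "rococo-local"
+
+  [[relaychain.nodes]]
+  name = "alice"
+
+  [[relaychain.nodes]]
+  name = "bob"
+
+[[parachains]]
+id = 2000
+chain = "shibuya-dev"
+
+  [parachains.collator]
+  name = "shibuya-collator"
+  command = "./astar-collator"
+
+[[parachains]]
+id = 2007
+chain = "shiden-dev"
+
+  [parachains.collator]
+  name = "shiden-collator"
+  command = "./astar-collator"
+
+[[hrmp_channels]]
+sender = 2000
+recipient = 2007
+max_capacity = 8
+max_message_size = 512
+
+[[hrmp_channels]]
+sender = 2007
+recipient = 2000
+max_capacity = 8
+max_message_size = 512
+```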
+
+### Basic Cross-Chain Assets Setup Instructions
+
+After completing the previous steps, a local test network with two parachains is running.
+The following steps will explain how to set up basic cross-chain payable assets for both parachains.
+
+The aim is to configure a cross-chain **SDN** asset on `Shibuya` and cross-chain **SBY** asset on `Shiden`. Both assets will be configured as _payable_, meaning they can be used to pay for XCM execution on the remote chain.
+
+For all steps, encoded call data will be provided to simplify the process for the user. Call data can be copy/pasted into the field under `Developer -> Extrinsic -> Decode`. **ALL** calls should be executed as `Alice`.
+
+1. In the local `Shibuya polkadot-js` explorer, create a new asset for the cross-chain SDN wrapper. Configure it to be sufficient and payable.
+```
+0x0b000c630024015000d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d010463003600010101005d1f5063003601010101005d1f070010a5d4e8
+```
+
+2. In the local `Shiden polkadot-js` explorer, create a new asset for the cross-chain SBY wrapper. Same concept as in the previous step.
+```
+0x0b000c630024015000d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d01046300360001010100411f506300360101010100411f070010a5d4e8
+```
+
+3. In the local `Shibuya polkadot-js` explorer, send **1000 SBY** to `Alice` on Shiden.
+```
+0x3302010101005d1f0100010100d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d010400000000170000a0dec5adc9353600000000
+```
+
+4. In the local `Shiden polkadot-js` explorer, send **1000 SDN** to `Alice` on Shibuya.
+```
+0x330201010100411f0100010100d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d010400000000170000a0dec5adc9353600000000
+```
+
+5. Check that `Alice` on both chains received the assets (minus the execution fee).
+
+### Basic Remote Execution Instructions
+
+After completing the previous steps, cross-chain SDN and SBY wrappers are configured as payable and sufficient assets.
+The following steps will explain how to execute a cross-chain remote call. `Alice` will send an instruction from `Shiden` to execute `System::remark_with_event` on `Shibuya`.
+
+`Alice` isn't able to directly control `Alice` on the destination chain; instead, a new account will be derived. More information can be found [in this guide](/docs/learn/interoperability/xcm/building-with-xcm/xc-remote-transact.md#derived-remote-accounts).
+
+1. Calculate `Alice's` derived account on `Shibuya` when sending instructions from `Shiden`.
+```
+> ./xcm-tools remote-account -p 2007 -a 0xd43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d
+5Cvcv8RvSsp6go2pQ8FRXcGLAzNp5eyC8Je7KLHz5zFwuUyT
+```
+
+2. Fund the `5Cvcv8RvSsp6go2pQ8FRXcGLAzNp5eyC8Je7KLHz5zFwuUyT` account on `Shibuya` (send it some SBY tokens).
+
+3. On `Shiden`, as `Alice`, send an XCM sequence instructing `Shibuya` to execute `System::remark_with_event`.
+```
+0x330001010100411f021400040000000013000064a7b3b6e00d130000000013000064a7b3b6e00d00060102286bee200a07144173746172140d010004000101002611a3b92e2351f8b6c98b7b0654dc1daab45b2619ea357a848d4fe2b5ae1863
+```
+4. Observe successful XCM execution on `Shibuya` and inspect the block to observe the `remark` event.
diff --git a/docs/build/build-on-layer-1/index.md b/docs/build/build-on-layer-1/index.md
index 2b8ae52..714f4da 100644
--- a/docs/build/build-on-layer-1/index.md
+++ b/docs/build/build-on-layer-1/index.md
@@ -4,4 +4,16 @@ title: Build on Astar Substrate
import Figure from '/src/components/figure'
-# Why build on Astar Substrate? (Layer 1)
\ No newline at end of file
+# Overview of Astar Substrate - A Layer 1 network secured by Polkadot
+
+## Network Features
+
+## Get Started
+
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/integrations/_category_.json b/docs/build/build-on-layer-1/integrations/_category_.json
new file mode 100644
index 0000000..8900a04
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Integrate Toolings",
+ "position": 9
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/integrations/api/_category_.json b/docs/build/build-on-layer-1/integrations/api/_category_.json
new file mode 100644
index 0000000..9127a2c
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/api/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "API",
+ "position": 1
+}
+
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/integrations/api/astar.js.md b/docs/build/build-on-layer-1/integrations/api/astar.js.md
new file mode 100644
index 0000000..b6da65c
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/api/astar.js.md
@@ -0,0 +1,53 @@
+---
+sidebar_position: 1
+---
+
+# Astar.js
+
+## Overview
+
+The astar.js library provides application developers with the ability to query nodes and interact with the Astar/Shiden/Shibuya chains using JavaScript/TypeScript.
+
+## Getting Started
+
+- Install dependencies
+
+```bash
+yarn add @polkadot/api @astar-network/astar-api@beta
+```
+
+- Create API instance
+
+```ts
+import { ApiPromise } from '@polkadot/api';
+import { WsProvider } from '@polkadot/rpc-provider';
+import { options } from '@astar-network/astar-api';
+
+async function main() {
+ const provider = new WsProvider('ws://localhost:9944');
+ // OR
+ // const provider = new WsProvider('wss://shiden.api.onfinality.io/public-ws');
+ const api = new ApiPromise(options({ provider }));
+ await api.isReady;
+
+ // Use the api
+ // For example:
+ console.log((await api.rpc.system.properties()).toHuman());
+
+ process.exit(0);
+}
+
+main().catch(console.error);
+```
+
+- Use api to interact with node
+
+```ts
+// query and display account data
+const data = await api.query.system.account('5F98oWfz2r5rcRVnP9VCndg33DAAsky3iuoBSpaPUbgN9AJn');
+console.log(data.toHuman())
+```
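+
+Balances returned by `api.query.system.account` are denominated in the chain's smallest unit (ASTR and SDN use 18 decimals), so you will usually want to format them for display. A minimal sketch of that conversion (the raw value below is a made-up example):
+
+```javascript
+// Sketch: format a raw on-chain balance into a human-readable amount.
+// ASTR uses 18 decimals; the raw value below is a made-up example.
+const DECIMALS = 18n;
+
+function formatBalance(raw, decimals = DECIMALS) {
+  const base = 10n ** decimals;
+  const whole = raw / base;
+  // keep 4 fractional digits for display
+  const frac4 = ((raw % base) * 10000n) / base;
+  return `${whole}.${frac4.toString().padStart(4, "0")}`;
+}
+
+console.log(formatBalance(1234500000000000000000n)); // 1234.5000
+```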
+
+## Cookbook
+
+More documentation and examples can be found in the astar.js [wiki](https://github.com/astarNetwork/astar.js/wiki).
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/integrations/api/gas_api.md b/docs/build/build-on-layer-1/integrations/api/gas_api.md
new file mode 100644
index 0000000..92d4713
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/api/gas_api.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 2
+---
+
+# Gas/Tip API
+
+## Overview
+
+Gas is the unit of measure for the amount of computational resources required to process transactions and smart contracts. Essentially a transaction fee, the term originated from Ethereum, where it refers to computation undertaken on the Ethereum Virtual Machine (EVM). Since Ethereum was founded, numerous EVM-compatible networks have emerged and adopted similar models.
+
+The term can be considered analogous to the gas that powers a car engine: it's the fluctuating, occasionally expensive cost of operation. More complex smart contracts require more gas to power their computation, just as a bigger, more powerful car takes more gas to run.
+
+The gas price API is a service that allows you to obtain the current gas prices of the Astar networks for various transaction times. Gas fees are provided in wei.
+
+Tips are used for native transactions. A tip is an optional transaction fee that users can add. Tips are not part of the inclusion fee; they are an incentive for block authors to prioritize a transaction, and the entire tip goes directly to the block author.
+
+
+## Gas API
+
+- Shibuya:
+- Shiden:
+- Astar:
+
+## Response
+
+```json
+{
+ "code": 200,
+ "data": {
+ "slow": 1265049135,
+ "average": 2233842329,
+ "fast": 10261948525,
+ "timestamp": 1651782278481,
+ "eip1559": {
+ "priorityFeePerGas": {
+ "slow": 265049135,
+ "average": 1233842329,
+ "fast": 9261948525
+ },
+ "baseFeePerGas": 1000000000
+ }
+ }
+}
+```
+
+## Response parameters
+
+- slow: This is the price of gas for a transaction that will take a long time to execute.
+- average: This is the price of gas for a transaction that will take a medium amount of time to execute.
+- fast: This is the price of gas for a transaction that will take a short amount of time to execute.
+
+## EIP-1559
+
+With EIP-1559 transactions, gas fees are divided into two parts: the base fee and the priority fee.
+
+The base fee is determined by the network itself and is the same for every transaction in a given block.
+The priority fee is optional and determined by the user; it is a tip to validators that incentivizes them to prioritize the transaction.
+
+The purpose of EIP-1559 is essentially to make gas fees more transparent and predictable for users. Previously, users would 'bid' a high enough total fee to ensure miners were incentivized to pick up their transaction in a reasonable amount of time, which meant the market price was highly volatile in correspondence with demand.
+
+- priorityFeePerGas: The variable part of the gas fee. Determined by the user.
+- baseFeePerGas: The fixed part of the gas fee. Determined by the network.
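+
+Putting the two parts together, the per-gas price a transaction pays under EIP-1559 is the base fee plus the chosen priority fee. A minimal sketch using the sample response above (all values in wei; field names are taken from that response):
+
+```javascript
+// Sketch: derive the effective per-gas price from the Gas API response above.
+// All values are in wei; the structure mirrors the sample response.
+const gasData = {
+  slow: 1265049135,
+  average: 2233842329,
+  fast: 10261948525,
+  eip1559: {
+    priorityFeePerGas: { slow: 265049135, average: 1233842329, fast: 9261948525 },
+    baseFeePerGas: 1000000000,
+  },
+};
+
+// base fee (network-determined) + priority fee (user-chosen tip)
+function effectiveGasPrice(data, tier) {
+  return data.eip1559.baseFeePerGas + data.eip1559.priorityFeePerGas[tier];
+}
+
+console.log(effectiveGasPrice(gasData, "average")); // 2233842329
+```
+
+Note that in the sample response each legacy `slow`/`average`/`fast` price is exactly `baseFeePerGas` plus the corresponding `priorityFeePerGas`.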
+
+## Tip API
+
+- Shibuya:
+- Shiden:
+- Astar:
+
+Response:
+```json
+{
+ "code":200,
+ "data":{
+ "tip":{
+ "slow":"746510000000",
+ "average":"4119200000000",
+ "fast":"8501250000000"
+ }
+ }
+}
+```
+
+- **slow**: The tip for a transaction that takes a long time to execute.
+- **average**: The tip for a transaction that takes a medium time to execute.
+- **fast**: The tip for a transaction that takes a short time to execute.
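+
+Tip values are returned as strings in wei, and can exceed JavaScript's safe integer range, so it is safest to parse them as `BigInt`. A small sketch based on the sample response above:
+
+```javascript
+// Sketch: pick a tip tier from the Tip API response shown above.
+// Tip values arrive as strings in wei, so parse them as BigInt.
+const tipResponse = {
+  code: 200,
+  data: {
+    tip: { slow: "746510000000", average: "4119200000000", fast: "8501250000000" },
+  },
+};
+
+function tipInWei(response, tier) {
+  return BigInt(response.data.tip[tier]);
+}
+
+console.log(tipInWei(tipResponse, "fast").toString()); // 8501250000000
+```
+
+The resulting value can then be passed as the `tip` option when signing and sending an extrinsic with the Polkadot.js API.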
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/integrations/bridges/_category_.json b/docs/build/build-on-layer-1/integrations/bridges/_category_.json
new file mode 100644
index 0000000..e1f736d
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/bridges/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Bridges",
+ "position": 3
+}
diff --git a/docs/build/build-on-layer-1/integrations/bridges/cbridge.md b/docs/build/build-on-layer-1/integrations/bridges/cbridge.md
new file mode 100644
index 0000000..bfacbe4
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/bridges/cbridge.md
@@ -0,0 +1,71 @@
+---
+sidebar_position: 1
+---
+
+# Celer cBridge
+
+## Overview
+
+A guide on how to transfer assets from Ethereum & Binance Smart Chain to the Astar ecosystem. Make sure you have some ASTR to pay gas fees before bridging. You can buy the ASTR token on exchanges.
+
+
+
+## Product Page
+
+
+
+
+## Contracts
+
+Token contract addresses on Astar:
+
+```json
+USDT: 0x3795C36e7D12A8c252A20C5a7B455f7c57b60283
+USDC: 0x6a2d262D56735DbA19Dd70682B39F6bE9a931D98
+DAI: 0x6De33698e9e9b787e09d3Bd7771ef63557E148bb
+WETH: 0x81ECac0D6Be0550A00FF064a4f9dd2400585FE9c
+BNB: 0x7f27352D5F83Db87a5A3E00f4B07Cc2138D8ee52
+BUSD: 0x4Bf769b05E832FCdc9053fFFBC78Ca889aCb5E1E
+WSDN: 0x75364D4F779d0Bd0facD9a218c67f87dD9Aff3b4
+MATIC: 0xdd90E5E87A2081Dcf0391920868eBc2FFB81a1aF
+AAVE: 0xfcDe4A87b8b6FA58326BB462882f1778158B02F1
+CRV: 0x7756a83563f0f56937A6FdF668E7D9F387c0D199
+```
+
+## How to withdraw ASTR from Exchanges
+
+First, visit the [Astar Portal](https://portal.astar.network/balance/wallet) with Polkadot.js. If you don't have the Polkadot.js extension, you can get it [here](https://polkadot.js.org/extension/).
+
+![1](img/1.png)
+
+Click your **Native address**. Once you click the **?** icon, you will see the message below.
+
+![2](img/2.png)
+
+This is the address that you should use when you withdraw ASTR tokens from exchanges.
+
+## How to bridge assets from Ethereum to Astar EVM
+
+In this tutorial we will demonstrate how to bridge USDC from Ethereum to Astar. By doing so, you will be adding liquidity to our network, which benefits the ecosystem overall, and for which we convey our deepest gratitude.
+
+Visit cBridge and input the currency you would like to transfer.
+
+![3](img/3.png)
+
+![4](img/4.png)
+
+After the transaction, you will see:
+
+![5](img/5.png)
+
+and you will receive some tokens in your MetaMask, on the destination network.
+
+## The difference between native USDT and bridged USDT
+
+Tether USD (USDT) that comes to Astar through Statemint is the native USDT token in the Astar ecosystem.
+
+ceUSDT on Astar is a wrapped version of Ethereum USDT, supported by the Celer cBridge and liquidity network. Due to this, ceUSDT is not as versatile as native USDT. For example, native USDT registered as an XC20 can be used for both Wasm and EVM projects in the Astar ecosystem, but bridged (ce)USDT cannot be used for Wasm projects.
+
+## Support
+
+If you have any questions, feel free to join any of our communities, where our Ambassadors will assist you. And remember: the Astar team and Ambassadors will never message or DM you first!
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/1.png b/docs/build/build-on-layer-1/integrations/bridges/img/1.png
new file mode 100644
index 0000000..abede6c
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/1.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/10.png b/docs/build/build-on-layer-1/integrations/bridges/img/10.png
new file mode 100644
index 0000000..6d4cd92
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/10.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/2.png b/docs/build/build-on-layer-1/integrations/bridges/img/2.png
new file mode 100644
index 0000000..1fcc579
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/2.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/3.png b/docs/build/build-on-layer-1/integrations/bridges/img/3.png
new file mode 100644
index 0000000..998aa29
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/3.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/4.png b/docs/build/build-on-layer-1/integrations/bridges/img/4.png
new file mode 100644
index 0000000..ac889dc
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/4.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/5.png b/docs/build/build-on-layer-1/integrations/bridges/img/5.png
new file mode 100644
index 0000000..ce25ca9
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/5.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/6.png b/docs/build/build-on-layer-1/integrations/bridges/img/6.png
new file mode 100644
index 0000000..543aca5
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/6.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/7.png b/docs/build/build-on-layer-1/integrations/bridges/img/7.png
new file mode 100644
index 0000000..c7eff1a
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/7.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/8.png b/docs/build/build-on-layer-1/integrations/bridges/img/8.png
new file mode 100644
index 0000000..57431c0
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/8.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/9.png b/docs/build/build-on-layer-1/integrations/bridges/img/9.png
new file mode 100644
index 0000000..cc8b5d0
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/9.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/wanchain1.png b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain1.png
new file mode 100644
index 0000000..1c80ffd
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain1.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/wanchain2.png b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain2.png
new file mode 100644
index 0000000..a9cc9c6
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain2.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/wanchain3.png b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain3.png
new file mode 100644
index 0000000..feb0f52
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain3.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/wanchain4.png b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain4.png
new file mode 100644
index 0000000..84da339
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain4.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/img/wanchain5.png b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain5.png
new file mode 100644
index 0000000..93ef706
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/bridges/img/wanchain5.png differ
diff --git a/docs/build/build-on-layer-1/integrations/bridges/wanchain.md b/docs/build/build-on-layer-1/integrations/bridges/wanchain.md
new file mode 100644
index 0000000..b3f5505
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/bridges/wanchain.md
@@ -0,0 +1,108 @@
+---
+sidebar_position: 2
+---
+import wanchain1 from "./img/wanchain1.png"
+import wanchain2 from "./img/wanchain2.png"
+import wanchain3 from "./img/wanchain3.png"
+import wanchain4 from "./img/wanchain4.png"
+import wanchain5 from "./img/wanchain5.png"
+
+
+# Wanchain Bridge
+
+## Overview
+
+This guide explains how to transfer native Tether USDT between Astar, Arbitrum, Avalanche C-Chain, BNB Chain, Ethereum, OKC, Polygon, Wanchain and Tron using USDT XFlows.
+
+## About USDT XFlows
+
+USDT XFlows is a decentralized cross-chain solution that enables native-to-native cross-chain transfers between blockchains where USDT is natively minted by Tether. XFlows leverages the power of Wanchain’s cross-chain bridges to provide easy, non-custodial transfers between chains without the need for centralized exchanges or wrapped assets. Find more information on the Wanchain [product page](https://bridge.wanchain.org/).
+
+## Contracts
+
+Native USDT contract addresses:
+
+```
+USDT @ Arbitrum:
+
+0xFd086bC7CD5C481DCC9C85ebE478A1C0b69FCbb9
+
+xcUSDT @ Astar:
+
+0xfFFfffFF000000000000000000000001000007C0
+
+USDT @ Avalanche C-Chain:
+
+0x9702230A8Ea53601f5cD2dc00fDBc13d4dF4A8c7
+
+USDT @ BNB Chain:
+
+0x55d398326f99059fF775485246999027B3197955
+
+USDT @ Ethereum:
+
+0xdAC17F958D2ee523a2206206994597C13D831ec7
+
+USDT @ OKX Chain:
+
+0x382bb369d343125bfb2117af9c149795c6c65c50
+
+USDT @ Polygon:
+
+0xc2132D05D31c914a87C6611C10748AEb04B58e8F
+
+USDT @ Tron:
+
+TR7NHqjeKQxGTCi8q8ZY4pL8otSzgjLj6t
+
+USDT @ Wanchain:
+
+0x11e77E27Af5539872efEd10abaA0b408cfd9fBBD
+```
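Since the xcUSDT entry above is an XC20, it behaves like a standard ERC-20 on Astar EVM and can be read with ordinary Ethereum tooling. The sketch below is a minimal, dependency-free example using raw JSON-RPC; the public RPC URL is an assumption, and the 6-decimal convention should be verified on-chain before production use.

```js
// Minimal sketch: read an xcUSDT (XC20) balance on Astar EVM via raw JSON-RPC.
// Assumptions: the public RPC endpoint below, and the usual 6 decimals for USDT.
const XCUSDT = "0xfFFfffFF000000000000000000000001000007C0";
const RPC_URL = "https://evm.astar.network"; // assumed public endpoint

// ABI-encode a balanceOf(address) call: 4-byte selector + 32-byte padded address.
function balanceOfCalldata(holder) {
  return "0x70a08231" + holder.toLowerCase().replace(/^0x/, "").padStart(64, "0");
}

// Convert the raw hex balance returned by eth_call into a human-readable amount.
function toRealAmount(rawHex, decimals = 6) {
  return Number(BigInt(rawHex)) / 10 ** decimals;
}

async function xcUsdtBalance(holder) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_call",
      params: [{ to: XCUSDT, data: balanceOfCalldata(holder) }, "latest"],
    }),
  });
  const { result } = await res.json();
  return toRealAmount(result);
}
```

The same pattern works for any of the EVM addresses listed above; only Tron's base58 address requires different tooling.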
+
+
+## Native Tether USDT vs. xcUSDT on Astar
+
+Tether issues USDT, the blockchain industry’s biggest stablecoin by total market capitalization, on Polkadot’s “common good” generic asset parachain, Statemint. By leveraging XCM, Polkadot’s cross-consensus communication protocol, native Tether USDT can be transferred to parachains like Astar as “xcUSDT”. xcUSDT is more versatile than wrapped USDT and can be used for both Wasm and EVM projects in the Astar ecosystem.
+
+## How to bridge native Tether USDT from Ethereum to Astar EVM
+
+This section shows how to bridge native Tether USDT from Ethereum to Astar.
+
+> **Note**: Cross-chain transactions from other EVM networks such as Arbitrum, Avalanche C-Chain, BNB Chain, Ethereum, OKC, Polygon and others follow the same process.
+
+### Step 1
+Visit the [Wanchain Bridge web portal](https://bridge.wanchain.org/) and initiate a cross-chain transaction to move your $USDT from Ethereum to Astar.
+
+
+Select “USDT” from the drop-down menu. Choose “Ethereum” and “Astar” as your `From` and `To` networks respectively. Finally, input your destination address in the `Recipient` field as well as the amount of $USDT you want to send. Click `Next`.
+
+
+
+Confirm that the “Recipient” address does not belong to a centralised exchange then click “Confirm”.
+
+
+
+Confirm that all the details are correct then click “Confirm”.
+
+
+
+### Step 2
+Wait for your cross-chain transaction to complete.
+
+While your cross-chain transaction is processing, the status will change three times:
+
+1. Processing (1/2)
+2. Processing (2/2)
+3. Success
+
+### Step 3
+Confirm the receipt of your funds.
+
+Once your cross-chain transaction is complete, you’ll see your $xcUSDT balance on Astar. The cross-chain transaction status will display as `Success`.
+
+
+
+## Support
+
+If you have any questions, feel free to join any of our communities and our Ambassadors will assist you. And remember that the Astar team or Ambassadors will never message or DM you first!
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/integrations/dapp-listing/_category_.json b/docs/build/build-on-layer-1/integrations/dapp-listing/_category_.json
new file mode 100644
index 0000000..d69c8e2
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/dapp-listing/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "dApp Listings",
+ "position": 2
+}
diff --git a/docs/build/build-on-layer-1/integrations/dapp-listing/dappradar.md b/docs/build/build-on-layer-1/integrations/dapp-listing/dappradar.md
new file mode 100644
index 0000000..7aa10df
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/dapp-listing/dappradar.md
@@ -0,0 +1,59 @@
+---
+sidebar_position: 2
+---
+
+# DappRadar {#dappradar-en-page-id}
+
+## Introduction to DappRadar
+
+[DappRadar] was started in 2018 with the goal of delivering high-quality, accurate insights about decentralized applications to a global audience, and rapidly became the go-to, trusted industry source. Since then, DappRadar has become a de facto standard for dApp discovery, with more than ten thousand applications listed across more than twenty protocols. In their own words, *“Across the globe consumers are discovering dapps and managing their NFT/DeFi portfolios with DappRadar. We’re visited by over 500,000 users every month, our data powers leading industry partners and our quarterly reports are the trusted authority on multichain dapp market insight.”*
+
+Astar and Shiden are live on DappRadar, where you will find them under [Astar dApps](https://dappradar.com/rankings/protocol/astar) and [Shiden dApps](https://dappradar.com/rankings/protocol/shiden).
+
+![1](img/1.png)
+
+You can [submit your project](https://dappradar.com/dashboard/submit-dapp) to DappRadar by providing background on your project including a short and a full description, website URL, and logo. Only a subset of the fields are required, but you are encouraged to complete as many as possible.
+
+:::caution
+DappRadar contains user-generated content. You should verify any information with your own research. Astar/Shiden is a permissionless network. Any project can deploy its contracts to Astar/Shiden.
+:::
+
+## Required Content
+
+At a minimum, you must include the following information to submit your project/dApp to DappRadar:
+
+- dApp Name
+- Logo (250 x 250 pixel png or jpg)
+- Category
+- Website URL
+- Short Description (160 characters or less)
+- Full Description
+
+## How to Submit your dApp
+
+First, you'll need to [create a DappRadar account](https://auth.dappradar.com/email-register) and verify your email. Once ready with the required content, head to the [dApp submission page](https://dappradar.com/dashboard/submit-dapp), where you can take the following steps:
+
+![2](img/2.png)
+![3](img/3.png)
+
+- Enter your project’s name
+- Upload your dApp's logo (250 by 250 pixel PNG or JPG, 150KB max)
+- Select the relevant category for your dApp
+- Include the URL for your dApp
+
+![4](img/4.png)
+
+- Write a short description (160 characters or less)
+- Specify the protocols your dApp is deployed on. You can select multiple protocols here, such as Astar and Shiden
+- After selecting at least one protocol, you'll be prompted to enter your dApp's contract address(es) for each protocol. Please include all of your dApp's contract addresses for better data accuracy.
+- Write a full description for your dApp
+
+![5](img/5.png)
+![6](img/6.png)
+
+- Optionally, provide social / media links. (These will be shown on the dApp page, so we highly recommend you do so.)
+- It is also optional but recommended that you provide screenshots of your dApp. You also have the option of providing a YouTube link to a demo of your dApp.
+- Review the terms and conditions, and press **Submit a dApp**
+
+Submissions are reviewed by the DappRadar team and will be published if the dApp is deemed suitable for listing. For any Astar/Shiden-related questions you can reach out to us in [Discord](https://discord.gg/astarnetwork). For DappRadar questions, support is available in the [DappRadar Discord](https://discord.com/invite/4ybbssrHkm) or you can contact [developers@dappradar.com](mailto:developers@dappradar.com).
+
+[DappRadar]: https://dappradar.com/
diff --git a/docs/build/build-on-layer-1/integrations/dapp-listing/defillama.md b/docs/build/build-on-layer-1/integrations/dapp-listing/defillama.md
new file mode 100644
index 0000000..f8aa17a
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/dapp-listing/defillama.md
@@ -0,0 +1,236 @@
+---
+sidebar_position: 1
+---
+
+# Defi Llama {#defillama-en-page-id}
+
+Defi Llama provides inclusive, unbiased, and community-driven statistics for the decentralized finance industry.
+
+Astar and Shiden are live on Defi Llama, and you can find homepages for top DeFi apps in the Astar ecosystem under [Astar Defi](https://defillama.com/chain/Astar) and [Shiden Defi](https://defillama.com/chain/Shiden).
+
+## How to list an Astar/Shiden DeFi project on Defi Llama
+
+To list on Defi Llama:
+
+1. Fork the [Adapters repo](https://github.com/DefiLlama/DefiLlama-Adapters) ("Fork" button towards the top right of the repo page).
+2. Add a new folder with the same name as the project to `projects/`.
+3. Write an [SDK adapter](https://app.gitbook.com/o/-LgGrgOEDyFYjYWIb1DT/s/-M8GVK5H7hOsGnYqg-7q-872737601/integration/dapp-listing/defillama#how-to-write-an-sdk-adapter) (or a [fetch adapter](https://app.gitbook.com/o/-LgGrgOEDyFYjYWIb1DT/s/-M8GVK5H7hOsGnYqg-7q-872737601/integration/dapp-listing/defillama#how-to-write-a-fetch-adapter) if you can't use the SDK for this project) in the new folder.
+4. Make a Pull Request with the changes on your fork, to the main Defi Llama Adapters repo, with a brief explanation of what you changed.
+5. Wait for someone to either comment on or merge your Pull Request. There is no need to ask for someone to check your PR as they are monitored actively.
+6. Once your PR has been merged, please allow 24 hours for the front-end team to load your listing onto the UI.
+
+## How to Build an Adapter
+
+An adapter is just some code that:
+
+1. Collects data on a protocol by calling some endpoints or making some blockchain calls.
+2. Computes the TVL of a protocol and returns it.
+
+### Types of Adapters
+
+Right now there are two types of adapters co-existing within the repository:
+
+- [Fetch adapters](https://app.gitbook.com/o/-LgGrgOEDyFYjYWIb1DT/s/-M8GVK5H7hOsGnYqg-7q-872737601/integration/dapp-listing/defillama#how-to-write-a-fetch-adapter): These calculate the TVL directly and export a fetch method.
+- [SDK adapters](https://app.gitbook.com/o/-LgGrgOEDyFYjYWIb1DT/s/-M8GVK5H7hOsGnYqg-7q-872737601/integration/dapp-listing/defillama#how-to-write-an-sdk-adapter): These use the SDK and return all the assets locked along with their balances.
+
+### Which Adapter Type should I Develop?
+
+Right now the Defi Llama SDK only supports EVM chains, so if your project is on one of them you should develop an SDK-based adapter; if your project is on another chain, a fetch adapter is likely the way to go. If your project is not on an EVM chain but you are able to give us historical data, we can help support this if you message us in Discord.
+
+## How to Write an SDK Adapter
+
+### Basic Adapter
+
+Below, you can see an example of the adapter AstridDAO used on Astar Network (ASTR). Let's walk through it to get a better understanding of how it works.
+
+```js
+const { sumTokens } = require('../helper/unwrapLPs')
+const { getFixBalances } = require('../helper/portedTokens')
+
+const WASTAR = "0x19574c3c8fafc875051b665ec131b7e60773d2c9"
+const chain = 'astar'
+
+const CONTRACT_ADDRESSES = {
+ // Pools holding ASTR.
+ ACTIVE_POOL: "0x70724b57618548eE97623146F76206033E67086e",
+ DEFAULT_POOL: "0x2fE3FDf91786f75C92e8AB3B861588D3D051D83F",
+};
+
+async function tvl(ts, _block, chainBlocks) {
+ const block = chainBlocks[chain]
+ const balances = {}
+ const tokensAndOwners = Object.values(CONTRACT_ADDRESSES).map(owner => [WASTAR, owner])
+ await sumTokens(balances, tokensAndOwners, block, chain);
+ (await getFixBalances(chain))(balances)
+ return balances
+}
+
+module.exports = {
+ timetravel: true,
+ start: 915830,
+ methodology: "Total locked ASTR (in wrapped ERC-20 form) in ActivePool and DefaultPool",
+ astar: {
+ tvl,
+ },
+};
+```
+
+This adapter consists of three main sections: the dependency imports, an async function containing the code for calculating TVL (where the bulk of the code usually is), and the module exports.
+
+**Line 13 - Input Parameters**
+
+1. The first param taken by the function (line 13) will be a timestamp. In your testing, this will be the current timestamp, but when we backfill chart data for your protocol, past timestamps will also be input.
+2. Next is the mainnet block height corresponding to the timestamp in the first param.
+
+**Line 15 - Initializing The Balances Object**
+
+SDK adapters always export balance objects, which is a dictionary where all the keys are either token addresses or Coingecko token IDs. On this line, we initialize the balance object to be empty.
+
+If a token balance has an address key, the Defi Llama SDK will manage any raw to real amount conversion for you (so you don't need to worry about ERC20 decimals). If a token balance has a Coingecko ID key, you will need to process the decimals and use a real token amount in the balances object.
+
+:::caution
+If you export token addresses in your balances object that aren't on CoinGecko, Defi Llama won't be able to fetch prices for the tokens. You can check which addresses are supported by visiting the token page on CoinGecko and checking the 'Contract' field on the right.
+:::
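To make the two key styles concrete, here is a small illustrative sketch. The amounts are made up, and the exact CoinGecko key format varies between helpers, so treat this as a shape reference rather than the canonical API:

```js
// Illustrative sketch of a balances object using both key styles.
function buildBalances(rawWastrWei, rawUsdtUnits) {
  return {
    // Address key: store the raw on-chain integer; the SDK resolves decimals.
    "0x19574c3c8fafc875051b665ec131b7e60773d2c9": rawWastrWei.toString(),
    // CoinGecko ID key: we convert to a real amount ourselves (USDT has 6 decimals).
    tether: Number(rawUsdtUnits) / 10 ** 6,
  };
}
```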
+
+**Line 17 - Adding Data To The Balances Object**
+
+The helper libraries provide utilities for adding data to the balances dictionary. The `sumTokens()` call on this line takes 4 parameters:
+
+1. The balances object you want to add token balances to.
+2. An array of `[token, owner]` pairs: which token to read, and which contract holds it.
+3. The block height to query.
+4. The chain to query (here, `astar`).
+
+**Line 22 - Module Exports**
+
+The module exports must be constructed correctly, and use the correct keys, so that the Defi Llama UI can show your data. Nest chain TVL (and separate types of TVL such as staking, pool2, etc.) inside the chain key (e.g. `astar`, `shiden`).
+
+Please also let us know:
+
+- timetravel (bool) - if we can backfill data with your adapter. Most SDK adapters will allow this, but not all. For example, if you fetch a list of live contracts from an API before querying data on-chain, timetravel should be 'false'.
+- misrepresentedTokens (bool) - if you have used token substitutions at any point in the adapter this should be 'true'.
+- methodology (string) - this is a small description that will explain to Defi Llama users how the adapter works out your protocol's TVL.
+- start (number) - the earliest block height the adapter will work at.
+
+### Testing
+
+Once you are done writing it you can verify that it returns the correct value by running the following code:
+
+```bash
+npm install
+# Replace with your adapter's name
+node test.js projects/astriddao/index.js
+```
+
+If the adapter runs successfully, the console will show you a breakdown of your project's TVL in USD. If it all looks accurate, you're ready to submit.
+
+## Submit
+
+Submit a PR to the [adapter repository on Github](https://github.com/DefiLlama/DefiLlama-Adapters)!
+
+---
+
+## How to Write a Fetch Adapter
+
+Fetch adapters export a function, called `fetch`, which returns a project's total TVL (in USD) as a number. The following basic adapter would always return a TVL of $100:
+
+```js
+async function fetch() {
+ return 100;
+}
+
+module.exports = {
+ fetch
+}
+```
+
+Fetch adapters only allow us to get the TVL at the current time, so it's impossible to fill old values on a protocol's TVL chart or recompute them, thus leading to charts that look jumpy. To solve this we introduced SDK adapters, which allow us to retrieve a protocol's TVL at any point in time.
+
+:::caution
+Fetch adapters can only be used for projects on non-EVM chains. Where possible, [SDK adapters](https://docs.llama.fi/list-your-project/how-to-write-a-fetch-adapter) are preferred to fetch adapters because on-chain calls are more transparent.
+:::
+
+Third-party APIs should be used where possible to reduce bias. If third-party APIs are not available for the data you need, proprietary APIs can be used if they're open source.
+
+---
+
+### Examples
+
+```js
+const retry = require('async-retry')
+const { GraphQLClient, gql } = require('graphql-request')
+
+async function fetch() {
+ var endpoint ='https://api.thegraph.com/subgraphs/name/balancer-labs/balancer';
+ var graphQLClient = new GraphQLClient(endpoint)
+
+ var query = gql`
+ {
+ balancers(first: 5) {
+ totalLiquidity,
+ totalSwapVolume
+ }
+ }
+ `;
+ const results = await retry(async bail => await graphQLClient.request(query))
+ return parseFloat(results.balancers[0].totalLiquidity);
+}
+
+module.exports = {
+ fetch
+}
+```
+
+```js
+const retry = require('async-retry')
+const axios = require("axios");
+const BigNumber = require("bignumber.js");
+
+async function fetch() {
+ let price_feed = await retry(async bail => await axios.get('https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=usd&include_market_cap=true&include_24hr_vol=true&include_24hr_change=true'))
+ let response = await retry(async bail => await axios.get('https://api.etherscan.io/api?module=stats&action=tokensupply&contractaddress=0xeb4c2781e4eba804ce9a9803c67d0893436bb27d&apikey=H6NGIGG7N74TUH8K2X31J1KB65HFBH2E82'))
+ let tvl = new BigNumber(response.data.result).div(10 ** 8).toFixed(2);
+ return (tvl * price_feed.data.bitcoin.usd);
+}
+
+module.exports = {
+ fetch
+}
+```
+
+```js
+const retry = require('async-retry')
+const axios = require("axios");
+const BigNumber = require("bignumber.js");
+
+async function fetch() {
+ var price_feed = await retry(async bail => await axios.get('https://api.coingecko.com/api/v3/simple/price?ids=thorchain&vs_currencies=usd&include_market_cap=true&include_24hr_vol=true&include_24hr_change=true'))
+
+ var res = await retry(async bail => await axios.get('https://chaosnet-midgard.bepswap.com/v1/network'))
+ var tvl = await new BigNumber((parseFloat(res.data.totalStaked) * 2) + parseFloat(res.data.bondMetrics.totalActiveBond) + parseFloat(res.data.bondMetrics.totalStandbyBond)).div(10 ** 8).toFixed(2);
+ tvl = tvl * price_feed.data.thorchain.usd;
+ return tvl;
+}
+
+module.exports = {
+ fetch
+}
+```
+
+## How to update a project
+
+If you'd like to update the code used to calculate the TVL of a DeFi project already listed on DefiLlama:
+
+1. Fork the [Adapters repo](https://github.com/DefiLlama/DefiLlama-Adapters) (button towards the top right of the repo page).
+2. Make your changes to the fork (generally easiest by cloning your new fork into a desktop IDE).
+3. Make a Pull Request from your fork, to the main Defi Llama Adapters repo, with a brief explanation of what you changed.
+4. Wait for someone to either comment on or merge your Pull Request. There is no need to ask for someone to check your PR as they are monitored regularly.
+
+If you'd like to update the metadata (name, logo, description etc) of a project already listed on DefiLlama:
+
+1. Join the [DefiLlama Discord server](https://discord.gg/bQNGsqgD).
+2. Message the `#listings` channel about the changes you'd like to make.
+3. Wait for a response from the team.
+
+If you don't have a Discord account, you can always reach the Defi Llama team through Github or Twitter. Responses are generally quicker on Discord.
+
+If you'd like to list or update the listing for an NFT project, also get in contact with the Defi Llama team over Discord.
diff --git a/docs/build/build-on-layer-1/integrations/dapp-listing/img/1.png b/docs/build/build-on-layer-1/integrations/dapp-listing/img/1.png
new file mode 100644
index 0000000..79b7392
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/dapp-listing/img/1.png differ
diff --git a/docs/build/build-on-layer-1/integrations/dapp-listing/img/2.png b/docs/build/build-on-layer-1/integrations/dapp-listing/img/2.png
new file mode 100644
index 0000000..9b5beac
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/dapp-listing/img/2.png differ
diff --git a/docs/build/build-on-layer-1/integrations/dapp-listing/img/3.png b/docs/build/build-on-layer-1/integrations/dapp-listing/img/3.png
new file mode 100644
index 0000000..7e0832e
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/dapp-listing/img/3.png differ
diff --git a/docs/build/build-on-layer-1/integrations/dapp-listing/img/4.png b/docs/build/build-on-layer-1/integrations/dapp-listing/img/4.png
new file mode 100644
index 0000000..e173fbb
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/dapp-listing/img/4.png differ
diff --git a/docs/build/build-on-layer-1/integrations/dapp-listing/img/5.png b/docs/build/build-on-layer-1/integrations/dapp-listing/img/5.png
new file mode 100644
index 0000000..ef44184
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/dapp-listing/img/5.png differ
diff --git a/docs/build/build-on-layer-1/integrations/dapp-listing/img/6.png b/docs/build/build-on-layer-1/integrations/dapp-listing/img/6.png
new file mode 100644
index 0000000..523cfe9
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/dapp-listing/img/6.png differ
diff --git a/docs/build/build-on-layer-1/integrations/index.md b/docs/build/build-on-layer-1/integrations/index.md
new file mode 100644
index 0000000..59db61e
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/index.md
@@ -0,0 +1,10 @@
+# Integrations
+
+Here you will find some common services available to developers building dApps on Astar Network, including sample configurations and guides for many important elements of our infrastructure. For zkEVM-specific integrations please visit [this section](/docs/build/build-on-layer-2/integrations/index.md).
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
diff --git a/docs/build/build-on-layer-1/integrations/indexers/_category_.json b/docs/build/build-on-layer-1/integrations/indexers/_category_.json
new file mode 100644
index 0000000..ac73412
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "GraphQL & Indexers",
+ "position": 5
+}
diff --git a/docs/build/build-on-layer-1/integrations/indexers/bluez.md b/docs/build/build-on-layer-1/integrations/indexers/bluez.md
new file mode 100644
index 0000000..6e340c0
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/bluez.md
@@ -0,0 +1,30 @@
+---
+title: Bluez.app NFT API
+sidebar_position: 6
+---
+
+# Bluez.app OpenAPI
+
+Bluez.app is a community-driven NFT marketplace for the Astar network. The Bluez API provides developers with quick and convenient access to comprehensive NFT data, such as transaction history, ownership details, pricing trends, and more.
+
+:::note
+You will need to [obtain an API key](https://docs.google.com/forms/d/e/1FAIpQLSf5Fa3Tapwakj5O--peMN9woGc54gXLyOXB1aSG5ewciT0FPQ/viewform) to use the Bluez API.
+:::
+
+## How to use the Bluez.app OpenAPI
+
+First, obtain an API key [here](https://docs.google.com/forms/d/e/1FAIpQLSf5Fa3Tapwakj5O--peMN9woGc54gXLyOXB1aSG5ewciT0FPQ/viewform). Once you have obtained your key, head over to the [playground](https://api.bluez.app/api/#/) where you'll be able to reference various queries, and try them live before using them within your specific application.
+
+### Examples of GET queries available through this API
+
+- `/nft/v3/{apiKey}/getNFTsForOwner`
+- `/nft/v3/{apiKey}/getNFTMetadata`
+- `/nft/v3/{apiKey}/getNFTsForContract`
+- `/nft/v3/{apiKey}/getOwnersForNFT`
+- `/nft/v3/{apiKey}/getOwnersForContract`
+- `/nft/v3/{apiKey}/getNFTSales`
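As a sketch of how these paths compose into a full request, the helper below builds a `getNFTsForOwner` URL. The host and the `owner` query parameter are assumptions, so confirm both in the playground before relying on them:

```js
// Hypothetical URL builder for the Bluez getNFTsForOwner endpoint.
// The host and query parameter name are assumptions; verify in the playground.
function getNFTsForOwnerUrl(host, apiKey, owner) {
  return `${host}/nft/v3/${encodeURIComponent(apiKey)}/getNFTsForOwner?owner=${encodeURIComponent(owner)}`;
}
```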
diff --git a/docs/build/build-on-layer-1/integrations/indexers/covalent.md b/docs/build/build-on-layer-1/integrations/indexers/covalent.md
new file mode 100644
index 0000000..55fcd9f
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/covalent.md
@@ -0,0 +1,66 @@
+---
+title: Covalent
+sidebar_position: 7
+---
+
+# Covalent Indexing and Querying API
+[Covalent](https://www.covalenthq.com/?utm_source=astar&utm_medium=partner-docs) is a hosted blockchain data solution providing access to historical and current on-chain data for 100+ supported blockchains, including [Astar](https://www.covalenthq.com/docs/networks/astar/?utm_source=astar&utm_medium=partner-docs).
+
+Covalent maintains a full archival copy of every supported blockchain, meaning all balance, transaction, log event, and NFT asset data is available from the genesis block. This data is available via:
+
+1. [Unified API](#unified-api) - Incorporate blockchain data into your app with a familiar REST API
+2. [Increment](#increment) - Create and embed custom charts with no-code analytics
+
+**Use Covalent if you need:**
+* Structured and enhanced on-chain data well beyond what you get from RPC providers
+* Broad and deep multi-chain data at scale
+* Enterprise-grade performance
+
+> **[Sign up to start building on Astar](https://www.covalenthq.com/platform/?utm_source=astar&utm_medium=partner-docs)**
+
+
+## Unified API
+
+[![example-api-response-json](https://www.datocms-assets.com/86369/1686098284-example-api-response-json-astar.png)](https://www.covalenthq.com/docs/api/balances/get-token-balances-for-address/?utm_source=astar&utm_medium=partner-docs)
+
+The Covalent API is RESTful and offers the following for Astar:
+
+| **Features**| |
+|---|---|
+| Response Formats | JSON, CSV |
+| Real-Time Data Latency | 2 blocks |
+| Batch Data Latency | 30 minutes |
+| Supported Network: `chainName`, `chainId` | Mainnet: `astar-mainnet`, `592` Testnet: `astar-shibuya`, `81` |
+| API Tiers | [Free tier](https://www.covalenthq.com/docs/unified-api/pricing/?utm_source=astar&utm_medium=partner-docs#free-tier) [Premium tier](https://www.covalenthq.com/docs/unified-api/pricing/?utm_source=astar&utm_medium=partner-docs#premium-tier) |
+| API Categories | [Balances](https://www.covalenthq.com/docs/api/balances/get-token-balances-for-address/?utm_source=astar&utm_medium=partner-docs) [NFTs](https://www.covalenthq.com/docs/api/nft/get-nfts-for-address/?utm_source=astar&utm_medium=partner-docs) [Transactions](https://www.covalenthq.com/docs/api/transactions/get-transactions-for-address/?utm_source=astar&utm_medium=partner-docs) [Security](https://www.covalenthq.com/docs/api/security/get-token-approvals-for-address/?utm_source=astar&utm_medium=partner-docs) [Log Events & Others](https://www.covalenthq.com/docs/api/base/get-log-events-by-contract-address/?utm_source=astar&utm_medium=partner-docs)
+
+### Get started
+- [API Key](https://www.covalenthq.com/platform/?utm_source=astar&utm_medium=partner-docs) - sign up for free
+- [Quickstart](https://www.covalenthq.com/docs/unified-api/quickstart/?utm_source=astar&utm_medium=partner-docs) - summary of key resources to get you building immediately on blockchain
+- [API Reference](https://www.covalenthq.com/docs/api/?utm_source=astar&utm_medium=partner-docs) - try all the endpoints directly from your browser
+- [Guides](https://www.covalenthq.com/docs/unified-api/guides/?utm_source=astar&utm_medium=partner-docs) - learn how to build dapps, fetch data and extend your Web3 knowledge
+
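As a quick illustration of the REST style, the sketch below composes a token-balances request for Astar mainnet using the `chainName` from the table above. The endpoint path reflects our reading of the Covalent Unified API, so confirm it against the API Reference before use:

```js
// Hedged sketch: token balances for an address on Astar mainnet via Covalent.
// The endpoint path is an assumption; confirm it in the API Reference.
function balancesUrl(chainName, address, apiKey) {
  return `https://api.covalenthq.com/v1/${chainName}/address/${address}/balances_v2/?key=${apiKey}`;
}

async function getAstarBalances(address, apiKey) {
  const res = await fetch(balancesUrl("astar-mainnet", address, apiKey));
  const { data } = await res.json();
  return data.items; // one entry per token held by the address
}
```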
+## Increment
+
+[![example-increment-chart](https://www.datocms-assets.com/86369/1684974544-increment-example-partner-docs.png)](https://www.covalenthq.com/platform/increment/#/?utm_source=astar&utm_medium=partner-docs)
+
+Increment is a novel no-code charting and reporting tool powered by Covalent, revolutionizing how the Web3 space approaches analytics. Many analytics tools let you write SQL to create charts, but *Increment is the only one to encode business logic - Reach, Retention, and Revenue - into an SQL compiler that can write valid SQL for you.*
+
+### Use cases
+Increment can be used for:
+
+- [Analyzing Blockchain Networks](https://www.covalenthq.com/docs/increment/data-models/chain-gdp/?utm_source=astar&utm_medium=partner-docs)
+- [Analyzing DEXs](https://www.covalenthq.com/docs/increment/data-models/swap-land/?utm_source=astar&utm_medium=partner-docs)
+- [Analyzing NFT Marketplaces](https://www.covalenthq.com/docs/increment/data-models/jpeg-analysis/?utm_source=astar&utm_medium=partner-docs)
+
+For example, click on the following table to get the latest number of active wallets, transactions and tokens by day, week, month or year for Astar:
+[![example-network-status-increment-ftm](https://www.datocms-assets.com/86369/1686100924-example_network_status_increment_general.png)](https://www.covalenthq.com/docs/networks/astar/?utm_source=astar&utm_medium=partner-docs#network-status)
+
+
+### Get started
+
+- [Increment](https://www.covalenthq.com/platform/increment/#/?utm_source=astar&utm_medium=partner-docs) - log in via the Covalent Platform
+- [Docs](https://www.covalenthq.com/docs/increment/?utm_source=astar&utm_medium=partner-docs) - learn how to use Increment to build dynamic, custom charts
+- [Data Models Demo](https://www.covalenthq.com/docs/increment/data-models/model-intro/?utm_source=astar&utm_medium=partner-docs) - build analytics in 3 clicks
+- [Explore Models. Seek Alpha.](https://www.covalenthq.com/platform/increment/#/pages/covalent/chain-gdp/?utm_source=astar&utm_medium=partner-docs) - browse all data models
+- [Use Models. Become Alpha.](https://www.covalenthq.com/platform/increment/#/sql/query_b6c88fd8604f49d5920ca86fa7/?utm_source=astar&utm_medium=partner-docs) - use a data model
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/1.gif b/docs/build/build-on-layer-1/integrations/indexers/img/1.gif
new file mode 100644
index 0000000..bd80425
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/1.gif differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/2.gif b/docs/build/build-on-layer-1/integrations/indexers/img/2.gif
new file mode 100644
index 0000000..839a131
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/2.gif differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/3.gif b/docs/build/build-on-layer-1/integrations/indexers/img/3.gif
new file mode 100644
index 0000000..555b2d4
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/3.gif differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/4.png b/docs/build/build-on-layer-1/integrations/indexers/img/4.png
new file mode 100644
index 0000000..c3d018b
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/4.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/5.webp b/docs/build/build-on-layer-1/integrations/indexers/img/5.webp
new file mode 100644
index 0000000..13146fa
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/5.webp differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/6.webp b/docs/build/build-on-layer-1/integrations/indexers/img/6.webp
new file mode 100644
index 0000000..44087a8
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/6.webp differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/7.png b/docs/build/build-on-layer-1/integrations/indexers/img/7.png
new file mode 100644
index 0000000..ed26d3b
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/7.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/8.png b/docs/build/build-on-layer-1/integrations/indexers/img/8.png
new file mode 100644
index 0000000..76fa5af
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/8.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/9.png b/docs/build/build-on-layer-1/integrations/indexers/img/9.png
new file mode 100644
index 0000000..94a3e40
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/9.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio1.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio1.png
new file mode 100644
index 0000000..87deb4e
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio1.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio10.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio10.png
new file mode 100644
index 0000000..487e16a
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio10.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio11.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio11.png
new file mode 100644
index 0000000..e89f901
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio11.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio12.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio12.png
new file mode 100644
index 0000000..cad1517
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio12.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio13.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio13.png
new file mode 100644
index 0000000..0da1d88
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio13.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio14.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio14.png
new file mode 100644
index 0000000..c826af5
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio14.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio15.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio15.png
new file mode 100644
index 0000000..2890f5b
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio15.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio16.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio16.png
new file mode 100644
index 0000000..fbae78c
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio16.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio17.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio17.png
new file mode 100644
index 0000000..90cbcce
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio17.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio18.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio18.png
new file mode 100644
index 0000000..2899ab2
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio18.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio19.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio19.png
new file mode 100644
index 0000000..417b437
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio19.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio2.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio2.png
new file mode 100644
index 0000000..29f7727
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio2.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio20.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio20.png
new file mode 100644
index 0000000..1110320
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio20.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio21.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio21.png
new file mode 100644
index 0000000..a781212
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio21.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio22.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio22.png
new file mode 100644
index 0000000..90fd5b1
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio22.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio23.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio23.png
new file mode 100644
index 0000000..839750c
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio23.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio3.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio3.png
new file mode 100644
index 0000000..a7ad220
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio3.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio4.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio4.png
new file mode 100644
index 0000000..41592da
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio4.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio5.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio5.png
new file mode 100644
index 0000000..a9027e4
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio5.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio55.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio55.png
new file mode 100644
index 0000000..79242fa
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio55.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio6.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio6.png
new file mode 100644
index 0000000..a68e8bf
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio6.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio7.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio7.png
new file mode 100644
index 0000000..317c6dd
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio7.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio8.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio8.png
new file mode 100644
index 0000000..63993a8
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio8.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/sentio9.png b/docs/build/build-on-layer-1/integrations/indexers/img/sentio9.png
new file mode 100644
index 0000000..dcc060c
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/sentio9.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/subscan/contract_detail.png b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/contract_detail.png
new file mode 100644
index 0000000..02b5431
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/contract_detail.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/subscan/contract_verify.png b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/contract_verify.png
new file mode 100644
index 0000000..00e20fc
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/contract_verify.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/subscan/contracts.png b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/contracts.png
new file mode 100644
index 0000000..93e5ea5
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/contracts.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/subscan/failed.png b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/failed.png
new file mode 100644
index 0000000..c7bf8f5
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/failed.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/subscan/read_call.png b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/read_call.png
new file mode 100644
index 0000000..e037421
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/read_call.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/subscan/transaction_detail.png b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/transaction_detail.png
new file mode 100644
index 0000000..709aded
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/transaction_detail.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/subscan/transactions.png b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/transactions.png
new file mode 100644
index 0000000..ff0a23a
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/transactions.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/subscan/verified_contract.png b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/verified_contract.png
new file mode 100644
index 0000000..8746171
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/subscan/verified_contract.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/img/subsquidGraphiql.png b/docs/build/build-on-layer-1/integrations/indexers/img/subsquidGraphiql.png
new file mode 100644
index 0000000..6ec0f11
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/indexers/img/subsquidGraphiql.png differ
diff --git a/docs/build/build-on-layer-1/integrations/indexers/index.md b/docs/build/build-on-layer-1/integrations/indexers/index.md
new file mode 100644
index 0000000..505ca62
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/index.md
@@ -0,0 +1,61 @@
+# GraphQL Data Sources and Indexers
+
+Blockchain developers are often faced with the challenge of obtaining data from various APIs and data sources. Traditional methods may involve interacting directly with each parachain's API, which can be time-consuming and complex. Leveraging GraphQL data sources simplifies this process, enabling developers to fetch data from multiple sources seamlessly.
+
+Feel free to use these existing GraphQL endpoints provided for the following parachains:
+
+1. [Astar](https://squid.subsquid.io/gs-explorer-astar/graphql)
+2. [Shiden](https://squid.subsquid.io/gs-explorer-shiden/graphql)
+3. [Shibuya](https://squid.subsquid.io/gs-explorer-shibuya/graphql)
+
+## What is GraphQL?
+
+GraphQL is a query language for APIs and a runtime for executing those queries with your existing data. It provides an efficient and powerful alternative to REST and offers significant advantages when dealing with complex data models.
+
+GraphQL allows clients to define the structure of the responses they receive. This means that instead of receiving a fixed data structure from a server, clients can request specific data they need, leading to more efficient data loading and a reduction in data over-fetching.
+
+## Why Use GraphQL for Parachain Data?
+
+Parachains in the Astar ecosystem often provide vast amounts of data. However, accessing and manipulating this data using traditional REST APIs can be challenging, particularly when you need to combine data from different parachains.
+
+Using GraphQL, you can retrieve specific data from multiple parachains using a similar query, allowing for efficient data retrieval and manipulation. GraphQL APIs for these parachains provide a unified interface to interact with the chains, irrespective of the individual data structures used by each parachain.
+
+## How to Query Data from Parachains using GraphQL
+
+Below is a simple example of how to fetch data from a parachain using GraphQL. We will use a GraphQL client, such as Apollo Client or urql, to execute the query.
+
+```javascript
+import { ApolloClient, InMemoryCache, gql } from '@apollo/client';
+
+// Replace with the GraphQL endpoint of the parachain you want to interact with
+const client = new ApolloClient({
+ uri: 'https://squid.subsquid.io/gs-explorer-astar/graphql',
+ cache: new InMemoryCache()
+});
+
+client.query({
+ query: gql`
+ query GetParachainData {
+ # Your query here
+ }
+ `
+}).then(response => console.log(response.data))
+ .catch(error => console.error(error));
+```
+
+Replace the `query GetParachainData` section with the actual GraphQL query you want to execute. The returned data will be in the structure you define in the query.
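+
+If you prefer not to pull in a client library, the same endpoints accept plain HTTP POST requests. Below is a minimal sketch assuming Node 18+ (built-in `fetch`); the `blocks` query is only illustrative, so check the endpoint's GraphiQL explorer for the fields it actually exposes.
+
+```javascript
+// Map of the public explorer endpoints listed above.
+const ENDPOINTS = {
+  astar: 'https://squid.subsquid.io/gs-explorer-astar/graphql',
+  shiden: 'https://squid.subsquid.io/gs-explorer-shiden/graphql',
+  shibuya: 'https://squid.subsquid.io/gs-explorer-shibuya/graphql',
+};
+
+// Build the [url, options] pair for a GraphQL-over-HTTP POST request.
+function buildGraphqlRequest(network, query, variables = {}) {
+  const url = ENDPOINTS[network];
+  if (!url) throw new Error(`Unknown network: ${network}`);
+  return [url, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ query, variables }),
+  }];
+}
+
+// The `blocks` entity below is an assumption for illustration only.
+const [url, options] = buildGraphqlRequest('astar', `
+  query { blocks(limit: 5, orderBy: height_DESC) { height hash timestamp } }
+`);
+// Usage: fetch(url, options).then(res => res.json()).then(console.log);
+```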
+
+GraphQL provides a powerful tool for developers in the Astar ecosystem. By using GraphQL data sources, you can efficiently fetch and manipulate data from multiple parachains, simplifying data retrieval and potentially improving the performance of your applications.
+
+Remember, each parachain may expose different data and operations in their GraphQL API. Always refer to the respective API Explorer to understand the data and operations available.
+
+## Custom Indexing
+
+Take a look at these for your own custom GraphQL indexing needs:
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
diff --git a/docs/build/build-on-layer-1/integrations/indexers/onfinality.md b/docs/build/build-on-layer-1/integrations/indexers/onfinality.md
new file mode 100644
index 0000000..2ef58c2
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/onfinality.md
@@ -0,0 +1,43 @@
+---
+sidebar_position: 5
+---
+
+# OnFinality Unified NFT API
+
+## Introduction
+
+OnFinality's Unified NFT API will provide access to NFTs and their metadata for all popular standards across the Polkadot and Kusama ecosystems and beyond, in a single, simple request.
+
+The highly flexible GraphQL Unified API includes Collections, NFTs, Transactions and Metadata for all ERC721 and ERC1155 NFTs on different Polkadot and Kusama networks (including Astar and Shiden).
+
+The OnFinality NFT API will just be the first Unified API offered by OnFinality, with plans to expand to transactions, staking, and more.
+
+## Prerequisites
+
+OnFinality's Unified NFT API is provided as an open-source project and a publicly hosted GraphQL API that developers can start querying today.
+
+## Getting started
+
+Paste your queries into [https://nft-beta.onfinality.io/public](https://nft-beta.onfinality.io/public) and press play.
+
+Public GraphQL Endpoint (Beta): [nft-beta.api.onfinality.io](https://nft-beta.api.onfinality.io).
+
+*The Beta version of our public endpoint is intended for development, experimentation, and validation purposes. It should not be used in a production environment. We will be launching the production endpoint and service in the coming weeks.*
+
+A public rate limit is applied; contact support@onfinality.io to receive a higher rate limit.
+
+## Troubleshooting
+
+For optimal performance and responsible use of our shared resources, we recommend:
+
+- ❌ Avoid the `equalToInsensitive` filter, which has poor performance.
+- ✅ Convert addresses to lower case and use the `equalTo` filter.
+- ✅ Use pagination, e.g. `first: 10`.
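+
+Put together, a query following these tips might look like the sketch below. The entity and field names (`nfts`, `currentOwner`) are assumptions for illustration; consult the published schema for the real ones.
+
+```javascript
+// Lowercase the address first, filter with equalTo, and paginate with `first`.
+// Entity and field names here are illustrative, not taken from the real schema.
+const owner = '0xAbCdEf0123456789aBcDeF0123456789AbCdEf01'.toLowerCase();
+const query = `
+  query {
+    nfts(first: 10, filter: { currentOwner: { equalTo: "${owner}" } }) {
+      nodes { id collectionId }
+    }
+  }
+`;
+```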
+
+## Learn more
+
+The complete schema is available here [https://github.com/OnFinality-io/api-nft/blob/main/schema.graphql](https://github.com/OnFinality-io/api-nft/blob/main/schema.graphql)
+
+Visit our official docs for the list of supported networks/standards and latest updates to this feature! [https://documentation.onfinality.io/support/unified-nft-api](https://documentation.onfinality.io/support/unified-nft-api)
+
+We'd also love to hear from you, contact support@onfinality.io to put in a request for more networks and standards!
diff --git a/docs/build/build-on-layer-1/integrations/indexers/sentio.md b/docs/build/build-on-layer-1/integrations/indexers/sentio.md
new file mode 100644
index 0000000..85d79fb
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/sentio.md
@@ -0,0 +1,269 @@
+---
+title: Sentio Debugger
+sidebar_position: 8
+---
+
+import Figure from '/src/components/figure'
+
+# Sentio Debugger
+
+## Basic
+
+The Sentio debugger is a tool that helps developers understand how transactions work.
+
+Search for specific transactions on the [Explorer page](https://app.sentio.xyz/explorer).
+
+
+
+The Transaction Explorer has a few key features, including:
+
+## Transaction Information
+
+Sentio provides standard information about specific transactions.
+
+### Transaction Metadata
+
+For each transaction, Sentio adds standard transaction metadata, and a link to the block explorer page on the **Overview** tab.
+
+
+
+### Events
+
+Events are decoded where ABIs are available; otherwise they are displayed on a *best effort* basis on the **Events** tab.
+
+
+
+### State Diff
+
+When a transaction causes state changes, Sentio lists them on the **State** tab.
+
+
+
+### Contract Code Explorer
+
+Sentio provides a code explorer for all the related code on the **Contracts** tab.
+
+
+
+## Trace the Money
+
+The best way to understand a transaction is to trace the money. Sentio provides both **Balance Change** and **Fund Flow** analysis tools.
+
+### Balance Change
+
+While a transaction is executing, multiple contracts may have their balances updated. Sentio displays the balance changes that occur during a transaction.
+
+
+
+For example, in the MEV arbitrage transaction above, each party involved sees balances of different assets both increase and decrease, except one address (0xa0d...), whose assets only increase, indicating that it captured the arbitrage profit.
+
+### Fund Flows
+
+Sentio provides detailed and **ordered** fund flows. In the following example we visualize the process of how an arbitrageur made a profit by utilizing several trading venues.
+
+
+
+## Trace and Call
+
+Sentio provides a trace view of transactions.
+
+### Trace modes and options
+
+**Trace mode:** Full trace mode includes cross-contract calls (CALL) and in-contract calls (JUMP).
+
+
+
+You can also hide in-contract calls (JUMP) by turning off Full trace.
+
+**Options:** Users can hide static calls and select the level of trace displayed.
+
+
+
+**Call Graph:** Sentio provides the call graph that shows the contract interactions within a transaction.
+
+
+
+## Debugging
+
+To understand a transaction even further, developers can use the **Debugger** tab to visualize the execution line-by-line.
+
+
+
+### Debugger tab layout
+
+**Traces**
+
+In the upper-left section, Sentio shows the trace of the transaction; this is the same as in *Trace and Call* above. Users can select a location in the trace and execute directly to that position.
+
+
+
+
+**Stack Traces**
+
+The bottom-left section contains the current call stack information, for example:
+
+
+
+### Single-Step Mode
+
+:::info
+To use single-step mode:
+- Turn on single-step mode.
+- (optional) Use Debug Build -- Sentio will recompile the contract with different compiler parameters to achieve the best source mapping. See **Limitations** below.
+:::
+
+
+
+
+**Use Debugger**
+
+The debugger has standard definitions of:
+
+- Step-Over: Move to the next line of execution.
+- Step-Into: If there is a function, steps into the function.
+- Step-Out: If we are in a function, steps out of the function to the upper level.
+- Continue: This is the standard break-point.
+- Restart: Restart from the beginning.
+
+**Inspect Variables**
+
+The debugger automatically shows the local variables within the call context, and all the contract variables.
+
+
+
+The debugger also supports adding **user defined watched variables (similar to a regular debugger.)**
+
+
+
+**Limitations**
+
+- Contracts compiled with the viaIR option are not fully supported.
+- When debugging a release build, since they are fully optimized, source-mapping issues and unexpected execution orders may present themselves.
+- When debugging a debug build, gas usage is ignored, which may cause the code to execute differently. For example, if the original transaction ran out of gas, the debug build will indicate that the transaction executed fully.
+
+### Function-only Mode
+
+If single-step mode is turned off, the debugger will behave at the *function* level.
+
+**Use the debugger**
+
+The debugger has standard definitions of:
+
+- Next: proceeds to the next function call (depth first search order)
+- Previous: reverts to the previous function call
+- Step Over: proceeds to the next function call (**does not** follow nested calls)
+- Step Up: goes up one level
+
+**Inspect the variables**
+
+In this mode, developers can visualize **Inputs**, **Return Value** and **Gas info.**
+
+
+
+## Simulation
+
+The Sentio simulator allows you to run simulations and analyze the data collected in great detail.
+You can quickly begin simulations through the Sentio [UI](https://docs.sentio.xyz/sentio-debugger/simulation/simulation-ui) or by calling the [API](https://docs.sentio.xyz/sentio-debugger/simulation/simulation-api).
+
+### Simulation UI
+
+**From existing transaction**
+
+The simplest way to start a simulation is to click the simulator button, shown below, on a transaction that has been opened.
+
+
+
+In this case, it will copy all the parameters from the existing transaction, and you can make adjustments on top of them, such as the block number, block index, gas fee, block header, and state.
+
+
+
+Clicking the simulate transaction button saves this run to the simulation history of your project and shows you the result, just like the normal debugger UI.
+
+**Direct Build**
+
+You can also click the simulator button on the left navigation bar to go to the simulator page, which shows all past simulations. Clicking the simulation button in the right corner pops up a similar UI, but without prepopulated transaction data.
+
+
+
+**Override Contract**
+
+Use the compilations tab to upload a local contract compilation folder.
+
+
+
+When doing the simulation, choose the contract override.
+
+
+
+### Simulation API
+
+#### Create Simulation
+
+For all simulation API calls, you need an API key, passed in the `api-key` request header. Refer to [API Key](https://docs.sentio.xyz/references/concepts/api-key) for how to obtain one.
+
+The simulation parameters are included in the request body, as in the example below. Note that the `//` comments inside the JSON payload are illustrative only and must be removed before sending the request.
+
+```bash
+curl --location 'https://app.sentio.xyz/api/v1/solidity/simulate' \
+--header 'api-key: ' \
+--header 'Content-Type: application/json' \
+--data '{
+ "projectOwner": "",
+ "projectSlug": "",
+ "simulation": {
+ "networkId": "1", // Chain ID, "1" for Ethereum mainnet. See chainlist.org for details
+ "blockNumber": "17415072",
+ "transactionIndex": "97", // transaction index in the block
+
+ // standard field for evm transactions
+ "from": "0x5e8bb488e85ea732e17150862b1acfc213a7c13d",
+ "to": "0xef1c6e67703c7bd7107eed8303fbe6ec2554bf6b",
+ "value": "0x0",
+ "gas": "0x31ae2",
+ "gasPrice": "0xe59a1adbe",
+ "input": "0x3593564c000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000647dffef0000000000000000000000000000000000000000000000000000000000000002080c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000160000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000003077b58d5d378391980000000000000000000000000000000000000000000000000000000032b2ced3e40e9d100000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002000000000000000000000000082646b22a3960da69ef7a778c16dd6fb85dd999000000000000000000000000c02aaa39b223fe8d0a0e5c4f27ead9083c756cc200000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000032b2ced3e40e9d1",
+
+ // overrides
+ "stateOverrides": {
+ "0x0811fd1808e14f0b93f0514313965a5f142c5539": {
+ "balance": "0x1111111111111111"
+ }
+ },
+ "blockOverride": {
+ "baseFee": "0x0"
+ }
+ }
+}'
+```
+
+Your simulations are saved, and the response includes a unique ID for each simulation, which you will need when fetching simulation details.
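+
+The same request can be assembled in JavaScript. The sketch below assumes Node 18+ (built-in `fetch`); the empty key and project fields are placeholders to fill in with your own values.
+
+```javascript
+// Request body mirroring the curl example above; placeholders must be replaced.
+const simulationRequest = {
+  projectOwner: '',   // your project owner
+  projectSlug: '',    // your project slug
+  simulation: {
+    networkId: '1',          // chain ID; "1" is Ethereum mainnet
+    blockNumber: '17415072',
+    transactionIndex: '97',  // transaction index in the block
+    from: '0x5e8bb488e85ea732e17150862b1acfc213a7c13d',
+    to: '0xef1c6e67703c7bd7107eed8303fbe6ec2554bf6b',
+    value: '0x0',
+    gas: '0x31ae2',
+    gasPrice: '0xe59a1adbe',
+    // add `input` (calldata) and any overrides as in the curl example
+  },
+};
+
+// POST the simulation; the response includes the simulation's unique ID.
+async function createSimulation(apiKey, body) {
+  const res = await fetch('https://app.sentio.xyz/api/v1/solidity/simulate', {
+    method: 'POST',
+    headers: { 'api-key': apiKey, 'Content-Type': 'application/json' },
+    body: JSON.stringify(body),
+  });
+  return res.json();
+}
+```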
+
+#### Get Detail Trace
+
+**State Diff**
+
+Endpoint: `https://app.sentio.xyz/api/v1/solidity/state_diff` (API key required).
+
+
+
+Example:
+
+```bash
+curl --location 'https://app.sentio.xyz/api/v1/solidity/state_diff?networkId=1&txId.simulationId=pVwBCxr3&projectOwner=&projectSlug=' \
+--header 'api-key: '
+```
+
+#### Get Decoded Trace
+
+Endpoint: `https://app.sentio.xyz/api/v1/solidity/call_trace` (API key required).
+
+
+
+Example:
+
+```bash
+curl --location 'https://app.sentio.xyz/api/v1/solidity/call_trace?withInternalCalls=true&networkId=1&txId.simulationId=pVwBCxr3&projectOwner=&projectSlug=' \
+--header 'api-key: '
+```
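+
+The query string in these GET examples can also be built with `URLSearchParams`, which avoids hand-escaping mistakes. The values below are the same placeholders used in the curl example above.
+
+```javascript
+// Assemble the call_trace URL programmatically instead of hand-concatenating it.
+const params = new URLSearchParams({
+  withInternalCalls: 'true',
+  networkId: '1',
+  'txId.simulationId': 'pVwBCxr3',
+  projectOwner: '',
+  projectSlug: '',
+});
+const callTraceUrl = `https://app.sentio.xyz/api/v1/solidity/call_trace?${params}`;
+// Pass the API key in the `api-key` header when fetching this URL.
+```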
+
+For more information about Sentio Debugger and for information not listed here, visit their [official documentation](https://docs.sentio.xyz) page.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/integrations/indexers/subquery.md b/docs/build/build-on-layer-1/integrations/indexers/subquery.md
new file mode 100644
index 0000000..ffd54b8
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/subquery.md
@@ -0,0 +1,360 @@
+---
+sidebar_position: 2
+---
+
+# SubQuery
+
+## What is SubQuery?
+
+SubQuery is an open-source, universal blockchain data indexer for developers that provides fast, flexible, reliable, and decentralised APIs to power leading multi-chain apps. Our goal is to save developers time and money by eliminating the need to build their own indexing solution, so they can focus fully on developing their applications.
+
+SubQuery's superior indexing capabilities support Astar native, EVM, and WASM-based smart contracts all out of the box. Starter projects, which run locally in a Docker container, allow developers to get up and running and index blockchain data in minutes.
+
+Another one of SubQuery's competitive advantages is the ability to aggregate data not only within a chain but across blockchains all within a single project. This allows the creation of feature-rich dashboard analytics or multi-chain block scanners.
+
+Other advantages include superior performance through multiple RPC endpoint configurations, multi-worker capabilities, and a configurable caching architecture. To find out more, visit our [documentation](https://academy.subquery.network/).
+
+## Prerequisites
+
+[Docker](https://docs.docker.com/get-docker/): a containerization platform for software solutions.
+
+SubQuery CLI: the command-line tool for creating SubQuery projects. Install it by running the following:
+
+```bash
+npm install -g @subql/cli
+```
+
+## Getting started
+
+This quick start guide introduces SubQuery's Substrate WASM support using an example project on Astar Network. The example project indexes all Transactions and Approvals from the [Astar WASM-based lottery contract](https://astar.subscan.io/account/bZ2uiFGTLcYyP8F88XzXa13xu5Mmp13VLiaW1gGn7rzxktc), as well as dApp staking events from [Astar's dApp staking](https://docs.astar.network/docs/dapp-staking/) functions.
+
+This project is unique in that it indexes data from both Astar's Substrate execution layer (native Astar pallets and runtime) and Astar's WASM smart contract layer, within the same SubQuery project and the same dataset. A very similar approach can be taken to index Astar's EVM layer.
+
+Initialise the SubQuery starter project with `subql init`, choose `Substrate` as the network family and `Astar` as the network, and then select `astar-wasm-starter` for the purposes of this guide.
+
+```bash
+~$ subql init astar-demo
+? Select a network family Substrate
+? Select a network Astar
+? Select a template project (Use arrow keys or type to search)
+❯ astar-evm-starter Astar EVM project template tutorial
+ astar-wasm-starter Astar WASM project template tutorial
+ astar-starter Starter project for Astar
+ Other Enter a custom git endpoint
+```
+
+Continue with the set-up by following the prompt and customising the parameters or accepting the defaults.
+
+Visit the [SubQuery quick start guide](https://academy.subquery.network/quickstart/quickstart.html) for more details.
+
+## Customizing the project in 3 simple steps
+There are 3 important files that need to be modified. These are:
+
+1. The GraphQL Schema in `schema.graphql`
+2. The Project Manifest in `project.yaml`
+3. The Mapping functions in the `src/mappings/` directory
+
+### 1. Customize the schema file
+
+The `schema.graphql` file determines the shape of the data you can query from SubQuery via the GraphQL query language. Hence, updating the GraphQL schema file is the perfect place to start: it lets you define your end goal right from the outset.
+
+The Astar-wasm-starter project has four entities: Transaction, Approval, DApp, and DAppReward (which has a [foreign key](https://academy.subquery.network/build/graphql.html#one-to-many-relationships) to DApp). These index basic block data such as the timestamp, height, and hash, along with the from and contract addresses and the value.
+
+```graphql
+type Transaction @entity {
+ id: ID! # Transaction hash
+ transactionHash: String
+ blockHeight: BigInt
+ blockHash: String
+ timestamp: Date
+ value: BigInt
+ from: String!
+ to: String!
+ contractAddress: String!
+}
+
+type Approval @entity {
+ id: ID! # Transaction hash
+ blockHeight: BigInt
+ value: BigInt
+ hash: String
+ owner: String!
+ spender: String!
+ contractAddress: String!
+}
+
+type DApp @entity {
+  id: ID! # ID is a required field
+ accountID: String!
+ totalStake: BigInt!
+}
+
+type DAppReward @entity {
+ id: ID!
+ dApp: DApp!
+ accountID: String!
+ eraIndex: Int!
+ balanceOf: BigInt!
+}
+```
+
+When you make any changes to the schema file, please ensure that you regenerate your types directory via `yarn codegen` or `npm run-script codegen`.
+
+You will find the generated models in the `/src/types/models` directory.
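+As an illustrative sketch (assumption: the real generated files also include persistence helpers, such as static `get()` and `create()` methods backed by the store), a generated model for the `Transaction` entity is roughly a typed class mirroring the schema:
+
+```typescript
+// Simplified, hypothetical shape of a generated model; the real file in
+// src/types/models also wires each field to SubQuery's database store.
+class Transaction {
+  // `id` is the only required constructor argument, matching `id: ID!`
+  constructor(public id: string) {}
+
+  transactionHash?: string;
+  blockHeight?: bigint;
+  blockHash?: string;
+  timestamp?: Date;
+  value?: bigint;
+  from!: string; // non-nullable in the schema (String!)
+  to!: string;
+  contractAddress!: string;
+}
+
+const tx = new Transaction("3281781-0");
+tx.from = "WJWxmJ27TdMZqvzLx18sZpH9s5ir9irFm1LRfbDeByamdHf";
+tx.blockHeight = BigInt(3281781);
+```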
+
+Check out the [GraphQL Schema](https://academy.subquery.network/build/graphql.html) documentation to get in-depth information on the `schema.graphql` file.
+
+### 2. The project manifest file
+The Project Manifest (`project.yaml`) file works as an entry point to your project. It defines most of the details on how SubQuery will index and transform the chain data. For Substrate/Polkadot chains, there are three types of mapping handlers:
+
+- [BlockHandlers](https://academy.subquery.network/build/manifest/polkadot.html#mapping-handlers-and-filters): On each and every block, run a mapping function
+- [EventHandlers](https://academy.subquery.network/build/manifest/polkadot.html#mapping-handlers-and-filters): On each and every Event that matches optional filter criteria, run a mapping function
+- [CallHandlers](https://academy.subquery.network/build/manifest/polkadot.html#mapping-handlers-and-filters): On each and every extrinsic call that matches optional filter criteria, run a mapping function
+
+For [EVM](https://academy.subquery.network/build/substrate-evm.html) and [WASM](https://academy.subquery.network/build/substrate-wasm.html) data processors on Substrate/Polkadot chains, there are only two types of mapping handlers:
+
+- [EventHandlers](https://academy.subquery.network/build/substrate-wasm.html#event-handlers): On each and every Event that matches optional filter criteria, run a mapping function
+- [CallHandlers](https://academy.subquery.network/build/substrate-wasm.html#call-handlers): On each and every extrinsic call that matches optional filter criteria, run a mapping function
+
+### Substrate Manifest section
+
+Since we are planning to index all dApp staking events, we need to update the `dataSources` section as follows:
+
+```yaml
+dataSources:
+ - kind: substrate/Runtime
+ # This is the datasource for Astar's Native Substrate processor
+ startBlock: 1
+ mapping:
+ file: ./dist/index.js
+ handlers:
+ - handler: handleNewContract
+ kind: substrate/EventHandler
+ filter:
+ module: dappsStaking
+ method: NewContract
+ - handler: handleBondAndStake
+ kind: substrate/EventHandler
+ filter:
+ module: dappsStaking
+ method: BondAndStake
+ - handler: handleUnbondAndUnstake
+ kind: substrate/EventHandler
+ filter:
+ module: dappsStaking
+ method: UnbondAndUnstake
+ - handler: handleReward
+ kind: substrate/EventHandler
+ filter:
+ module: dappsStaking
+ method: Reward
+```
+
+This indicates that a `handleNewContract` mapping function will run whenever an event is emitted from the `NewContract` method of the `dappsStaking` pallet. Similarly, the other mapping functions will run for the three other events emitted from `dappsStaking`. This covers most interactions with the dApp staking feature that we are interested in.
+
+Check out our [Manifest File](https://academy.subquery.network/build/manifest/polkadot.html) documentation to get more information about the Project Manifest (`project.yaml`) file.
+
+### WASM Manifest Section
+
+If you're not using the [WASM starter template](https://github.com/subquery/subql-starter/tree/main/Astar/astar-wasm-starter) then please add the Wasm Datasource as a dependency using `yarn add @subql/substrate-wasm-processor`.
+
+Here we are indexing all `Transfer` events and `approve` contract calls from the Astar contract `bZ2uiFGTLcYyP8F88XzXa13xu5Mmp13VLiaW1gGn7rzxktc`. First, you will need to import the contract ABI definition. You can copy the entire JSON and save it as `erc20Metadata.json` in the `abis` directory.
+
+This section of the Project Manifest imports all the correct definitions and lists the triggers to look for on the blockchain when indexing. We add another datasource beneath the above [Substrate manifest section](#substrate-manifest-section).
+
+```yaml
+dataSources:
+ - kind: substrate/Runtime
+ # This is the datasource for Astar's Native Substrate processor
+ ...
+ - kind: substrate/Wasm
+ # This is the datasource for Astar's Wasm processor
+ startBlock: 3281780
+ processor:
+ file: ./node_modules/@subql/substrate-wasm-processor/dist/bundle.js
+ options:
+ abi: erc20
+ # contract: "a6Yrf6jAPUwjoi5YvvoTE4ES5vYAMpV55ZCsFHtwMFPDx7H" # Shibuya
+ contract: "bZ2uiFGTLcYyP8F88XzXa13xu5Mmp13VLiaW1gGn7rzxktc" # Mainnet
+ assets:
+ erc20:
+ file: ./abis/erc20Metadata.json
+ mapping:
+ file: ./dist/index.js
+ handlers:
+ - handler: handleWasmEvent
+ kind: substrate/WasmEvent
+ filter:
+ # contract: "a6Yrf6jAPUwjoi5YvvoTE4ES5vYAMpV55ZCsFHtwMFPDx7H" # Shibuya
+ contract: "bZ2uiFGTLcYyP8F88XzXa13xu5Mmp13VLiaW1gGn7rzxktc" # Mainnet
+ identifier: "Transfer"
+ - handler: handleWasmCall
+ kind: substrate/WasmCall
+ filter:
+ selector: "0x681266a0"
+ method: "approve"
+```
+
+The above code indicates that you will be running a `handleWasmEvent` mapping function whenever there is a `Transfer` event on any transaction from the Astar contract. Similarly, we will run the `handleWasmCall` mapping function whenever there is an `approve` call on the same contract.
+
+Check out our [Substrate Wasm](https://academy.subquery.network/build/substrate-wasm.html) documentation to get more information about the Project Manifest (`project.yaml`) file for Substrate WASM contracts.
+
+### 3. Customize the mapping file
+
+Mapping functions define how chain data is transformed into the optimised GraphQL entities that we previously defined in the `schema.graphql` file.
+
+Navigate to the default mapping functions in the `src/mappings` directory. There are multiple exported functions: `handleWasmCall`, `handleWasmEvent`, `handleNewContract`, `handleBondAndStake`, `handleUnbondAndUnstake`, and `handleReward`. We won't go through all of them here, but you should be able to figure out what each one does.
+
+```ts
+import { WasmCall } from "@subql/substrate-wasm-processor";
+import { AccountId, Balance } from "@polkadot/types/interfaces";
+import { Approval } from "../types";
+
+type ApproveCallArgs = [AccountId, Balance];
+
+export async function handleWasmCall(
+  call: WasmCall<ApproveCallArgs>
+): Promise<void> {
+ logger.info(`Processing WASM Call at ${call.blockNumber}`);
+ const approval = new Approval(`${call.blockNumber}-${call.idx}`);
+ approval.hash = call.hash;
+ approval.owner = call.from.toString();
+ approval.contractAddress = call.dest.toString();
+ if (typeof call.data !== "string") {
+ const [spender, value] = call.data.args;
+ approval.spender = spender.toString();
+ approval.value = value.toBigInt();
+ } else {
+ logger.info(`Decode call failed ${call.hash}`);
+ }
+ await approval.save();
+}
+```
+
+The `handleWasmCall` function receives call data from the WASM execution environment whenever a call matches the filters that were specified previously in the `project.yaml`. It instantiates a new `Approval` entity and populates the fields with data from the WASM call payload. Then the `.save()` function is used to save the new entity (_SubQuery will automatically save this to the database_).
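+The entity ID deserves a note: `${call.blockNumber}-${call.idx}` combines the block number with the call's index inside that block, giving every `Approval` a chain-wide unique, stable ID. A tiny illustrative helper (the function name is ours, not part of the starter project):
+
+```typescript
+// Builds an entity ID such as "3281781-0" from a block number and the
+// call's index within that block; unique chain-wide, stable across runs.
+function approvalId(blockNumber: number, idx: number): string {
+  return `${blockNumber}-${idx}`;
+}
+```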
+
+```ts
+import { SubstrateEvent } from "@subql/types";
+import { Balance } from "@polkadot/types/interfaces";
+import { DApp } from "../types";
+
+export async function handleBondAndStake(event: SubstrateEvent): Promise<void> {
+ logger.info(
+ `Processing new Dapp Staking Bond and Stake event at ${event.block.block.header.number}`
+ );
+ const {
+ event: {
+ data: [accountId, smartContract, balanceOf],
+ },
+ } = event;
+ // Retrieve the dapp by its ID
+ let dapp: DApp = await DApp.get(smartContract.toString());
+ if (!dapp) {
+ dapp = DApp.create({
+ id: smartContract.toString(),
+ accountID: accountId.toString(),
+ totalStake: BigInt(0),
+ });
+ }
+
+ dapp.totalStake += (balanceOf as Balance).toBigInt();
+ await dapp.save();
+}
+```
+
+The `handleBondAndStake` function receives Substrate event data from the native Substrate environment whenever an event matches the filters that were specified previously in the `project.yaml`. It extracts the various data from the event payload (in Substrate it's stored as an array of Codecs), then checks whether an existing DApp record exists. If none exists (e.g. it's a new dApp), it instantiates a new one, and then updates the total stake to reflect the new staking amount. Then the `.save()` function is used to save the new or updated entity (_SubQuery will automatically save this to the database_).
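+The get-or-create-then-accumulate pattern above can be sketched in isolation with plain TypeScript, using an in-memory `Map` in place of SubQuery's store (all names below are illustrative, not part of the generated project):
+
+```typescript
+// Sketch of the pattern used by handleBondAndStake: look the record up,
+// create it with a zero balance if it doesn't exist yet, then accumulate.
+interface DAppRecord {
+  id: string;        // smart contract address, used as the entity ID
+  accountID: string; // account observed in the first staking event
+  totalStake: bigint;
+}
+
+const store = new Map<string, DAppRecord>();
+
+function bondAndStake(contractId: string, accountID: string, amount: bigint): DAppRecord {
+  let dapp = store.get(contractId);
+  if (!dapp) {
+    // First BondAndStake event for this contract: create the record.
+    dapp = { id: contractId, accountID, totalStake: BigInt(0) };
+    store.set(contractId, dapp);
+  }
+  dapp.totalStake += amount; // accumulate stake across events
+  return dapp;
+}
+```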
+
+Check out our mappings documentation for [Substrate](https://academy.subquery.network/build/mapping/polkadot.html) and for the [Substrate WASM data processor](https://academy.subquery.network/build/substrate-wasm.html) to get detailed information on mapping functions for each type.
+
+## Build Your Project
+
+Next, build your project so that it is ready to run. Run the build command from the project's root directory: `yarn build` or `npm run-script build`. Note: whenever you make changes to your mapping functions, make sure to rebuild your project.
+
+
+## Run Your Project Locally with Docker
+
+SubQuery provides a Docker container to run projects very quickly and easily for development purposes.
+
+The `docker-compose.yml` file defines all the configurations that control how a SubQuery node runs. For a new project which you have just initialised, you won't need to change anything.
+
+Run the following command under the project directory: `yarn start:docker` or `npm run-script start:docker`. It may take a few minutes to download the required images and start the various nodes and Postgres databases.
+
+Visit [Running SubQuery Locally](https://academy.subquery.network/run_publish/run.html) to get more information on the file and the settings.
+
+## Query Your Project
+
+Once the container is running, navigate to http://localhost:3000 in your browser and run the sample GraphQL command provided in the README file. Below is an example query from the Astar-wasm-starter project.
+
+```graphql
+query {
+ transactions(first: 3, orderBy: BLOCK_HEIGHT_ASC) {
+ totalCount
+ nodes {
+ id
+ timestamp
+ blockHeight
+ transactionHash
+ blockHash
+ contractAddress
+ from
+ value
+ }
+ }
+}
+```
+
+Note:
+There is a _Docs_ tab on the right side of the playground which opens a documentation drawer. This documentation is automatically generated and helps you find the entities and methods you can query. Learn more about the GraphQL query language [here](https://academy.subquery.network/run_publish/graphql.html).
+
+You should see results similar to those below:
+
+```json
+{
+ "data": {
+ "transactions": {
+ "totalCount": 17,
+ "nodes": [
+ {
+ "id": "3281781-0",
+ "timestamp": "2023-04-04T14:37:54.532",
+ "blockHeight": "3281781",
+ "transactionHash": "0x4f57e6ab4e8337375871fe4c8f7ae2e71601ea7fbd135b6f8384eb30db31ec44",
+ "blockHash": "0x6d65fe39ae469afd74d32e34a61382b1bbda37983dea745ea2afe58e57d4afbc",
+ "contractAddress": "bZ2uiFGTLcYyP8F88XzXa13xu5Mmp13VLiaW1gGn7rzxktc",
+ "from": "WJWxmJ27TdMZqvzLx18sZpH9s5ir9irFm1LRfbDeByamdHf",
+ "value": "25000000000000000000"
+ },
+ {
+ "id": "3281792-0",
+ "timestamp": "2023-04-04T14:40:06.386",
+ "blockHeight": "3281792",
+ "transactionHash": "0xbe8d6f09a96ff44e732315fbeff2862e9bdeb8353612a0bfab10632c410d8135",
+ "blockHash": "0xaa09e8060068931a58a162c150ccb73e0b4de528185f1da92b049ab31c299e5a",
+ "contractAddress": "bZ2uiFGTLcYyP8F88XzXa13xu5Mmp13VLiaW1gGn7rzxktc",
+ "from": "aFNoZEM64m1ifrHAwEPEuhfRM5L7kjnPhmtYjZaQHX2zb6y",
+ "value": "32000000000000000000"
+ },
+ {
+ "id": "3281797-1",
+ "timestamp": "2023-04-04T14:41:06.786",
+ "blockHeight": "3281797",
+ "transactionHash": "0xfdb111a314ee4e4460a3f2ab06221d5985c50e8f5cbae5a12f4f73b222d5954c",
+ "blockHash": "0xeb4e49463e174fc993417e852f499ddc6e3c4a15f355a576a74772604f2132e5",
+ "contractAddress": "bZ2uiFGTLcYyP8F88XzXa13xu5Mmp13VLiaW1gGn7rzxktc",
+ "from": "aFNoZEM64m1ifrHAwEPEuhfRM5L7kjnPhmtYjZaQHX2zb6y",
+ "value": "57000000000000000000"
+ }
+ ]
+ }
+ }
+}
+```
+
+![4](img/4.png)
+
+## Next steps
+
+SubQuery's indexing experience is designed to be as fast and simple as possible, allowing developers to index blockchain data in minutes with the help of a starter project and a Docker environment.
+
+It is also flexible enough to index across different chains and to filter only the data relevant to your application, making it lightweight, fast, and efficient.
+
+We are excited to help you on your indexing journey so please reach out to us at the various links below to see how we can help further.
+
+## Resources
+* [SubQuery Network](https://subquery.network/)
+* [SubQuery Documentation](https://academy.subquery.network/)
+* [SubQuery Discord](https://discord.com/invite/subquery)
+* [SubQuery Twitter](https://twitter.com/SubQueryNetwork)
+* [SubQuery Blog](https://blog.subquery.network/)
diff --git a/docs/build/build-on-layer-1/integrations/indexers/subscan.md b/docs/build/build-on-layer-1/integrations/indexers/subscan.md
new file mode 100644
index 0000000..cba19e2
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/subscan.md
@@ -0,0 +1,67 @@
+---
+sidebar_position: 3
+---
+
+# Subscan
+## Introduction
+
+Subscan is a Substrate ecosystem explorer that not only allows ordinary users to view WASM smart contract and transaction details, but also provides developers with WASM smart contract verification and read/call capabilities.
+
+## Prerequisites
+ - [Subscan WASM smart contract dashboard for Astar network](https://astar.subscan.io/wasm_contract_dashboard)
+ - Basic WASM smart contract knowledge
+ - polkadot{.js} extension (optional; used to call WASM smart contracts)
+
+## Getting started
+
+### View WASM Transactions and Detail
+
+Visit [Transactions](https://astar.subscan.io/wasm_transaction) under the WASM category in Subscan navbar.
+
+![transactions](./img/subscan/transactions.png)
+
+Click on the link in a list item to view the transaction detail.
+
+![transaction_detail](./img/subscan/transaction_detail.png)
+
+### View WASM Smart Contracts and Detail
+
+Visit [Contracts](https://astar.subscan.io/wasm_contract) under the WASM category in the Subscan navbar.
+
+![contracts](./img/subscan/contracts.png)
+
+Click on the link in the list item to view contract detail.
+
+![contract_detail](./img/subscan/contract_detail.png)
+
+### Verify WASM Smart Contract
+
+Visit the [WASM Contract Verification Tool](https://astar.subscan.io/verify_wasm_contract) under the Tools category in the Subscan
+navbar, or the [Contract tab in the contract detail page](https://astar.subscan.io/wasm_contract/bZ2uiFGTLcYyP8F88XzXa13xu5Mmp13VLiaW1gGn7rzxktc?tab=contract), to verify a WASM smart contract.
+
+![contract_verify](./img/subscan/contract_verify.png)
+
+As the tip section says, we provide [docker images](https://quay.io/repository/subscan-explorer/wasm-compile-build?tab=tags) for developers to compile contracts. This ensures that Subscan's compilation environment is consistent with the contract deployer's.
+
+After compiling the contract in Docker, you need to fill in the Contract Verification form and submit the code and Cargo file from Docker. The contract verification process runs in the background and may take 5 to 10 minutes. Once it's done, you'll see the contract ABI and source code on the contract detail page, and you can read/call the contract as you like.
+
+![verified_contract](./img/subscan/verified_contract.png)
+
+### Read/Call WASM Smart Contract
+Visit the [Contract tab in the contract detail page](https://astar.subscan.io/wasm_contract/aBmKPunRKt9VaW6AuMS8ZUhpSYZqHJHYKhvjdNb1M4VQgqS?tab=contract&contractTab=read). Please note that the read/call features only apply to verified WASM smart contracts, and you need to connect the polkadot{.js} extension before calling a contract.
+
+![read_call](./img/subscan/read_call.png)
+
+## Troubleshooting
+
+**I've submitted the contract verification form, but the contract is still not verified**
+
+It happens mainly in two situations:
+1. The verification process is still in progress. Wait 5 to 10 minutes and check again.
+2. Verification failed. In this case, you'll see a Last Compiled Code Hash as shown below. Please check the parameters and confirm that the Docker image above was used for compilation.
+
+![failed](./img/subscan/failed.png)
+
+## Learn more
+
+[Contracts API Docs](https://support.subscan.io/#contracts-api) by Subscan
diff --git a/docs/build/build-on-layer-1/integrations/indexers/subsquid.md b/docs/build/build-on-layer-1/integrations/indexers/subsquid.md
new file mode 100644
index 0000000..88f6f73
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/subsquid.md
@@ -0,0 +1,135 @@
+---
+sidebar_position: 1
+---
+
+# Subsquid
+
+[Subsquid](https://subsquid.io) is an indexing framework supporting both [Substrate](/docs/build/build-on-layer-1/smart-contracts/wasm/index.md) and [EVM](/docs/build/build-on-layer-1/smart-contracts/EVM/index.md)-based chains. It is extremely flexible and offers [high syncing speeds](https://docs.subsquid.io/migrate/subsquid-vs-thegraph/). Subsquid comes with a set of code generation tools that make ready-to-use, customizable indexer projects ("squids") out of contracts' ABIs. WASM/ink! and EVM/Solidity contracts are supported. Once scraped, the contract data can be served over a GraphQL API or stored as a dataset.
+
+Squids can run either locally or in Aquarium, a cloud service provided by Subsquid. The service has, and will always have, a free tier; for more demanding applications it offers a premium subscription.
+
+Prerequisites: Node.js and the [Subsquid CLI](https://docs.subsquid.io/squid-cli/installation/), plus Docker if you want your data in PostgreSQL and/or served via a GraphQL API.
+
+## Generating a WASM/ink! squid
+
+A squid indexing events listed in a contract ABI can be generated with the `@subsquid/squid-gen-ink` tool. Begin by creating a new project with the tool's complementary template and installing dependencies:
+```bash
+sqd init my-ink-squid -t https://github.com/subsquid-labs/squid-ink-abi-template
+cd my-ink-squid
+npm i
+```
+Obtain any contract ABIs and save them to the `./abi` folder. For example, for indexing token contracts you can grab the `erc20` ABI from the `squid-gen-ink` repository:
+```bash
+curl -o abi/erc20.json https://raw.githubusercontent.com/subsquid/squid-gen/master/tests/ink-erc20/abi/erc20.json
+```
+Next, make a `squidgen.yaml` configuration file like this one:
+```yaml
+archive: shibuya
+target:
+ type: postgres
+contracts:
+ - name: testToken
+ abi: "./abi/erc20.json"
+ address: "0x5207202c27b646ceeb294ce516d4334edafbd771f869215cb070ba51dd7e2c72"
+ events:
+ - Transfer
+```
+Here,
+* **archive** is an alias or an endpoint URL of a chain-specific data lake responsible for the initial ingestion and filtration of the data. Subsquid maintains free archives for all Astar-related networks under aliases `astar`, `shibuya` and `shiden`.
+* **target** section describes how the scraped data should be stored. The example above uses a PostgreSQL database that can be presented to users as a GraphQL API or used as-is. Another option is to [store the data to a file-based dataset](https://docs.subsquid.io/basics/squid-gen/#file-store-targets).
+* **contracts** is a list of contracts to be indexed. All fields in the example above are required. Set `events: true` to index all events listed in the ABI.
+
+When done, run
+```bash
+npx squid-gen config squidgen.yaml
+```
+to generate the squid code.
+
+## Generating an EVM/Solidity squid
+
+There are two primary ways to index EVM contracts deployed to Astar with Subsquid:
+1. With a [Substrate processor](https://docs.subsquid.io/substrate-indexing/) utilizing its [support for the Frontier EVM pallet](https://docs.subsquid.io/substrate-indexing/evm-support/). This is useful when both EVM and Substrate data is required. If that is your use case, check out [this tutorial](https://docs.subsquid.io/tutorials/create-an-evm-processing-squid/).
+2. With a [native EVM processor](https://docs.subsquid.io/evm-indexing/). This simpler and more performant approach will be the focus of this section.
+
+Generating EVM squids is very similar to generating ink! squids. To create a project, execute
+```bash
+sqd init my-evm-squid -t abi
+cd my-evm-squid
+npm i
+```
+Note that a different template, `abi`, is used.
+
+Next, obtain any contract ABIs and save them to the `./abi` folder. In this example we will index the [PancakeFactory](https://blockscout.com/astar/address/0xA9473608514457b4bF083f9045fA63ae5810A03E) and [PancakeRouter](https://blockscout.com/astar/address/0xE915D2393a08a00c5A463053edD31bAe2199b9e7) contracts of [Arthswap](https://arthswap.org), taking their ABIs from the Blockscout pages ("Code" tab, "Contract ABI" section) and saving them to `./abi/factory.json` and `./abi/router.json`, respectively.
+
+The syntax of `squidgen.yaml` is almost the same as for the ink! tool, with the sole difference being that a `functions` field can now be set for contracts. It is used for indexing contract method calls. Here is an example config for generating a squid indexing the two Arthswap contracts:
+```yaml
+archive: astar
+target:
+ type: postgres
+contracts:
+ - name: pancakeFactory
+ abi: "./abi/factory.json"
+ address: "0xA9473608514457b4bF083f9045fA63ae5810A03E"
+ events: true
+ functions:
+ - feeTo
+ - feeToSetter
+ - name: pancakeRouter
+ abi: "./abi/router.json"
+ address: "0xE915D2393a08a00c5A463053edD31bAe2199b9e7"
+ events: true
+ functions: true
+```
+Note that the `astar` archive used by the EVM processor is different from the `astar` archive used by the Substrate processor in the ink! example. At the moment, `astar` is the only EVM archive that Subsquid maintains for Astar-related networks. If you happen to need an EVM archive for Shibuya or Shiden, please contact the Subsquid team directly using [this form](https://forms.gle/ioVNFiPjZgvUNunY9).
+
+Generate the squid code with
+```bash
+npx squid-gen config squidgen.yaml
+```
+
+## Starting the squid
+
+Once you're done generating the code for your squid, it is time to give it a local test run. The procedure is the same for both kinds of squids.
+
+If you used a `postgres` target, prepare the database and migrations by running
+```bash
+sqd up # starts a database container
+sqd migration:generate
+```
+
+Then start the squid *processor*, the process that ingests data from the archive:
+```bash
+sqd process
+```
+You should see the processor apply the migrations and start the ingestion, producing messages like
+```
+05:26:16 INFO sqd:processor 3483256 / 3483256, rate: 1045307 blocks/sec, mapping: 294 blocks/sec, 261 items/sec, ingest: 10 blocks/sec, eta: 0s
+05:26:35 INFO sqd:processor 3483257 / 3483257, rate: 157368 blocks/sec, mapping: 211 blocks/sec, 169 items/sec, ingest: 10 blocks/sec, eta: 0s
+05:26:56 INFO sqd:processor 3483259 / 3483259, rate: 79846 blocks/sec, mapping: 151 blocks/sec, 101 items/sec, ingest: 9 blocks/sec, eta: 0s
+```
+
+If the data is stored in the database, it should appear there almost instantaneously. Check it out with
+```bash
+PGPASSWORD=postgres psql -U postgres -p 23798 -h localhost squid
+```
+For file-based targets synchronization [takes longer](https://docs.subsquid.io/basics/store/file-store/overview/#filesystem-syncs-and-dataset-partitioning).
+
+If you want to serve the scraped data over GraphQL, you will need to start a separate GraphQL server process. The processor blocks the terminal, so open another one, navigate to the squid project folder, and run
+```bash
+sqd serve
+```
+The server will listen at `localhost:4350`, with a [GraphiQL](https://github.com/graphql/graphiql) playground available at [http://localhost:4350/graphql](http://localhost:4350/graphql).
+
+
+
+## Next steps
+
+Squid processes are just regular NodeJS processes that can be extended with regular Typescript code. The processor can [apply arbitrary transformations to the data](https://docs.subsquid.io/basics/squid-processor/#processorrun), query the contract state ([WASM](https://docs.subsquid.io/substrate-indexing/wasm-support/#state-queries)/[EVM](https://docs.subsquid.io/evm-indexing/query-state/)) or [mix in some data from external sources like APIs or IPFS](https://docs.subsquid.io/basics/external-api/). If you are using a GraphQL server, that can be extended too with [custom queries](https://docs.subsquid.io/graphql-api/custom-resolvers/) and simple [access control](https://docs.subsquid.io/graphql-api/authorization/). Check out Subsquid's [extensive documentation](https://docs.subsquid.io) to learn more about its features.
+
+Once you're done customizing your squid, you can run it on your own infrastructure or use the Aquarium cloud service. In the simplest case, the deployment can be done with just
+```bash
+sqd deploy .
+```
+after [authenticating with Aquarium](https://docs.subsquid.io/squid-cli/#1-obtain-an-aquarium-deployment-key). For more complex scenarios see the [Deploy a Squid](https://docs.subsquid.io/deploy-squid/) section of the framework documentation.
+
+Subsquid team can be reached via [Telegram](https://t.me/HydraDevs) and [Discord](https://discord.gg/dxR4wNgdjV). Feel free to stop by and chat!
diff --git a/docs/build/build-on-layer-1/integrations/indexers/thegraph.md b/docs/build/build-on-layer-1/integrations/indexers/thegraph.md
new file mode 100644
index 0000000..350d37c
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/indexers/thegraph.md
@@ -0,0 +1,131 @@
+---
+sidebar_position: 4
+---
+
+# The Graph
+
+[The Graph]: https://thegraph.com/en/
+
+## Overview: Why is The Graph needed?
+
+[The Graph] is a decentralized protocol for indexing and querying data from blockchains. It makes querying fast, reliable, and secure, and it allows anyone to build and publish application programming interfaces (APIs) called subgraphs, which act as intermediaries and allow applications to communicate with one another.
+
+## Prerequisites
+
+Before you run The Graph node on a server, you will need:
+
+- [Docker](https://docs.docker.com/get-docker/): Containerization platform for software solutions.
+- [`docker-compose`](https://docs.docker.com/compose/install/) : Used to automate interactions between docker containers.
+- [JQ](https://stedolan.github.io/jq/download/): JSON processor for graph requests.
+
+In this guide, we will demonstrate how to run an Astar node that provides indexing data to a Graph node, giving you more insight into the transactions on the blockchain.
+
+## Running a Graph Node
+
+After successfully running an [RPC node](https://docs.astar.network/docs/build/build-on-layer-1/nodes/archive-node/), you will need to install the Graph node on a separate machine and configure it to connect to the RPC node. If your RPC node uses a self-signed certificate, you will need to set an extra environment variable to allow the connection.
+
+The first step is to clone the [Graph Node repository](https://github.com/graphprotocol/graph-node/):
+
+```sh
+git clone https://github.com/graphprotocol/graph-node/ \
+&& cd graph-node/docker
+```
+
+Next, execute the `setup.sh` file. This will pull all the necessary Docker images and write the necessary information to the `docker-compose.yml` file. Do ensure that `docker-compose` and `jq` are installed.
+
+```sh
+sudo bash ./setup.sh
+```
+
+After running the command, the tail end of the output should show logs similar to those below:
+
+![8](img/8.png)
+
+Once everything is set up, you will need to modify the "Ethereum environment" inside the `docker-compose.yml` file so that the Graph node points to the RPC endpoint you are connecting to. Note that the `setup.sh` script detects the host IP of your RPC node and writes that value, so you may need to modify it accordingly.
+
+## Modifying the Ethereum Environment
+
+### Astar
+
+```sh
+# open docker-compose.yml
+nano docker-compose.yml
+
+# modify file and save
+ethereum: 'astar:https://<node-ip>:<port>'
+```
+
+### Shiden
+
+```sh
+# open docker-compose.yml
+nano docker-compose.yml
+
+# modify file and save
+ethereum: 'shiden:https://<node-ip>:<port>'
+```
+
+### Shibuya
+
+```sh
+# open docker-compose.yml
+nano docker-compose.yml
+
+# modify file and save
+ethereum: 'shibuya:https://<node-ip>:<port>'
+```
+
+For example, if you are building a Graph node for Shiden, the entire `docker-compose.yml` now should appear as below:
+
+```yaml
+version: '3'
+services:
+ graph-node:
+ image: graphprotocol/graph-node
+ ports:
+ - '8000:8000'
+ - '8001:8001'
+ - '8020:8020'
+ - '8030:8030'
+ - '8040:8040'
+ depends_on:
+ - ipfs
+ - postgres
+ environment:
+ postgres_host: postgres
+ postgres_user: graph-node
+ postgres_pass: let-me-in
+ postgres_db: graph-node
+ ipfs: 'ipfs:5001'
+      ethereum: 'shiden:https://<node-ip>:<port>'
+ RUST_LOG: info
+ ipfs:
+ image: ipfs/go-ipfs:v0.4.23
+ ports:
+ - '5001:5001'
+ volumes:
+ - ./data/ipfs:/data/ipfs
+ postgres:
+ image: postgres
+ ports:
+ - '5432:5432'
+ command: ["postgres", "-cshared_preload_libraries=pg_stat_statements"]
+ environment:
+ POSTGRES_USER: graph-node
+ POSTGRES_PASSWORD: let-me-in
+ POSTGRES_DB: graph-node
+ volumes:
+ - ./data/postgres:/var/lib/postgresql/data
+```
+
+## Running The Graph containers
+
+Run the following command:
+
+```sh
+sudo docker-compose up
+```
+
+When everything is set up, the logs will appear as below:
+
+![9](img/9.png)
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/_category_.json b/docs/build/build-on-layer-1/integrations/node-providers/_category_.json
new file mode 100644
index 0000000..2a8d20b
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/node-providers/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Node Providers",
+ "position": 4
+}
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/alchemy.md b/docs/build/build-on-layer-1/integrations/node-providers/alchemy.md
new file mode 100644
index 0000000..aab40ca
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/node-providers/alchemy.md
@@ -0,0 +1,113 @@
+---
+sidebar_position: 4
+description: "Alchemy is a developer platform that helps companies to build scalable and reliable decentralized applications without the difficulty of managing blockchain infrastructure in-house."
+---
+
+# Alchemy
+[Alchemy](https://www.alchemy.com/) is a developer-focused platform that helps companies build scalable and reliable decentralized applications, without the difficulty of managing blockchain infrastructure in-house.
+
+- Alchemy Supernode offers the most reliable, scalable and accurate way to connect and build on the Astar blockchain.
+- Alchemy SDK gives Astar developers the easiest way to connect their dApps to the blockchain with just two lines of code.
+- Websockets & Webhooks allow you to subscribe and get notified about any address activity, as well as mined and dropped transactions.
+
+## Overview
+There are many ways to make requests to the Astar Network. This guide will show you how to use Alchemy and the APIs it provides to communicate with Astar mainnet, without having to run your own nodes.
+
+### Step 1. Create an Alchemy account
+
+To use the APIs and infrastructure provided by Alchemy, create a free Alchemy account [here](https://www.alchemy.com/).
+
+### Step 2. Create an App.
+
+
+To authenticate your requests, you need an API key.
+
+Once you’ve created an Alchemy account, you can generate an API key by creating an app. This will allow you to make requests to the Astar network.
+
+Navigate to the “Create App” page in your Alchemy Dashboard by hovering over “Apps” in the nav bar and clicking “Create App”.
+
+![](https://i.imgur.com/kC5t94Q.jpg)
+
+### Step 3. Enter App details.
+
+In the "Create App" window, choose the chain you are connecting to (Astar) and the network (Astar Mainnet). Currently Alchemy only supports the Astar Mainnet. Give your App a name and a description.
+
+![](https://i.imgur.com/LBbPAEC.jpg)
+
+
+### Step 4. Get your API key
+
+Perfect! Now we have created our app and by doing so, generated the API key and an endpoint. To see the API key, choose your newly created App in the list of your Apps and click on the "View Key" button.
+
+![](https://i.imgur.com/SFern1V.jpg)
+
+### Step 5. Save your API key
+
+Copy and save your API key; you will need it to send requests that query data from Astar.
+
+![](https://i.imgur.com/X4aGtSu.jpg)
+
+
+
+### Step 6. Install Alchemy SDK
+
+To interact with the Astar blockchain through Alchemy's infrastructure, we need to install the Alchemy SDK.
+
+Depending on your package manager, run the commands below in a terminal or command line:
+
+```sh
+# npm
+mkdir alchemy-astar-api
+cd alchemy-astar-api
+npm init --yes
+npm install alchemy-sdk
+```
+
+```sh
+# yarn
+mkdir alchemy-astar-api
+cd alchemy-astar-api
+yarn init --yes
+yarn add alchemy-sdk
+```
+
+
+### Step 7. Make your first request
+
+You are all set to make your first request! For instance, let's request the latest block number. Create an `index.js` file and paste the following code snippet into it.
+
+Be sure to paste your saved API key into the `apiKey` field, replacing `demo`.
+
+```js
+const { Network, Alchemy } = require("alchemy-sdk");
+
+// Optional Config object, but defaults to demo api-key and eth-mainnet.
+const settings = {
+ apiKey: "demo", // Replace with your Alchemy API Key.
+ network: Network.ASTAR_MAINNET,
+};
+
+const alchemy = new Alchemy(settings);
+
+async function main() {
+ const latestBlock = await alchemy.core.getBlockNumber();
+ console.log("The latest block number is", latestBlock);
+}
+
+main();
+```
+
+
+To run the script, execute `node index.js` in your terminal; you should see output similar to:
+
+```
+The latest block number is 2404244
+```
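+
+Most responses from other endpoints come back as wei-denominated big numbers rather than plain integers. Below is a minimal formatting helper using only built-in `BigInt` arithmetic; the `alchemy.core.getBalance` call shown in the trailing comment is an assumption based on Alchemy's ethers-style API, so verify it against the SDK reference before relying on it.
+
+```js
+// Convert a wei amount (BigInt or decimal string) into a human-readable
+// token string with the given number of decimals (18 for ASTR/ETH).
+function weiToToken(wei, decimals = 18) {
+  const base = 10n ** BigInt(decimals);
+  const value = BigInt(wei);
+  const whole = value / base;
+  const frac = (value % base).toString().padStart(decimals, "0");
+  return `${whole}.${frac}`;
+}
+
+// Example: 1.5 ASTR expressed in wei.
+console.log(weiToToken("1500000000000000000")); // 1.500000000000000000
+
+// With the Alchemy SDK, you would feed it a fetched balance, e.g.:
+//   const bal = await alchemy.core.getBalance(address, "latest");
+//   console.log(weiToToken(bal.toBigInt()));
+```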
+
+
+
+### Step 8. Start building
+
+You are now ready to start building on Astar with Alchemy!
+For more requests, explore the API endpoints [here](https://docs.alchemy.com/reference/astar-api-endpoints).
+
+
+[Alchemy]: https://www.alchemy.com/
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/blast.md b/docs/build/build-on-layer-1/integrations/node-providers/blast.md
new file mode 100644
index 0000000..139feaf
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/node-providers/blast.md
@@ -0,0 +1,62 @@
+---
+sidebar_position: 2
+description: "Blast is Bware Labs' API provider platform that aims to solve Web 3 infrastructure issues related to reliability and latency, by employing geographically distributed third-party nodes."
+---
+
+# Blast
+
+## Overview
+
+Blast is Bware Labs' API provider platform that aims to solve Web3 infrastructure issues related to reliability and latency, by employing geographically distributed third-party nodes.
+
+Blast offers a multi-region architecture that, along with a series of clustering and geo-location mechanisms, ensures optimal routing of user requests to the closest point of presence relative to where a call is generated from. Moreover, using third party-nodes scattered all over the world, Blast ensures the decentralization of the underlying blockchain infrastructures thus reducing down-time and increasing reliability.
+
+## API Usage
+
+Blast offers a standardized Blockchain API service that covers all infrastructure aspects of Web 3 development. For each supported blockchain, users are able to generate a dedicated endpoint that will allow them access to the majority of RPC methods required for dApp development and/or blockchain interaction. In the following sections, you will find detailed instructions for connecting and generating your endpoints, as well as RPC and WSS usage examples, together with platform limitations and payment conditions.
+
+Users joining the platform will be able to use the APIs for free within certain limitations and will have the option to upgrade to a standard paid subscription plan or to contact us to create a customized plan suitable to their needs.
+
+## Public Endpoints
+
+Here are two public APIs that cover Astar, Shiden, and Shibuya (plus one-click add-network to MetaMask):
+
+
+
+
+
+### Public RPC Endpoints
+
+
+
+
+
+
+
+### Public WSS Endpoints
+
+
+
+
+
+
+
+## Instructions
+
+1. Launch the app on:
+2. Connect the app to MetaMask. This prevents users from spamming the network. You only need to connect MetaMask to create an account and sign in to the app.
+![2](img/2.png)
+
+3. Now you can create an API endpoint. Click on '**Add project**' to create the environment.
+![3](img/3.png)
+
+4. Select the desired network and activate the endpoints:
+![4](img/4.png)
+
+After the endpoint is created, you will be able to use the RPC endpoint to connect to Astar mainnet through MetaMask, or the WSS endpoint through other applications. These endpoints are for your use only, and requests to them count towards your account limits.
+
+How to add an endpoint to MetaMask:
+
+1. Open MetaMask
+2. Click on Custom RPC
+3. Add the information
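+
+Programmatically, a dApp can register the same network information through MetaMask's standard `wallet_addEthereumChain` request (EIP-3085). A sketch of the request parameters for Astar is shown below; the RPC URL placeholder must be replaced with your own Blast endpoint, and the block explorer URL is illustrative:
+
+```js
+// Request parameters for MetaMask's wallet_addEthereumChain (EIP-3085).
+// Run from a dApp page with MetaMask installed.
+await window.ethereum.request({
+  method: "wallet_addEthereumChain",
+  params: [{
+    chainId: "0x250", // 592, Astar's EVM chain ID
+    chainName: "Astar",
+    nativeCurrency: { name: "Astar", symbol: "ASTR", decimals: 18 },
+    rpcUrls: ["<YOUR_BLAST_RPC_ENDPOINT>"],
+    blockExplorerUrls: ["https://blockscout.com/astar"],
+  }],
+});
+```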
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/1.png b/docs/build/build-on-layer-1/integrations/node-providers/img/1.png
new file mode 100644
index 0000000..af0e06b
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/1.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/10.png b/docs/build/build-on-layer-1/integrations/node-providers/img/10.png
new file mode 100644
index 0000000..d982b2d
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/10.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/11.png b/docs/build/build-on-layer-1/integrations/node-providers/img/11.png
new file mode 100644
index 0000000..a17b710
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/11.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/12.png b/docs/build/build-on-layer-1/integrations/node-providers/img/12.png
new file mode 100644
index 0000000..4cf6182
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/12.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/13.png b/docs/build/build-on-layer-1/integrations/node-providers/img/13.png
new file mode 100644
index 0000000..7f5c5a7
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/13.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/2.png b/docs/build/build-on-layer-1/integrations/node-providers/img/2.png
new file mode 100644
index 0000000..d86d1d1
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/2.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/3.png b/docs/build/build-on-layer-1/integrations/node-providers/img/3.png
new file mode 100644
index 0000000..956e69d
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/3.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/4.png b/docs/build/build-on-layer-1/integrations/node-providers/img/4.png
new file mode 100644
index 0000000..ae109d8
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/4.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/5.png b/docs/build/build-on-layer-1/integrations/node-providers/img/5.png
new file mode 100644
index 0000000..6e8cef7
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/5.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/6.png b/docs/build/build-on-layer-1/integrations/node-providers/img/6.png
new file mode 100644
index 0000000..c0ce350
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/6.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/7.png b/docs/build/build-on-layer-1/integrations/node-providers/img/7.png
new file mode 100644
index 0000000..1dd7fe5
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/7.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/8.png b/docs/build/build-on-layer-1/integrations/node-providers/img/8.png
new file mode 100644
index 0000000..460e72e
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/8.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/img/9.png b/docs/build/build-on-layer-1/integrations/node-providers/img/9.png
new file mode 100644
index 0000000..675c91e
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/node-providers/img/9.png differ
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/index.md b/docs/build/build-on-layer-1/integrations/node-providers/index.md
new file mode 100644
index 0000000..dfbb469
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/node-providers/index.md
@@ -0,0 +1,14 @@
+# Node Providers
+
+The free endpoints mentioned in the [Build Environment section](/docs/build/build-on-layer-1/environment/endpoints.md) are rate limited and designed for end users interacting with dApps, or deploying/calling smart contracts. They are not suitable for dApp UIs that scrape blockchain data continuously, or for indexers (like The Graph).
+
+
+If you are an active developer, consider creating your own endpoint; this is mandatory for production deployments. Refer to how to run an [archive node](/docs/build/build-on-layer-1/nodes/archive-node/index.md) for more information, or obtain an API key from one of the infrastructure providers listed below:
+
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/onfinality.md b/docs/build/build-on-layer-1/integrations/node-providers/onfinality.md
new file mode 100644
index 0000000..92fa3c3
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/node-providers/onfinality.md
@@ -0,0 +1,27 @@
+---
+sidebar_position: 1
+description: "OnFinality is a SaaS platform providing infrastructure services for the Polkadot community. Its mission is to support blockchain developers of all shapes and sizes by providing infrastructure services so they can focus on building the next dApp."
+---
+
+# OnFinality
+
+## Overview
+
+[OnFinality] is a SaaS platform providing infrastructure services for the Polkadot community. Its mission is to support blockchain developers by providing infrastructure services so they can focus on building dApps.
+
+OnFinality provides an API service for the Astar ecosystem and is live on all our networks. The initial integration of OnFinality allows users to:
+
+- Query our network RPC and WSS endpoints immediately using OnFinality’s free API service. You can create a free account key allowing up to 500k daily requests to high-performance managed nodes, including nodes for Polkadot and Kusama.
+- Quickly stand up dedicated nodes for private access to high-performance WSS and RPC APIs, without needing to manage your own infrastructure.
+
+## Instructions
+
+To create a custom OnFinality endpoint, go to [OnFinality] and sign up, or log in if you have an account already. From the **OnFinality Dashboard**, you can:
+
+1. Click on API Service
+2. Select the network from the dropdown
+3. Your custom API endpoint will be generated automatically
+
+![1](img/1.png)
+
+[OnFinality]: https://onfinality.io/
diff --git a/docs/build/build-on-layer-1/integrations/node-providers/pinknode.md b/docs/build/build-on-layer-1/integrations/node-providers/pinknode.md
new file mode 100644
index 0000000..d188f78
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/node-providers/pinknode.md
@@ -0,0 +1,60 @@
+---
+sidebar_position: 3
+description: "Pinknode is a Polkadot-focused Infrastructure-as-a-Service platform offering RPC service through globally distributed node architecture coupled with Kubernetes clustering and geo-location routing for increased reliability and reduced latency. You will be able to integrate into the Astar ecosystem with Pinknode API services within minutes."
+---
+
+# Pinknode
+
+
+![5](img/5.png)
+
+
+## Overview
+
+Pinknode is a Polkadot-focused Infrastructure-as-a-Service platform offering RPC service through globally distributed node architecture coupled with Kubernetes clustering and geo-location routing for increased reliability and reduced latency. You will be able to integrate into the Astar ecosystem with Pinknode API services within minutes.
+
+## Public Endpoints
+
+Pinknode provides RPC and WSS endpoints for the Astar ecosystem. You will be able to connect via:
+
+- Public endpoints (May be subject to increased rate limits during high network load)
+- Custom API endpoints on:
+
+
+| Network | RPC Endpoint | WSS Endpoint|
+|----|----|---|
+| Astar | | wss://public-rpc.pinknode.io/astar |
+| Shiden | | wss://public-rpc.pinknode.io/shiden |
+| Shibuya | | wss://public-rpc.pinknode.io/shibuya |
+
+## Instructions
+
+### Step 1
+
+Log in or sign up via [Pinknode Portal](https://pinknode.io/login) with the Astar partnership promo code for a free upgrade to the partnership tier plan.
+
+Astar promo code: PINKASTAR
+
+- 500,000 --> 1,000,000 requests limit per day
+- 15 --> 50 requests per second
+- 1 --> 3 projects limit
+
+For new sign-ups, enter PINKASTAR on the sign-up page to activate the partnership tier plan.
+
+![6](img/6.png)
+
+For an existing account, click on the upgrade plan on the dashboard and enter PINKASTAR to activate the partnership tier plan.
+
+![7](img/7.png)
+
+### Step 2
+
+Create a new project.
+
+![8](img/8.png)
+
+### Step 3
+
+Select your network with the dropdown.
+
+![9](img/9.png)
diff --git a/docs/build/build-on-layer-1/integrations/oracles/_category_.json b/docs/build/build-on-layer-1/integrations/oracles/_category_.json
new file mode 100644
index 0000000..bbf905e
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/oracles/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Oracles",
+ "position": 7
+}
diff --git a/docs/build/build-on-layer-1/integrations/oracles/band.md b/docs/build/build-on-layer-1/integrations/oracles/band.md
new file mode 100644
index 0000000..7332b5a
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/oracles/band.md
@@ -0,0 +1,147 @@
+---
+sidebar_position: 2
+---
+
+# Band Protocol
+
+[Band Protocol]: https://bandprotocol.com/
+
+## Overview
+
+[Band Protocol] is a cross-chain data oracle that aggregates and connects real-world data and APIs, to smart contracts.
+
+### Why do Blockchains Require Oracles?
+
+Blockchains are great at providing immutable storage and deterministically verifiable computations. However, they cannot access trusted real-world information available outside their networks. Band Protocol enhances smart contract functionality by granting access to reliable data, without relying on centralized authorities, or points of failure.
+
+## Using Band Protocol
+
+Decentralized application developers have two ways to fetch prices from Band's oracle infrastructure. The first option is to use Band's smart contracts on Astar. By doing so, developers can access on-chain data updated either at regular intervals, or when price slippage exceeds a threshold amount (different for each token). Currently, **the interval is set at 10 minutes, with a price slippage threshold of 0.5%.** The second option is to use the JavaScript helper library, which relies on an API endpoint to fetch data off-chain using functions similar to those of the smart contracts. This can be useful if your dApp front-end needs direct access to data.
+
+The Aggregator Contract address can be found in the following table:
+
+### Astar
+
+Smart Contract (Aggregator): 0xDA7a001b254CD22e46d3eAB04d937489c93174C3
+
+### Shiden
+
+Smart Contract (Aggregator): 0xDA7a001b254CD22e46d3eAB04d937489c93174C3
+
+## Supported Tokens
+
+Price queries with any denomination are available as long as the base and quote symbols are supported (base/quote). For example:
+
+- `BTC/USD`
+- `BTC/ETH`
+- `ETH/EUR`
+
+We provide feeds for the following assets:
+
+- ASTR
+- ATOM
+- AVAX
+- BNB
+- BUSD
+- DAI
+- DOT
+- ETH
+- FTM
+- MATIC
+- SOL
+- USDC
+- USDT
+- WBTC
+
+## Obtain Data Using Smart Contracts
+
+To query prices from Band's oracle through smart contracts, the contract requiring the price values should reference Band's `StdReference` contract. This contract exposes the `getReferenceData` and `getReferenceDataBulk` functions.
+
+`getReferenceData` takes two strings as inputs, the base and quote symbol, respectively. It then queries the `StdReference` contract for the latest rates for those two tokens, and returns a `ReferenceData` struct, shown below:
+
+```solidity
+struct ReferenceData {
+ uint256 rate; // base/quote exchange rate, multiplied by 1e18.
+ uint256 lastUpdatedBase; // UNIX epoch of the last time when base price gets updated.
+ uint256 lastUpdatedQuote; // UNIX epoch of the last time when quote price gets updated.
+}
+```
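+
+Because `rate` is scaled by 1e18, a consumer has to divide it back down before displaying it. A small off-chain sketch of that conversion (plain JavaScript; the sample rate value below is made up for illustration):
+
+```js
+// ReferenceData.rate is the base/quote exchange rate multiplied by 1e18.
+// Render it as a decimal string with the requested number of display decimals.
+function rateToPrice(rate, displayDecimals = 2) {
+  const ONE = 10n ** 18n;
+  const scaled = (BigInt(rate) * 10n ** BigInt(displayDecimals)) / ONE;
+  const s = scaled.toString().padStart(displayDecimals + 1, "0");
+  return `${s.slice(0, -displayDecimals)}.${s.slice(-displayDecimals)}`;
+}
+
+// A hypothetical BTC/USD rate of 16,789.25 as returned on-chain:
+console.log(rateToPrice("16789250000000000000000")); // 16789.25
+```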
+
+`getReferenceDataBulk` instead takes two lists, one of the base tokens, and one of the quotes. It then queries the price for each base/quote pair at each index, and returns an array of `ReferenceData` structs.
+
+For example, if we call `getReferenceDataBulk` with `['BTC','BTC','ETH']` and `['USD','ETH','BNB']`, the ReferenceData array returned will contain information regarding the pairs:
+
+- `BTC/USD`
+- `BTC/ETH`
+- `ETH/BNB`
+
+## Example Usage
+
+The contract code below demonstrates a simple usage of the new `StdReference` contract and the `getReferenceData` function.
+
+```solidity
+pragma solidity 0.6.11;
+pragma experimental ABIEncoderV2;
+
+interface IStdReference {
+ /// A structure returned whenever someone requests for standard reference data.
+ struct ReferenceData {
+ uint256 rate; // base/quote exchange rate, multiplied by 1e18.
+ uint256 lastUpdatedBase; // UNIX epoch of the last time when base price gets updated.
+ uint256 lastUpdatedQuote; // UNIX epoch of the last time when quote price gets updated.
+ }
+
+ /// Returns the price data for the given base/quote pair. Revert if not available.
+ function getReferenceData(string memory _base, string memory _quote)
+ external
+ view
+ returns (ReferenceData memory);
+
+ /// Similar to getReferenceData, but with multiple base/quote pairs at once.
+ function getReferenceDataBulk(string[] memory _bases, string[] memory _quotes)
+ external
+ view
+ returns (ReferenceData[] memory);
+}
+
+contract DemoOracle {
+ IStdReference ref;
+
+ uint256 public price;
+
+ constructor(IStdReference _ref) public {
+ ref = _ref;
+ }
+
+ function getPrice() external view returns (uint256){
+ IStdReference.ReferenceData memory data = ref.getReferenceData("BTC","USD");
+ return data.rate;
+ }
+
+ function getMultiPrices() external view returns (uint256[] memory){
+ string[] memory baseSymbols = new string[](2);
+ baseSymbols[0] = "WBTC";
+ baseSymbols[1] = "DOT";
+
+ string[] memory quoteSymbols = new string[](2);
+ quoteSymbols[0] = "USD";
+ quoteSymbols[1] = "USDT";
+ IStdReference.ReferenceData[] memory data = ref.getReferenceDataBulk(baseSymbols,quoteSymbols);
+
+ uint256[] memory prices = new uint256[](2);
+ prices[0] = data[0].rate;
+ prices[1] = data[1].rate;
+
+ return prices;
+ }
+
+ function savePrice(string memory base, string memory quote) external {
+ IStdReference.ReferenceData memory data = ref.getReferenceData(base,quote);
+ price = data.rate;
+ }
+}
+```
+
+## Full Documentation
+
+You can find the Band Protocol official documentation [here](https://docs.bandchain.org/).
diff --git a/docs/build/build-on-layer-1/integrations/oracles/dia-wasm.md b/docs/build/build-on-layer-1/integrations/oracles/dia-wasm.md
new file mode 100644
index 0000000..b14f8df
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/oracles/dia-wasm.md
@@ -0,0 +1,76 @@
+# DIA Wasm Oracle
+## Introduction
+DIA has a dedicated Wasm-based oracle that can be deployed on any chain supporting a Substrate Wasm environment.
+
+## Prerequisites
+Make sure you are using ink! v3.0.1.
+
+## Getting started
+To access values from DIA Wasm oracles, copy the `diadata` directory into your contract project so that you can use the DIA structs that contain the oracle data.
+
+### Contract Integration
+In your contract, create a storage field of type `DiadataRef`; this is used to access values from the oracle.
+
+```rust
+ #[ink(storage)]
+ pub struct OracleExample {
+ diadata: DiadataRef,
+ ....
+ ....
+ }
+```
+
+This reference can be used to call the oracle contract's public functions and read its data.
+
+### Link the contract with an Oracle
+To give your contract access to the oracle, pass in the DIA oracle address, either through the constructor or via a separate write function that sets the oracle address at a later stage.
+
+Here is an example using a constructor:
+
+```rust
+ #[ink(constructor)]
+ pub fn new(
+ oracle_address: AccountId,
+ ) -> Self {
+ let diadata: DiadataRef = ink_env::call::FromAccountId::from_account_id(oracle_address);
+ Self {
+ diadata
+ }
+ }
+```
+
+Here, `oracle_address` refers to the DIA oracle address of a deployed oracle contract.
+
+### Access the value
+Next, to access an oracle value, simply call the `get()` function:
+
+```rust
+ pub fn get(&self ) -> diadata::ValueTime {
+ return self.diadata.get(String::from("ETH"));
+ }
+```
+
+This returns the ETH price and the timestamp of its last update, as provided by the oracle.
+
+### Config changes
+
+Make sure you add `diadata/std` to the `std` feature list in your `Cargo.toml`:
+
+```toml
+std = [
+ "ink_metadata/std",
+ "ink_env/std",
+ "ink_storage/std",
+ "ink_primitives/std",
+ "scale/std",
+ "scale-info/std",
+ "diadata/std",
+]
+```
+
+## Addresses
+**Astar Wasm Smart Contract**: [XmVR4FbKWLYQgyHVxkFiBYScVo662WgSCoS84uZZPWNrtRT](https://shiden.subscan.io/account/XmVR4FbKWLYQgyHVxkFiBYScVo662WgSCoS84uZZPWNrtRT)
+**Shibuya Wasm Smart Contract**: [X5NLwAUYX7FwVk25a8JwaXtuVJQsW87GQcKxYoF3aLyu8Pz](https://shibuya.subscan.io/account/X5NLwAUYX7FwVk25a8JwaXtuVJQsW87GQcKxYoF3aLyu8Pz)
+
+## Learn more
+See the entire oracle code, along with instructions on how to run an oracle service yourself, in [our GitHub repo](https://github.com/diadata-org/dia-wasm-oracle).
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/integrations/oracles/dia.md b/docs/build/build-on-layer-1/integrations/oracles/dia.md
new file mode 100644
index 0000000..ed87100
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/oracles/dia.md
@@ -0,0 +1,73 @@
+---
+sidebar_position: 1
+---
+
+# DIA
+
+[DIA]: https://www.diadata.org/
+
+## Overview
+
+DIA is a cross-chain, end-to-end, open-source data and oracle platform for Web3. DIA is an ecosystem for open financial data in a financial smart contract ecosystem. The goal of DIA is to bring together data analysts, data providers, and data users. In general, DIA provides a reliable and verifiable bridge between off-chain data from various sources and on-chain smart contracts that can be used to build a variety of financial dApps. DIA is set up as a hybrid system, with off-chain components for storing and processing large amounts of data, and on-chain components providing data sources for financial smart contracts.
+
+## DIA's API
+
+Show your users the most transparent data on the market with DIA's API. Whether you're building a financial service, a portfolio management tool, a new media offering, or more, DIA has the most advanced and updated data on the market for your product.
+
+### API Access
+
+The DIA base URL is `https://api.diadata.org/v1`. All API paths are sub-paths of this base URL. You can find specific documentation for the endpoints of our API on the [API documentation site](https://docs.diadata.org/documentation/api-1/api-endpoints).
+
+## DIA's Oracle
+
+Here, we provide an overview of the deployed oracle contracts on each supported chain.
+
+DIA Development Oracle contracts are smart contracts that provide a selected range of asset prices for live testing on our Mainnet and Testnet. The contracts are upgraded and exchanged on a rolling basis and are not maintained indefinitely.
+
+DIA Development Oracle contracts are not intended to be integrated into a dApp. DIA deploys dedicated contracts for dApps. Please request a dedicated oracle by contacting the team on their [Discord](https://discord.com/invite/zFmXtPFgQj) or the [DIA DAO Forum](https://dao.diadata.org/).
+
+## Deployed Contracts
+
+[Key/Value Oracle]: https://docs.diadata.org/documentation/oracle-documentation/access-the-oracle#dia-key-value-oracle-contract-v2
+
+### Astar
+
+**Smart Contract**: [0xd79357ebb0cd724e391f2b49a8De0E31688fEc75](https://blockscout.com/astar/address/0xd79357ebb0cd724e391f2b49a8De0E31688fEc75/contracts)
+
+**Oracle Type**: [Key/Value Oracle]
+
+### Shiden
+
+**Smart Contract**: [0xCe784F99f87dBa11E0906e2fE954b08a8cc9815d](https://blockscout.com/shiden/address/0xCe784F99f87dBa11E0906e2fE954b08a8cc9815d/contracts)
+
+**Oracle Type**: [Key/Value Oracle]
+
+### Shibuya
+
+**Smart Contract**: 0x1232AcD632Dd75f874E357c77295Da3f5Cd7733E
+
+**Oracle Type**: [Key/Value Oracle]
+
+## Price feeds
+
+The oracle contains information about crypto assets. You can access a price quotation (see [sources](https://docs.diadata.org/documentation/methodology/digital-assets/cryptocurrency-trading-data) and [methodology](https://docs.diadata.org/documentation/methodology/digital-assets/exchangeprices)) and the current circulating supply as well as the timestamp of the last update.
+
+1. Access the corresponding oracle smart contract (see table above).
+2. Call `getCoinInfo(coin_name)` with `coin_name` being the full coin name such as `Bitcoin`. You can use the "Read" section on Etherscan to execute this call.
+3. The response of the call contains four values:
+ 1. The current asset price in USD with a fix-comma notation of five decimals.
+ 2. The current circulating supply.
+ 3. The [UNIX timestamp](https://www.unixtimestamp.com/) of the last oracle update.
+ 4. The short name of the asset, e.g., `BTC` for Bitcoin.
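+
+Because the price is returned as an integer with five implied decimal places, it needs to be scaled before display. A quick off-chain sketch (plain JavaScript; the sample value is illustrative, not a real quote):
+
+```js
+// getCoinInfo returns the USD price as an integer with five implied decimals.
+function diaPriceToUsd(raw) {
+  const value = BigInt(raw);
+  const whole = value / 100000n;
+  const frac = (value % 100000n).toString().padStart(5, "0");
+  return `${whole}.${frac}`;
+}
+
+// A hypothetical Bitcoin quote of 16,950.12345 USD:
+console.log(diaPriceToUsd("1695012345")); // 16950.12345
+```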
+
+The development oracle supports price quotations for, at the very least, the following assets:
+
+- BTC
+- ETH
+- DIA
+- USDC
+- FTM
+- SDN
+- KSM
+- MOVR
+- ASTR
diff --git a/docs/build/build-on-layer-1/integrations/vrf/_category_.json b/docs/build/build-on-layer-1/integrations/vrf/_category_.json
new file mode 100644
index 0000000..a6934ff
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/vrf/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "VRF",
+ "position": 8
+}
diff --git a/docs/build/build-on-layer-1/integrations/vrf/band.md b/docs/build/build-on-layer-1/integrations/vrf/band.md
new file mode 100644
index 0000000..49521fd
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/vrf/band.md
@@ -0,0 +1,78 @@
+---
+sidebar_position: 1
+---
+
+# Band Protocol VRF
+
+[Band VRF]: https://bandprotocol.com/vrf
+
+## Overview
+
+[Band VRF] provides a verifiable pseudorandomness solution based on the BandChain blockchain. Our protocol uses a Verifiable Random Function (VRF) to cryptographically secure and verify that output results have not and cannot be tampered with.
+
+Similar to the BandChain oracle network, Band VRF serves requests from dApps. Validators on BandChain and the VRF oracle script are responsible for fulfilling random number requests with verifiably random results. Final validated results are stored on BandChain as proof of the random number generation process, before the results are returned to the requesting dApps.
+
+## Integrate with Band VRF
+
+This guide serves as a quick reference about how to request random data from the Band VRF. For a more detailed reference with examples, refer to the [VRF Integration](https://docs.bandchain.org/vrf/vrf-integration.html) section.
+
+### Step 1: Prepare a VRF Consumer Contract
+
+1. Create a VRF consumer contract that can call the `requestRandomData` function on the `VRFProvider` contract.
+2. Implement a callback function (e.g. `consume`) on the VRF consumer contract, which allows the `VRFProvider` contract to call back and execute some logic against the returned result. It is critical that this callback function can only be called by the `VRFProvider` contract.
+
+### Step 2: Choose a resolving method
+There are currently 3 methods for relaying and resolving the VRF request:
+- **Band's VRF worker solution** - We provide both standard and customized solutions for all clients. Please [contact us](mailto:bd@bandprotocol.com) for more details.
+- **Manually resolve on CosmoScan** - This is an ideal, low-cost solution for one-off Band VRF requests. Refer to this [guide](https://docs.bandchain.org/vrf/vrf-integration.html#manually-request-and-resolve) to learn how to resolve requests manually.
+- **Implement your own resolver bot** - Anyone can implement their own version of a resolver bot. An open-source version of Band's VRF worker bot will be available soon.
+
+### Step 3: Request a Random Value
+
+You are now ready to request a random value from the Band VRF.
+
+A summary of the Band VRF process is outlined below:
+1. Call the request function on your VRF consumer contract (the one implementing `requestRandomData` from Step 1), providing a `seed` and an optional `msg.value`.
+2. Depending on the resolving method chosen in Step 2, the request is sent to the BandChain.
+3. The VRF oracle script on the BandChain forwards the request to a randomly chosen data source, and then retrieves the returned result and the corresponding proof of authenticity.
+4. Depending on the resolving method chosen in Step 2, the proof is relayed to the `Bridge` contract for verification on the client chain via the `VRFProvider` contract.
+5. If the verification succeeds, the result (random value) is returned to the VRF consumer contract via the callback function mentioned in Step 1.
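+
+Note that the `seed` passed in step 1 is expected to be unique per consumer; re-using a seed will generally cause the provider to reject the request (check the `VRFProvider` contract for the exact rule). One simple off-chain convention for generating seeds, sketched in plain Node.js (this scheme is our suggestion, not part of the Band specification):
+
+```js
+const crypto = require("crypto");
+
+// Derive a request seed from a caller identifier and a monotonically
+// increasing nonce; any scheme works as long as seeds never repeat.
+function makeSeed(callerId, nonce) {
+  return crypto
+    .createHash("sha256")
+    .update(`${callerId}:${nonce}`)
+    .digest("hex");
+}
+
+const seed = makeSeed("my-dapp", 1);
+console.log(seed.length); // 64 hex characters
+```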
+
+## Contract Addresses
+
+For `VRFProvider` and other contract addresses on Astar, please refer to the [Supported Blockchains](https://docs.bandchain.org/vrf/supported-blockchains.html) section.
+
+## Example Usage
+
+The contract below is an example of a simple VRF consumer contract written in Solidity.
+
+```solidity
+contract MockVRFConsumer {
+ IVRFProvider public provider;
+ string public latestSeed;
+ uint64 public latestTime;
+ bytes32 public latestResult;
+
+ constructor(IVRFProvider _provider) {
+ provider = _provider;
+ }
+
+ function requestRandomDataFromProvider(string calldata seed) external payable {
+ provider.requestRandomData{value: msg.value}(seed);
+ }
+
+ function consume(string calldata seed, uint64 time, bytes32 result) external override {
+ require(msg.sender == address(provider), "Caller is not the provider");
+
+ latestSeed = seed;
+ latestTime = time;
+ latestResult = result;
+ }
+}
+```
+
+More complex and detailed examples can be found in the [Example Use Cases](https://docs.bandchain.org/vrf/example.html) section.
+
+## Full Documentation
+
+[Band VRF Documentation](https://docs.bandchain.org/vrf/introduction.html)
diff --git a/docs/build/build-on-layer-1/integrations/wallets/_category_.json b/docs/build/build-on-layer-1/integrations/wallets/_category_.json
new file mode 100644
index 0000000..9f0647f
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/wallets/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Wallets",
+ "position": 3
+}
diff --git a/docs/build/build-on-layer-1/integrations/wallets/astar-safe.md b/docs/build/build-on-layer-1/integrations/wallets/astar-safe.md
new file mode 100644
index 0000000..f9de7e4
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/wallets/astar-safe.md
@@ -0,0 +1,157 @@
+---
+sidebar_position: 1
+---
+
+# Astar Safe (Gnosis Safe)
+
+Build on the Gnosis Safe infrastructure: take advantage of the most modular, flexible, and secure wallet and identity solution in Ethereum. Now available on Astar!
+
+## Create a Safe
+
+To get started, navigate to [Astar Safe].
+
+:::info
+This guide focuses on creating a MultiSig Safe on Astar. More networks will be added to the list soon.
+:::
+
+### Connect MetaMask
+
+To create a Safe, you first need to connect your wallet:
+
+1. Click **Connect Wallet**
+2. Select a wallet to connect. For this example, we use MetaMask.
+
+![1](img/1.png)
+![2](img/2.png)
+
+If you're not already signed into MetaMask, you will be prompted to sign in or download the MetaMask plugin. You will then be guided through adding and connecting your accounts, and adding and switching to the Astar Network:
+
+1. Select an account and connect to the Safe. You'll want to select at least 2 of the 3 owner accounts, then click **Next**. You may need to add additional accounts to your **MetaMask Address Book** first. For this example, two of the accounts have been selected.
+2. Connect to the selected accounts by clicking Connect.
+3. If you are not connected to Astar Network, and you don't have the network added to your MetaMask, add it as a custom network by clicking Approve.
+4. Switch the network to Astar Network by clicking Switch Network.
+
+![3](img/3.png)
+
+Your wallet is now connected and set to the correct network. Let's continue by creating a Safe on Astar. Press **Continue**.
+
+![4](img/4.png)
+
+## Create New Safe
+
+To create a new Safe on Astar, click **Continue**. You will be taken to a wizard that will walk you through the process of creating your new Safe. By following these steps and creating your Safe, you are consenting to the terms of use and the privacy policy.
+
+Let's begin by giving your Safe a name:
+
+1. Enter the name of your new Safe, for example `my-astar-safe`.
+2. Click **Continue**
+
+![5](img/5.png)
+
+Next up is the owners and confirmations section of the wizard. In this section, you will add the owners of the Safe and specify the signing threshold. The threshold determines how many of the owners are required to confirm a transaction before it is executed.
+
+There are a few options that can be set when creating a Safe, such as the number of owners and the signing threshold. Note that creating a Safe with a single owner is not advised, as it would be a single point of failure.
+
+In this guide, you will create a MultiSig setup that has three owners and requires a threshold of 2, so at least 2 of the 3 owners' keys are required to execute transactions through the Safe.
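+
+The threshold mechanics described above can be illustrated with a toy Solidity contract. This is a simplified sketch for intuition only, not the actual Gnosis Safe implementation:
+
+```solidity
+pragma solidity ^0.8.0;
+
+// Toy illustration of MultiSig confirmation counting. NOT the Gnosis Safe code.
+contract ToyMultiSig {
+    mapping(address => bool) public isOwner;
+    uint256 public threshold; // e.g. 2 of 3
+
+    // txHash => number of confirmations, and which owners have confirmed it.
+    mapping(bytes32 => uint256) public confirmations;
+    mapping(bytes32 => mapping(address => bool)) public confirmed;
+
+    constructor(address[] memory owners, uint256 _threshold) {
+        require(_threshold > 0 && _threshold <= owners.length, "bad threshold");
+        for (uint256 i = 0; i < owners.length; i++) {
+            isOwner[owners[i]] = true;
+        }
+        threshold = _threshold;
+    }
+
+    function confirm(bytes32 txHash) external {
+        require(isOwner[msg.sender], "not an owner");
+        require(!confirmed[txHash][msg.sender], "already confirmed");
+        confirmed[txHash][msg.sender] = true;
+        confirmations[txHash] += 1;
+    }
+
+    // A transaction becomes executable only once the threshold is met.
+    function canExecute(bytes32 txHash) public view returns (bool) {
+        return confirmations[txHash] >= threshold;
+    }
+}
+```
+
+With three owners and a threshold of 2, any two `confirm` calls from distinct owners make `canExecute` return true for that transaction.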
+
+Your account information, as Owner 1, will be completed automatically; however, this can be modified if you would like to use a different account. In this example, the Owner 1 account has been prefilled. In addition to Owner 1, you can also add `Owner 2` and `Owner 3` as owners:
+
+1. Click **Add another owner**
+2. Enter **Owner 2** as the second owner, along with the address: `0x612c7623732d756FBb7a2eAb904Cd8989116C41F`
+3. Enter **Owner 3** as the third owner, along with the address: `0xA96a67A5e969755B918F1ac1c20D496141a31b3F`
+4. Set the confirmation threshold to **2 of 3** owners
+5. Click **Review** to go to the last step in the wizard
+
+![6](img/6.png)
+
+Finally, review and confirm the Safe and owner details. Then:
+
+1. Click **Create** to create your new Safe. Creating the Safe will cost less than 0.001 ASTR on Astar Network. MetaMask will pop up and prompt you to confirm the transaction.
+2. Click **Confirm** to send the transaction and create the Safe
+![7](img/7.png)
+
+It will take a moment to process the transaction and create the Safe. Once it's done, you should see a message saying "**Your Safe was created successfully**". From there, click **Get Started** to load your Safe and start interacting with it.
+
+![8](img/8.png)
+![9](img/9.png)
+
+## Configure Safe
+
+You can manage your Safe at any time, and change the parameters. To do so, click on the **Settings** option on the left-hand side menu.
+
+![10](img/10.png)
+
+In there you should see the following options:
+
+- **Safe Details** — allows you to change the Safe name. This is a local action that requires no on-chain interaction.
+- **Owners** — allows you to initiate an on-chain proposal to add/remove owners to the Safe.
+- **Policies** — allows you to initiate an on-chain proposal to change the MultiSig threshold required to execute transactions.
+- **Advanced** — allows you to check other parameters from the Safe, such as the nonce, modules, and transaction guard.
+
+## Send and Receive Native Assets
+
+### Receive native Assets
+
+You can now start interacting with your Safe, and can send funds to it from any account with **ASTR** tokens. In this example, we will use the Owner 1 account. Hover over ASTR in the list of assets to reveal the **Send** and **Receive** buttons. Then click **Receive**.
+
+![11](img/11.png)
+![12](img/12.png)
+
+Next, open up MetaMask, and send some ASTR tokens to the MultiSig wallet. Once the transaction is complete, the balance of ASTR tokens will be updated within the Safe.
+
+### Send Native Assets
+
+Now that there are some funds in the Safe, they can be sent to another account. In this example, we will send **1 ASTR** token to the `Owner 2` address. Hover over **ASTR** in the list of assets, and this time click on **Send**. Fill in all the information and click **Review**. Double-check all the information and click on **Submit**.
+
+:::caution
+MetaMask will pop up, and you may notice that instead of confirming a transaction, it is requesting that you sign a message. Click **Sign** to sign the message.
+:::
+
+Now, return to the Safe. Under the **Transactions** tab, you should see that a transaction proposal has been initiated to send 1 ASTR token to the Owner 2 address. However, you should also see that only 1 out of 2 confirmations has been received, and that one more owner is required to confirm the transaction before it is executed.
+
+### Confirm MultiSig Safe Transaction
+
+The process of confirming (or rejecting) a transaction proposal is the same for all MultiSig Safe use cases. One of the owners initiates the proposal to execute an action, and the other owners can approve or reject it. Once the signature threshold is reached, any owner can execute the transaction proposal if approved, or discard it if rejected.
+
+In this example, if two of the three owners were to reject the proposal, the assets would remain in the Safe. In this case, however, you will confirm the transaction from either the `Owner 2` or `Owner 3` account.
+
+Switch accounts in MetaMask to the `Owner 2` (or `Owner 3`) account, then go back to the Safe connected as Owner 2. The **Confirm** button should now be enabled. As Owner 2, click **Confirm** to meet the threshold and send the transaction.
+
+![15](img/15.png)
+
+1. Check the **Execute transaction** box to execute the transaction immediately after confirmation. You can uncheck it to execute the transaction manually at a later time.
+2. Click **Submit**. MetaMask will pop up and ask you to confirm the transaction. If everything looks good, click **Confirm**.
+
+![16](img/16.png)
+
+The transaction will be removed from the **QUEUE** tab, and a record of it can now be found under the **HISTORY** tab. In addition, `Owner 2`'s balance has now increased by 1 ASTR token, and the Safe's ASTR balance has decreased.
+
+![17](img/17.png)
+
+## Send and Receive Other Assets
+
+### Receive Other Assets
+
+Next, we will send and receive some other assets from the Safe. In this example, we will send some DOT to the Astar Safe. Open up MetaMask:
+
+1. Switch to the **Assets** tab and select **DOT** from the list.
+2. Click **Send**.
+3. Paste in the Safe's address.
+4. Enter the amount of DOT to send, then click **Next**.
+5. Review the transaction details, then click **Confirm** to send the transaction.
+
+![19](img/19.png)
+
+If you navigate back to the Safe, in the list of **Assets** you should now see **DOT**, and a balance of 1 DOT. It could take a few minutes for the **DOT** to appear, but it will do so on its own.
+
+### Send Other Assets
+
+Now that you have loaded your Safe with DOT, you can send some from the Safe to another account. You can use the same workflow as [sending and confirming native assets](#send-and-receive-native-assets).
+
+## Smart contract interaction
+
+To directly interact with smart contracts from your Astar Safe account, please follow this guide:
+
+[Gnosis Safe: Contract Interactions](https://help.gnosis-safe.io/en/articles/3738081-contract-interactions)
+
+[Astar Safe]: https://safe.astar.network/
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/1.png b/docs/build/build-on-layer-1/integrations/wallets/img/1.png
new file mode 100644
index 0000000..93844bf
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/1.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/10.png b/docs/build/build-on-layer-1/integrations/wallets/img/10.png
new file mode 100644
index 0000000..ce63693
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/10.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/11.png b/docs/build/build-on-layer-1/integrations/wallets/img/11.png
new file mode 100644
index 0000000..46747f8
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/11.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/12.png b/docs/build/build-on-layer-1/integrations/wallets/img/12.png
new file mode 100644
index 0000000..02674ec
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/12.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/13.png b/docs/build/build-on-layer-1/integrations/wallets/img/13.png
new file mode 100644
index 0000000..0ee07cc
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/13.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/14.png b/docs/build/build-on-layer-1/integrations/wallets/img/14.png
new file mode 100644
index 0000000..8aba760
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/14.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/15.png b/docs/build/build-on-layer-1/integrations/wallets/img/15.png
new file mode 100644
index 0000000..7f2fbde
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/15.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/16.png b/docs/build/build-on-layer-1/integrations/wallets/img/16.png
new file mode 100644
index 0000000..343d8b5
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/16.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/17.png b/docs/build/build-on-layer-1/integrations/wallets/img/17.png
new file mode 100644
index 0000000..914c7f9
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/17.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/18.png b/docs/build/build-on-layer-1/integrations/wallets/img/18.png
new file mode 100644
index 0000000..e64eb9a
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/18.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/19.png b/docs/build/build-on-layer-1/integrations/wallets/img/19.png
new file mode 100644
index 0000000..e64eb9a
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/19.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/2.png b/docs/build/build-on-layer-1/integrations/wallets/img/2.png
new file mode 100644
index 0000000..2894260
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/2.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/20.png b/docs/build/build-on-layer-1/integrations/wallets/img/20.png
new file mode 100644
index 0000000..eef9a64
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/20.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/21.png b/docs/build/build-on-layer-1/integrations/wallets/img/21.png
new file mode 100644
index 0000000..c9cb92b
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/21.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/3.png b/docs/build/build-on-layer-1/integrations/wallets/img/3.png
new file mode 100644
index 0000000..8e63811
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/3.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/4.png b/docs/build/build-on-layer-1/integrations/wallets/img/4.png
new file mode 100644
index 0000000..976baa7
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/4.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/5.png b/docs/build/build-on-layer-1/integrations/wallets/img/5.png
new file mode 100644
index 0000000..5f329f2
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/5.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/6.png b/docs/build/build-on-layer-1/integrations/wallets/img/6.png
new file mode 100644
index 0000000..eff6e19
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/6.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/7.png b/docs/build/build-on-layer-1/integrations/wallets/img/7.png
new file mode 100644
index 0000000..9193117
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/7.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/8.png b/docs/build/build-on-layer-1/integrations/wallets/img/8.png
new file mode 100644
index 0000000..ffc4374
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/8.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/img/9.png b/docs/build/build-on-layer-1/integrations/wallets/img/9.png
new file mode 100644
index 0000000..9805fb5
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/img/9.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/_category_.json b/docs/build/build-on-layer-1/integrations/wallets/ledger/_category_.json
new file mode 100644
index 0000000..3c8146c
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/wallets/ledger/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Ledger",
+ "position": 9
+}
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/1-AstarEVM.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/1-AstarEVM.jpg
new file mode 100644
index 0000000..a246313
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/1-AstarEVM.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/11-AcceptAndSend.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/11-AcceptAndSend.jpg
new file mode 100644
index 0000000..2527fbb
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/11-AcceptAndSend.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/12-Reject.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/12-Reject.jpg
new file mode 100644
index 0000000..3219ffb
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/12-Reject.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/2-ShidenEVM.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/2-ShidenEVM.jpg
new file mode 100644
index 0000000..0bd1fa4
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/2-ShidenEVM.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/3-ApplicationIsReady.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/3-ApplicationIsReady.jpg
new file mode 100644
index 0000000..6df2ba2
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/3-ApplicationIsReady.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/4-ReviewTransaction.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/4-ReviewTransaction.jpg
new file mode 100644
index 0000000..d287a2f
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/4-ReviewTransaction.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/5-AmountASTR1.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/5-AmountASTR1.jpg
new file mode 100644
index 0000000..7be1650
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/5-AmountASTR1.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/6-Address.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/6-Address.jpg
new file mode 100644
index 0000000..a397a03
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/6-Address.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/7-Network_Astar.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/7-Network_Astar.jpg
new file mode 100644
index 0000000..8788d91
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/7-Network_Astar.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/9-MaxFees_ASTR.jpg b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/9-MaxFees_ASTR.jpg
new file mode 100644
index 0000000..04d796b
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/9-MaxFees_ASTR.jpg differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/acc_balance.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/acc_balance.png
new file mode 100644
index 0000000..d1213cf
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/acc_balance.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/confirm_tx.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/confirm_tx.png
new file mode 100644
index 0000000..7bbb95c
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/confirm_tx.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/connect_hw_wallet.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/connect_hw_wallet.png
new file mode 100644
index 0000000..d33f1e8
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/connect_hw_wallet.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/pair_hid.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/pair_hid.png
new file mode 100644
index 0000000..ffcaf99
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/pair_hid.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/select_acc.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/select_acc.png
new file mode 100644
index 0000000..24beffe
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/select_acc.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/select_ledger.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/select_ledger.png
new file mode 100644
index 0000000..118e44b
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/evm/select_ledger.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/01-open_in_window.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/01-open_in_window.png
new file mode 100644
index 0000000..a8a1d6e
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/01-open_in_window.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/02-connect_ledger.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/02-connect_ledger.png
new file mode 100644
index 0000000..ecfbe2e
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/02-connect_ledger.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/03-device_connect.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/03-device_connect.png
new file mode 100644
index 0000000..8e1e489
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/03-device_connect.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/04-select_network.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/04-select_network.png
new file mode 100644
index 0000000..af665a8
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/04-select_network.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/05-name_account.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/05-name_account.png
new file mode 100644
index 0000000..dd0b2aa
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/05-name_account.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/06-account_imported.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/06-account_imported.png
new file mode 100644
index 0000000..5fda30e
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/06-account_imported.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/07-connect_wallet.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/07-connect_wallet.png
new file mode 100644
index 0000000..62415b3
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/07-connect_wallet.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/08-pick_account.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/08-pick_account.png
new file mode 100644
index 0000000..11685e0
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/08-pick_account.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/09-switch_to_window.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/09-switch_to_window.png
new file mode 100644
index 0000000..bb29685
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/09-switch_to_window.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/10-initiate_transfer.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/10-initiate_transfer.png
new file mode 100644
index 0000000..23237ed
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/10-initiate_transfer.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/11-confirm_transfer.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/11-confirm_transfer.png
new file mode 100644
index 0000000..ede31e7
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/11-confirm_transfer.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/12-sign_on_ledger.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/12-sign_on_ledger.png
new file mode 100644
index 0000000..cf0960a
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/12-sign_on_ledger.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D00-installed.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D00-installed.png
new file mode 100644
index 0000000..7482c10
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D00-installed.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D01-ready.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D01-ready.png
new file mode 100644
index 0000000..dd28e9e
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D01-ready.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D02-review.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D02-review.png
new file mode 100644
index 0000000..22a5228
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D02-review.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D03-extrinsic_name.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D03-extrinsic_name.png
new file mode 100644
index 0000000..21f6b87
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D03-extrinsic_name.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D04-address.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D04-address.png
new file mode 100644
index 0000000..7635b09
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D04-address.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D05-amount.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D05-amount.png
new file mode 100644
index 0000000..2b44c25
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D05-amount.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D06-tip.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D06-tip.png
new file mode 100644
index 0000000..283b61a
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D06-tip.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D07-approve.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D07-approve.png
new file mode 100644
index 0000000..b06999e
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D07-approve.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D08-reject.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D08-reject.png
new file mode 100644
index 0000000..f3004ec
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/D08-reject.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger1.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger1.png
new file mode 100644
index 0000000..f34d3d7
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger1.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger2.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger2.png
new file mode 100644
index 0000000..3c6cd83
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger2.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger3.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger3.png
new file mode 100644
index 0000000..fe02ddc
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger3.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger4.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger4.png
new file mode 100644
index 0000000..a90ccc0
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger4.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger5.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger5.png
new file mode 100644
index 0000000..56a0d20
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger5.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger6.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger6.png
new file mode 100644
index 0000000..d2fb2ab
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger6.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger7.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger7.png
new file mode 100644
index 0000000..01197a5
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger7.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger8.png b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger8.png
new file mode 100644
index 0000000..b3210e4
Binary files /dev/null and b/docs/build/build-on-layer-1/integrations/wallets/ledger/img/native/ledger8.png differ
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/ledger-evm.md b/docs/build/build-on-layer-1/integrations/wallets/ledger/ledger-evm.md
new file mode 100644
index 0000000..c925a4b
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/wallets/ledger/ledger-evm.md
@@ -0,0 +1,180 @@
+---
+sidebar_position: 9
+title: Ledger Astar/Shiden EVM on MetaMask
+---
+
+## Using Astar and Shiden EVM Ledger apps with MetaMask
+
+## Intro
+
+**Astar EVM** and **Shiden EVM** apps are now available on Ledger hardware wallet devices. This means that MetaMask users can now sign transactions for EVM accounts on these networks using Ledger Nano S, Nano S Plus, or Nano X devices.
+
+This guide will show you how to set up Astar EVM and Shiden EVM on your Ledger hardware wallet and how to use it in combination with MetaMask.
+
+:::caution
+Do not transfer from EVM to Ledger Astar Native accounts. It is not supported.
+:::
+:::info
+Photos were taken using a Nano S Plus device, and the example shows interaction with the Astar EVM app, but the process is the same on Nano S and Nano X devices, as well as with the Shiden EVM app.
+:::
+
+## Requirements
+
+### Your Ledger device is ready for use
+
+- [Make sure you have set up your Ledger device](https://support.ledger.com/hc/en-us/articles/360000613793?docs=true)
+- Update your device to the latest firmware
+ - [Nano S](https://support.ledger.com/hc/en-us/articles/360002731113?docs=true)
+ - [Nano S Plus](https://support.ledger.com/hc/en-us/articles/4445777839901?docs=true)
+ - [Nano X](https://support.ledger.com/hc/en-us/articles/360013349800?docs=true)
+- [Download and install Ledger Live app for your OS](https://support.ledger.com/hc/en-us/articles/4404389606417-Download-and-install-Ledger-Live?docs=true)
+- [Download and install MetaMask for your browser](https://metamask.io/download/)
+
+### Astar/Shiden network setup in MetaMask
+
+If you already have this, feel free to skip this part.
+
+1. In the MetaMask menu, navigate to Settings → Networks, and click “Add a network”
+2. Enter the following details for Astar:
+ 1. Network name: `Astar Network Mainnet`
+ 2. New RPC URL:
+ `https://astar.public.blastapi.io/`
+ `https://astar-rpc.dwellir.com/`
+ `https://astar.api.onfinality.io/public`
+ 3. Chain ID: `592`
+ 4. Currency Symbol: `ASTR`
+ 5. Block Explorer URL(Optional): `https://astar.subscan.io/`
+3. Click the “Save” button
+4. Repeat steps 1-3 for the Shiden network with the following details:
+ 1. Network name: `Shiden Network Mainnet`
+ 2. New RPC URL:
+ `https://shiden.public.blastapi.io`
+ `https://shiden-rpc.dwellir.com`
+ `https://shiden.api.onfinality.io/public`
+ 3. Chain ID: `336`
+ 4. Currency Symbol: `SDN`
+ 5. Block Explorer URL(Optional): `https://shiden.subscan.io/`
+5. Close the Settings menu and from the dropdown select the network you wish to interact with
+
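+The manual steps above can also be performed programmatically: a dApp can ask MetaMask to add a network via the standard `wallet_addEthereumChain` RPC method (EIP-3085). Below is a minimal sketch using the Astar parameters from the list above; it only has an effect in a browser with MetaMask installed.
+
+```js
+// EIP-3085 parameters mirroring the manual Astar settings above.
+// Note: chainId must be hex-encoded (592 = 0x250; Shiden would be 336 = 0x150).
+const astarChainParams = {
+  chainId: '0x250',
+  chainName: 'Astar Network Mainnet',
+  rpcUrls: ['https://astar.public.blastapi.io/'],
+  nativeCurrency: { name: 'Astar', symbol: 'ASTR', decimals: 18 },
+  blockExplorerUrls: ['https://astar.subscan.io/'],
+};
+
+// Prompt MetaMask to add the network (throws outside a browser).
+async function addAstarNetwork() {
+  if (typeof window === 'undefined' || !window.ethereum) {
+    throw new Error('MetaMask not detected');
+  }
+  return window.ethereum.request({
+    method: 'wallet_addEthereumChain',
+    params: [astarChainParams],
+  });
+}
+```
+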
+### Install apps to your Ledger device
+
+1. Open Ledger Live app and navigate to “Manager”
+2. Connect your Ledger Device and unlock it
+ 1. If asked, confirm Ledger Manager on your device
+3. Search for “Astar EVM” or “Shiden EVM” in the app catalog
+4. Click install
+
+After this step, you should have one or both of these apps:
+
+
+
+
+
+
+
+
+
+
+## Connecting your Ledger device to MetaMask
+
+1. In the MetaMask menu, select “Connect Hardware Wallet”:
+
+
+
+
+
+2. On the next screen, select “Ledger” and click “Continue”:
+
+
+
+
+
+3. Pair and connect your device when prompted by the browser:
+
+
+
+
+
+4. Select an account you wish to connect and click “Unlock”:
+
+
+
+
+
+5. You should now see your account and balance:
+
+
+
+
+
+## Receiving tokens
+
+To receive tokens, copy the address of your connected account by clicking your account name in MetaMask header, and send some tokens to that address from your preferred source.
+
+## Sending tokens
+
+1. In MetaMask click “Send” button and enter the address you wish to send to
+2. Enter the amount to send and click “Next”
+3. Connect your Ledger device and unlock it. Due to MetaMask limitations, it will prompt you to open the Ethereum app. Ignore this, and open the Astar EVM app instead.
+
+
+
+
+
+4. When your Ledger device screen is showing “Application is ready”, click “Confirm” in MetaMask:
+
+
+
+
+
+5. Review the transaction on your Ledger device:
+
+
+
+
+
+ a) Check amount:
+
+
+
+
+
+ b) Check receiving address:
+
+
+
+
+
+ c) Check network:
+
+
+
+
+
+ d) Check Fees:
+
+
+
+
+
+ e) Either approve or reject the transaction:
+
+
+
+
+
+
+
+
+
+
+6. Check the transaction result in MetaMask “Activity” tab.
+
+## Interacting with smart contracts
+
+In order to interact with smart contracts, you need to enable blind signing in the Astar EVM app on your Ledger device:
+
+1. Open the app
+2. Navigate to “Settings” and confirm
+3. Confirm the “Blind signing” option, so it turns to “Enabled”
+4. Navigate to “Back” and confirm
diff --git a/docs/build/build-on-layer-1/integrations/wallets/ledger/ledger-native.md b/docs/build/build-on-layer-1/integrations/wallets/ledger/ledger-native.md
new file mode 100644
index 0000000..f49be8e
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/wallets/ledger/ledger-native.md
@@ -0,0 +1,106 @@
+---
+sidebar_position: 10
+title: Ledger for Astar Native Accounts
+---
+import ledger1 from "./img/native/ledger1.png"
+import ledger2 from "./img/native/ledger2.png"
+import ledger3 from "./img/native/ledger3.png"
+import ledger4 from "./img/native/ledger4.png"
+import ledger5 from "./img/native/ledger5.png"
+import ledger6 from "./img/native/ledger6.png"
+import ledger7 from "./img/native/ledger7.png"
+import ledger8 from "./img/native/ledger8.png"
+
+
+# Using a Ledger device with Astar Native Accounts
+
+:::danger
+At the time of this release, the following operations are **NOT SUPPORTED** on Ledger devices:
+- **XCM transfers**
+:::
+
+This tutorial walks through the process of setting up a Ledger device to participate in dApp staking using ASTR native tokens, initiating the first interaction between the device and the network, and also explains some limitations of using Ledger devices with the native dApp staking system.
+
+### Before staking, confirm that:
+1. Ledger Live is up to date, and the Astar app is installed.
+2. The Ledger device firmware is up to date.
+3. A Ledger account has been imported to Polkadot.js.
+4. A Chromium-based browser is available for all web-based operations, such as Google Chrome or Brave.
+5. The Ledger device is configured to use WebHID as the preferred hardware connection method.
+
+### Update Ledger Live and Device Firmware
+
+Ensure Ledger Live is up to date.
+
+
+
+
+
+- If prompted to update the device firmware, do so, as it will update the Astar app as well.
+
+
+
+
+
+- Once Ledger Live is up to date, ensure the latest Astar app (version 2.52.2 or higher) is installed.
+
+:::tip
+Ledger Nano S users should install the Astar XL version of the app, shown in the image below:
+:::
+
+
+
+
+
+### Import Ledger account to Polkadot.js
+
+- Open the Polkadot.js extension.
+- Click the + sign menu option.
+- Choose ‘Attach ledger account’. Make sure your Ledger device is unlocked.
+
+
+
+
+
+- Follow the process of importing a Ledger account, specifying a descriptive name. The default name and settings are shown in the image below:
+
+
+
+
+
+### Configure the Ledger device connection method
+
+- Once the Ledger account has been imported, visit the [Astar Network settings page on the Polkadot.js apps portal](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.astar.network#/settings) and ensure **Attach Ledger via WebHID** is the preferred connection method listed under *account options* > *manage hardware connections*, as shown in the image below:
+
+
+
+
+
+### Visit the Astar Portal
+
+- Open a browser and visit the [Astar Portal](https://portal.astar.network).
+- Connect the Polkadot.js extension to the Portal.
+- Select the Ledger account that was imported during the last step, and check the toggle so the Portal knows the device is a Ledger. It can now be used to participate in dApp staking.
+
+
+
+
+
+For detailed information about dApp staking, or how to stake on the EVM side of the Astar Portal using a Ledger device, please refer to the [Astar official documentation](/docs/build/dapp-staking/for-stakers/) or the [Ledger EVM staking guide](./ledger-evm.md).
+
+:::tip
+If you receive a **Ledger error: Failed to execute 'claimInterface' on 'USBDevice': Unable to claim interface** message during the dApp staking claim process, ensure you are performing the operation using a Chromium-based browser such as Chrome or Brave, and the Ledger device connection method is WebHID, as outlined in the previous step.
+:::
+
+
+
+
+
+## Ledger Nano S, Nano S Plus, and Nano X device limitations
+
+Consider the following scenario: You stake on 2 dApps, and accumulate 2 eras worth of rewards each day. What happens if you do not claim the rewards for an entire month?
+
+- Ledger Nano S: the device supports claiming a maximum of **2 eras** at a time, so for as long as there are more eras to claim in the dApp staking dashboard, you should continue to claim. Based on a month's worth of accumulated rewards (30 eras) on each of the 2 dApps, 30 claims would need to be initiated using a Nano S.
+- Ledger Nano X: based on a maximum of **6 eras per claim**, 10 claims would need to be initiated using a Nano X.
+
+Staking on multiple dApps using a Ledger device may substantially increase the amount of time and/or administrative overhead required to participate in dApp staking in order to maximize benefits. However, although multiple claims may be required to retrieve all rewards from the Portal using a Ledger device, the fees remain the same per era claimed whether they occur in batches, or as individual transactions.
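+
+As a quick sanity check of the arithmetic above (the per-device era limits per claim are taken from the scenario described; treat them as illustrative):
+
+```js
+// Number of claim transactions needed to clear a backlog of unclaimed eras,
+// when each claim covers at most `maxErasPerClaim` eras for a single dApp.
+function claimsNeeded(unclaimedEras, stakedDapps, maxErasPerClaim) {
+  return Math.ceil(unclaimedEras / maxErasPerClaim) * stakedDapps;
+}
+
+// Scenario above: 2 dApps, 30 eras (one month) unclaimed on each.
+console.log(claimsNeeded(30, 2, 2)); // Nano S (2 eras/claim) -> 30 claims
+console.log(claimsNeeded(30, 2, 6)); // Nano X (6 eras/claim) -> 10 claims
+```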
diff --git a/docs/build/build-on-layer-1/integrations/wallets/subwallet.md b/docs/build/build-on-layer-1/integrations/wallets/subwallet.md
new file mode 100644
index 0000000..a99a8d6
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/wallets/subwallet.md
@@ -0,0 +1,50 @@
+---
+sidebar_position: 2
+---
+
+# SubWallet
+
+## Overview
+
+SubWallet, Polkadot{.js}, and Talisman extensions allow dApps to connect to them through the `injectedWeb3` object of the browser `window`.
+
+- SubWallet (exposed under the key `subwallet-js`)
+- Polkadot{.js} (exposed under the key `polkadot-js`)
+- Talisman (exposed under the key `talisman`)
+
+![20](img/20.png)
+
+You can inspect the `injectedWeb3` object in your browser's devtools.
+
+![21](img/21.png)
+
+## How to integrate with a dApp
+
+:::info
+Refer to these examples:
+
+- GitHub Repository
+- Demo App:
+- Video Demo:
+:::
+
+- Check that the extension is active:
+ - When a wallet extension is active in a browser, it modifies `window.injectedWeb3` by adding its interaction object under its own key.
+ - For example, check for the SubWallet extension with this code: `window.injectedWeb3 && window.injectedWeb3['subwallet-js']`
+- Enable integration with your dApp by calling the `enable()` method of the extension's interaction object
+
+```js
+const SubWalletExtension = window.injectedWeb3['subwallet-js'];
+const extension = await SubWalletExtension.enable();
+```
+
+After running this code, the extension will show a popup asking the user to confirm integration with your dApp.
+
+- After enabling, the `extension` variable may contain the following objects:
+ - `accounts`: allows getting account data with two methods: `get()` and `subscribe()`.
+ - `signer`: allows signing data with two methods: `signPayload()` and `signRaw()`.
+ - `metadata`: allows getting the additional metadata list with `get()`, and adding or updating it with `provide()`.
+
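+Putting the pieces together, a small helper that detects the extension, enables it, and reads the accounts might look like the sketch below (the name passed to `enable()` is an arbitrary label identifying your dApp):
+
+```js
+// Detect, enable, and read accounts from the SubWallet extension.
+// Returns null when the extension is not installed.
+async function getSubWalletAccounts(dappName) {
+  const injected = typeof window !== 'undefined' && window.injectedWeb3;
+  const provider = injected && injected['subwallet-js'];
+  if (!provider) return null;
+
+  const extension = await provider.enable(dappName); // triggers the confirmation popup
+  return extension.accounts.get();                   // one-shot read of account data
+}
+```
+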
+## Use with TypeScript
+
+If your dApp is written in TypeScript, you will need to add `@polkadot/extension-inject` to your `package.json` to get the extension interfaces.
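+
+For reference, a minimal sketch of the relevant `package.json` entry (pin a concrete version in a real project):
+
+```json
+{
+  "devDependencies": {
+    "@polkadot/extension-inject": "latest"
+  }
+}
+```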
diff --git a/docs/build/build-on-layer-1/integrations/wallets/transak.md b/docs/build/build-on-layer-1/integrations/wallets/transak.md
new file mode 100644
index 0000000..7f487f1
--- /dev/null
+++ b/docs/build/build-on-layer-1/integrations/wallets/transak.md
@@ -0,0 +1,29 @@
+---
+sidebar_position: 3
+---
+
+# Transak
+
+## Overview
+
+Transak is a developer integration that lets users buy cryptocurrency within a dApp, or directly from a website.
+
+With Transak you can onboard mainstream users into your dApp, protocol, game, or wallet app and also increase your revenue. Transak handles the KYC, regulation & compliance, fiat payment methods, and crypto coverage.
+
+Whether you're a small startup or a large established firm looking for a fiat on-ramp, integrating and customizing Transak is an easy process. The simplest technical integrations can be done in only five minutes.
+
+## Getting Started
+
+Follow their handy guides to get started on exploring and integrating Transak as quickly as possible.
+
+### Partner Onboarding Process
+
+This guide will lay out the end-to-end steps for onboarding with Transak and integration, up to the point users are able to make live transactions:
+
+[Onboarding and Integration Process Overview](https://docs.transak.com/docs/onboarding-and-integration-process-overview)
+
+### Jump right in and start playing around
+
+Would you like to explore and see what's possible with Transak first before onboarding? Feel free to [set up an account](https://docs.transak.com/docs/setup-your-partner-account) and [create a demo integration](https://docs.transak.com/docs/integration-options). They can be done in only a few minutes.
+
+Additionally, there are other guides available, such as [using the Partner dashboard](https://dashboard.transak.com/).
diff --git a/docs/build/build-on-layer-1/introduction/_category_.json b/docs/build/build-on-layer-1/introduction/_category_.json
new file mode 100644
index 0000000..1ece9a0
--- /dev/null
+++ b/docs/build/build-on-layer-1/introduction/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Introduction",
+ "position": 1
+}
diff --git a/docs/build/build-on-layer-1/introduction/address_format.md b/docs/build/build-on-layer-1/introduction/address_format.md
new file mode 100644
index 0000000..eda5e51
--- /dev/null
+++ b/docs/build/build-on-layer-1/introduction/address_format.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 6
+---
+
+# Address Format
+
+The address format used in Substrate-based chains like Astar is SS58. SS58 is a modification of Bitcoin's Base58Check format, with some minor changes. Notably, the format contains an address-type prefix that identifies an address as belonging to a specific network. Astar Network is special in the Polkadot ecosystem because it's the only parachain that supports both EVM and Wasm smart contracts. Two distinct virtual machines necessitate the use of two kinds of addresses:
+
+1. An Astar Native address, or SS58 address, which encodes a 256-bit public key.
+2. An Astar EVM address, or H160 address, which starts with 0x and uses 160 bits.
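+
+As a rough illustration of the surface difference between the two formats, the sketch below performs format checks only; it does not validate checksums or imply any mapping between the two address kinds:
+
+```js
+// An H160 (EVM) address: '0x' followed by 40 hex characters (160 bits).
+function looksLikeEvmAddress(addr) {
+  return /^0x[0-9a-fA-F]{40}$/.test(addr);
+}
+
+// An SS58 (native) address: Base58 characters only (no 0, O, I, or l),
+// typically 46-48 characters for a 32-byte (256-bit) public key
+// plus the network prefix and checksum.
+function looksLikeSs58Address(addr) {
+  return /^[1-9A-HJ-NP-Za-km-z]{46,48}$/.test(addr);
+}
+
+// The well-known Substrate dev address "Alice" passes the SS58 check:
+console.log(looksLikeSs58Address('5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY')); // true
+console.log(looksLikeEvmAddress('0x' + 'ab'.repeat(20)));                              // true
+```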
diff --git a/docs/build/build-on-layer-1/introduction/astar_family.md b/docs/build/build-on-layer-1/introduction/astar_family.md
new file mode 100644
index 0000000..3119123
--- /dev/null
+++ b/docs/build/build-on-layer-1/introduction/astar_family.md
@@ -0,0 +1,63 @@
+---
+sidebar_position: 5
+---
+
+# Astar Network Family
+
+Prior to commencing development, it's important to understand the Astar Network family, and choose an appropriate network based on what you would like to do. Currently, there are a number of networks available, including the Local network which runs exclusively within your development environment. All networks support Substrate and EVM RPCs.
+
+![Astar networks](img/networks.png)
+
+## Local Networks
+
+### Local Node
+
+You can clone the Astar repository and run the local node provided, or download the precompiled binary and run it, instead. Both methods are described in the [Build Environment](../environment) section.
+
+### Swanky Node
+
+Swanky Node is a Substrate based blockchain configured to enable the smart contract module `pallet-contracts`, and other features that assist local development of Wasm smart contracts.
+For more information about Swanky Node, check out the [Swanky Suite](/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/index.md) section.
+
+### Zombienet
+
+With Zombienet users can download arbitrary Relay Chain and parachain binaries (or use images) to setup a configurable local test network. Users will have access to all privileged actions on the Relay Chain and parachains, which simplifies testing. For more information about Zombienet, check out the [Build Environment](../environment/zombienet-testing) chapter.
+
+## Testnets
+
+### Shibuya
+
+Shibuya has nearly the same chain specifications as Shiden and Astar, and provides an ideal environment for developers to test and debug prior to launching their dApp on mainnet.
+Shibuya runs as a parachain of the Tokio Relay Chain, which is managed internally by the Astar team and supports Shibuya as its only test parachain.
+
+The Shibuya native token symbol is SBY.
+
+To obtain test tokens from the faucet, please visit the Astar Portal and connect to Shibuya. If for any reason the faucet is empty, please contact the Astar team on Discord.
+
+### Rocstar
+
+Rococo is a test Relay Chain used by the Polkadot & Kusama communities. The Astar team has deployed a parachain to it called Rocstar, which is mainly used for cross-chain integrations with other teams in the ecosystem. To obtain test tokens for Rocstar, please contact the Astar team on Discord.
+
+The Rocstar native token symbol is ROC.
+
+## Mainnets
+
+Astar has two mainnets, like most parachains in the Polkadot ecosystem. One on Kusama Relay Chain, and the other on Polkadot Relay chain.
+
+### Shiden
+
+Shiden is a parachain connected to the Kusama Relay Chain, and used to deploy and test new releases of Astar runtime in a live production (canary) environment. Shiden is not considered a testnet since SDN has its own tokenomics and value, but is used to validate and stabilize new software releases and upgrades for Astar Network.
+
+The Shiden native token symbol is SDN.
+
+### Astar
+
+By now you may have already guessed that Astar Network is a parachain on the Polkadot Relay Chain.
+
+The Astar native token symbol is ASTR.
+
+## Questions and Assignments:
+
+1. Using the account you created in previous chapters, visit the Astar Portal, connect to the Shibuya testnet, and claim some tokens from the faucet. You will need them later to deploy contracts on Shibuya.
+2. Are you able to deduce how to transfer SDN tokens to Astar, and swap them for ASTR tokens?
+3. Is there a market for buying and selling SBY tokens?
diff --git a/docs/build/build-on-layer-1/introduction/create_account.md b/docs/build/build-on-layer-1/introduction/create_account.md
new file mode 100644
index 0000000..43a7a92
--- /dev/null
+++ b/docs/build/build-on-layer-1/introduction/create_account.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 3
+---
+
+# Create Account
+If you have never created a native Astar account, please follow the instructions in the [User Guide](/docs/use/manage-wallets/create-wallet.md).
+
+If you are building EVM smart contracts, you will need MetaMask. Watch this short video to learn how to set it up.
+
+
diff --git a/docs/build/build-on-layer-1/introduction/img/6.png b/docs/build/build-on-layer-1/introduction/img/6.png
new file mode 100644
index 0000000..792166e
Binary files /dev/null and b/docs/build/build-on-layer-1/introduction/img/6.png differ
diff --git a/docs/build/build-on-layer-1/introduction/img/7.png b/docs/build/build-on-layer-1/introduction/img/7.png
new file mode 100644
index 0000000..95cfd1d
Binary files /dev/null and b/docs/build/build-on-layer-1/introduction/img/7.png differ
diff --git a/docs/build/build-on-layer-1/introduction/img/8.png b/docs/build/build-on-layer-1/introduction/img/8.png
new file mode 100644
index 0000000..a750ca1
Binary files /dev/null and b/docs/build/build-on-layer-1/introduction/img/8.png differ
diff --git a/docs/build/build-on-layer-1/introduction/img/9.png b/docs/build/build-on-layer-1/introduction/img/9.png
new file mode 100644
index 0000000..e82b5a7
Binary files /dev/null and b/docs/build/build-on-layer-1/introduction/img/9.png differ
diff --git a/docs/build/build-on-layer-1/introduction/img/networks.png b/docs/build/build-on-layer-1/introduction/img/networks.png
new file mode 100644
index 0000000..3391023
Binary files /dev/null and b/docs/build/build-on-layer-1/introduction/img/networks.png differ
diff --git a/docs/build/build-on-layer-1/introduction/img/switch_astar.png b/docs/build/build-on-layer-1/introduction/img/switch_astar.png
new file mode 100644
index 0000000..eb5c608
Binary files /dev/null and b/docs/build/build-on-layer-1/introduction/img/switch_astar.png differ
diff --git a/docs/build/build-on-layer-1/introduction/index.md b/docs/build/build-on-layer-1/introduction/index.md
new file mode 100644
index 0000000..e16cbb1
--- /dev/null
+++ b/docs/build/build-on-layer-1/introduction/index.md
@@ -0,0 +1,24 @@
+import Figure from '/src/components/figure'
+
+# Introduction
+
+
+
+To make use of this documentation effectively, readers should possess a general understanding of programming basics. The languages used throughout are mainly Rust, Solidity, and JavaScript; previous knowledge of them is not necessary, but highly beneficial. For a deeper understanding of the material in these sections, we recommend reviewing supplemental resources covering these languages, which will improve the overall learning experience and your ability to follow the practical code examples provided.
+
+### Do I need blockchain knowledge to follow this documentation?
+Blockchain knowledge is useful but not required. Everything you need to know about how to start building is contained within these sections.
+
+### I'm a Polkadot builder, do I need this?
+If you are already a builder in the Polkadot/Kusama ecosystem, you can most likely skip the Introduction chapter and jump right into reading about our Networks. [INSERT LINKS]
+
+### Do I need to be a developer to understand Introduction chapter?
+No programming skills are needed to follow the Introduction chapter, and it will serve you well later when you step into more advanced topics.
+
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/introduction/node_interact.md b/docs/build/build-on-layer-1/introduction/node_interact.md
new file mode 100644
index 0000000..d4b6f31
--- /dev/null
+++ b/docs/build/build-on-layer-1/introduction/node_interact.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 2
+---
+
+# Interacting with Polkadot
+
+Gaining an understanding of the material covered in this section will allow you to debug quickly and easily, should you run into issues.
+
+Developers can interact with Polkadot and its parachains using the [Polkadot{.js}](https://polkadot.js.org) portal. Let's run through a few simple tasks to help you get accustomed to the interface.
+
+First, visit the Polkadot.js apps portal and switch to an Astar node by selecting Astar under **Polkadot & Parachains**, then press Switch. You can toggle the network selection later by clicking the network name in the top left.
+
+
+
+
+
+![Switch to Astar](img/switch_astar.png)
+
+## Review Accounts and Balances
+Under the **Accounts** tab, you will be able to review your accounts and balances.
+If you are using the Polkadot.js UI for the first time, you will not see any accounts yet. You will learn how to create accounts in the next section.
+
+## Explore Block Production
+New blocks will appear on screen as they are finalized. Note the block production time.
+
+## Explore Contents of the Latest Block
+Select **Explore** under the **Network tab**.
+
+Click on the latest block.
+Notice the calls that were inputs to the state change for this block; these calls are called extrinsics.
+Browse through the events that were emitted during the production of this block. You will notice events like `balances.Deposit` and `balances.Withdrawal`, which are emitted when a transfer of funds occurs within a block.
+
+## Storage query
+Select **Chain State** under the **Developer** tab.
+
+Here you will find a drop-down menu with all the pallets used on Astar Network. You can query the state of any storage item in these pallets.
+Let's check which assets are defined on Astar Network.
+Select the `assets` pallet and read the storage item called `asset`. Disable the `include option` toggle to list all available values, and press the `+` button. The output is the list of all available assets. This is raw data; for a more user-friendly presentation, you can view the same information by selecting **Assets** under the **Network** tab.
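+
+The same query can be scripted. Below is a minimal sketch using the `@polkadot/api` library (assumed to be installed separately with `npm i @polkadot/api`; the endpoint is Astar's public RPC):
+
+```js
+const ASTAR_RPC = 'wss://rpc.astar.network';
+
+// Equivalent of Developer -> Chain State -> assets.asset with the "+" button.
+async function listAssets() {
+  const { ApiPromise, WsProvider } = await import('@polkadot/api');
+  const api = await ApiPromise.create({ provider: new WsProvider(ASTAR_RPC) });
+  const entries = await api.query.assets.asset.entries();
+  for (const [key, value] of entries) {
+    console.log(key.args[0].toString(), value.toHuman()); // asset id and details
+  }
+  await api.disconnect();
+}
+```
+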
diff --git a/docs/build/build-on-layer-1/introduction/polkadot_relay.md b/docs/build/build-on-layer-1/introduction/polkadot_relay.md
new file mode 100644
index 0000000..4aea110
--- /dev/null
+++ b/docs/build/build-on-layer-1/introduction/polkadot_relay.md
@@ -0,0 +1,65 @@
+---
+sidebar_position: 1
+---
+
+# Polkadot Relay Chain
+Before you get started on your journey towards becoming an Astar Network hacker, it will be beneficial to know what Polkadot is and its relationship to Astar. If you are already building on Astar, you will not need the sections covering Substrate and how to create a runtime, but it will still be helpful to understand the environment, the terminology, and how to leverage this interconnected network of blockchains that's right within your grasp.
+
+Polkadot is a multi-chain environment which enables specialized blockchains (called Parachains) to communicate with each other in a secure, trustless environment.
+
+Astar is a blockchain connected to the Polkadot Relay Chain, specialized for:
+* Executing all types of smart contracts.
+* Providing a hybrid EVM + Wasm environment supporting Cross-VM (XVM) smart contract calls.
+* Incentivizing ecosystem innovation and providing basic income for dApp developers.
+* Seamlessly aggregating features or assets from parachains in the ecosystem.
+
+## Blockchain Basics
+A blockchain is a decentralized ledger that records information in a sequence of blocks. The information contained in a block is an ordered set of instructions that may or may not result in a change in state.
+
+In a blockchain network, individual computers—called nodes—communicate with each other to form a decentralized peer-to-peer (P2P) network. There is no central authority that controls the network and, typically, each node that participates in block production stores a copy of the blocks that make up the canonical chain.
+
+In most cases, users interact with a blockchain by submitting a request that might result in a change in state, for example, a request to change the owner of a file or to transfer funds from one account to another. These transaction requests are gossiped to other nodes on the network and assembled into a block by a block author. To ensure the security of the data on the chain and the ongoing progress of the chain, the nodes use some form of consensus to agree on the state of the data in each block and on the order of transactions executed. [Read more...](https://docs.substrate.io/fundamentals/blockchain-basics/)
+
+## What is Polkadot
+To get started, let's kick it off with two short videos that do a very good job at explaining some core concepts around Polkadot. First, watch Bill Laboon, Director of Education and Support at the Web3 Foundation, explain the basics of Polkadot.
+
+
+
+Ok, you can’t learn it all in one minute. But how about in 5 minutes? Have a look at this excellent video from DeFi Teller, explaining how Polkadot works.
+
+
+
+## How the Relay Chain Works
+The Polkadot network uses a sharded model where shards, called "parachains", allow transactions to be processed in parallel instead of sequentially. Each parachain in the network has a unique state transition function. Polkadot is a Relay Chain acting as the main chain of the system.
+
+Parachains construct and propose blocks to validators on the Relay Chain, where the blocks undergo rigorous availability and validity checks before being added to the finalized chain. As the Relay Chain provides the security guarantees, collators (full nodes of these parachains) don't have any security responsibilities, and thus do not require a robust incentive system. This is how the entire network stays up to date with the many transactions that take place.
+
+## Substrate
+Based on Polkadot's design, as long as a chain's logic can compile to Wasm and adheres to the Relay Chain API, then it can connect to the Polkadot network as a parachain.
+However, the majority of parachains today are built using [Substrate](https://substrate.io/) because Substrate-based chains are easy to integrate into Polkadot or Kusama to become a parachain. Essentially, Substrate is the SDK which can be used to build parachains and Polkadot is the means of securing the chains and allowing them to communicate with each other.
+
+At a high level, a Substrate node provides a layered environment with two main elements:
+1. An outer node that handles network activity such as peer discovery, managing transaction requests, reaching consensus with peers, and responding to RPC calls.
+2. A runtime that contains all of the business logic for executing the state transition function of the blockchain.
+
+Read more about [Architecture](https://docs.substrate.io/fundamentals/architecture/).
+
+### FRAME
+FRAME is an acronym for Framework for Runtime Aggregation of Modularized Entities, which encompasses a significant number of modules and support libraries that simplify runtime development. In Substrate, these modules (called pallets) offer customizable business logic for different use cases and features that you might want to include in your runtime. For example, there are pallets that provide a framework of business logic for staking, consensus, governance, and other common activities.
+Read more about [Runtime development](https://docs.substrate.io/fundamentals/runtime-development/).
+
+## Applications Running on a Blockchain
+Applications that run on a blockchain, often referred to as decentralized applications or dApps, are typically web applications written using front-end frameworks, but powered by smart contracts on the backend, to affect the blockchain state.
+
+A **smart contract** is a program that runs on a blockchain and executes transactions on behalf of users under specific conditions. Developers can write smart contracts to ensure that the outcomes of programmatically-executed transactions are recorded, and can't be tampered with. Yet, smart contracts operate in a sandboxed environment, where developers don't have access to underlying blockchain functionality, such as consensus, storage, or transaction layers, and instead, abide by a chain's fixed rules and restrictions. Smart contract developers often accept these limitations as a tradeoff that shortens the development lifecycle, by avoiding having to make core design decisions.
+
+## Where Do Smart Contracts Execute?
+The Polkadot runtime does not support smart contracts. Smart contracts require a virtual machine (VM) environment in which they can be executed, the most well-known and widely supported being the Ethereum Virtual Machine (EVM). Substrate FRAME contains modules that support Wasm smart contract execution, as well as the EVM.
+
+### Ethereum Virtual Machine (EVM)
+The Ethereum Virtual Machine (EVM) is a virtual computer with components that enable Ethereum network participants to store data and agree on the state of that data. On a Substrate-based blockchain, the core responsibilities of the EVM are implemented in the EVM pallet, which is responsible for executing Ethereum contract bytecode compiled from a high-level language like Solidity. Astar EVM provides a fully Ethereum Virtual Machine compatible platform, which you can learn more about in the [EVM chapter](/docs/build/build-on-layer-1/smart-contracts/EVM/index.md).
+
+### Substrate Virtual Machine for Wasm Contracts
+Substrate also ships with a module for smart contracts, called `pallet-contracts`. If a parachain is developed on Substrate it can easily add smart contract functionality by including this pallet. Astar supports this Polkadot Native approach to smart contracts, and you can learn more in the [Wasm chapter](/docs/build/build-on-layer-1/smart-contracts/wasm/index.md).
+
+
+
diff --git a/docs/build/build-on-layer-1/nodes/_category_.json b/docs/build/build-on-layer-1/nodes/_category_.json
new file mode 100644
index 0000000..bac62f9
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Run A Node",
+ "position": 9
+}
diff --git a/docs/build/build-on-layer-1/nodes/archive-node/_category_.json b/docs/build/build-on-layer-1/nodes/archive-node/_category_.json
new file mode 100644
index 0000000..12087fd
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/archive-node/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Run an Archive Node",
+ "position": 1
+}
diff --git a/docs/build/build-on-layer-1/nodes/archive-node/binary.md b/docs/build/build-on-layer-1/nodes/archive-node/binary.md
new file mode 100644
index 0000000..26ca136
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/archive-node/binary.md
@@ -0,0 +1,247 @@
+---
+sidebar_position: 1
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Binary
+
+In this guide, we will use the binary provided in the [Astar releases](https://github.com/AstarNetwork/Astar).
+
+If you have experience with Rust compilation, you can also build the binary from [here](https://github.com/astarnetwork/astar).
+
+## Let's get started
+
+Let's start with updating our server. Connect to your server and update:
+
+```sh
+sudo apt-get update
+sudo apt-get upgrade
+sudo apt install -y adduser libfontconfig1
+```
+
+## Create dedicated user and directory
+
+Download the [latest release](https://github.com/AstarNetwork/Astar/releases/latest) from Github:
+
+```sh
+wget $(curl -s https://api.github.com/repos/AstarNetwork/Astar/releases/latest | grep "tag_name" | awk '{print "https://github.com/AstarNetwork/Astar/releases/download/" substr($2, 2, length($2)-3) "/astar-collator-v" substr($2, 3, length($2)-4) "-ubuntu-x86_64.tar.gz"}')
+tar -xvf astar-collator*.tar.gz
+```
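The `wget $(curl … | awk …)` pipeline above just builds the tarball URL out of the latest release's `tag_name`. A minimal sketch of that transformation, using a hypothetical tag value:

```sh
# Raw `tag_name` field as the grep step would emit it (hypothetical tag)
TAG='"tag_name": "v5.28.0",'
# Strip the key, quotes, and trailing comma to recover the bare tag
VER=$(echo "$TAG" | sed 's/.*: //; s/[",]//g')
URL="https://github.com/AstarNetwork/Astar/releases/download/${VER}/astar-collator-${VER}-ubuntu-x86_64.tar.gz"
echo "$URL"
```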
+
+Create a dedicated user for the node and move the **node binary**:
+
+```sh
+sudo useradd --no-create-home --shell /usr/sbin/nologin astar
+sudo mv ./astar-collator /usr/local/bin
+sudo chmod +x /usr/local/bin/astar-collator
+```
+
+Create a dedicated directory for the **chain storage data**:
+
+```sh
+sudo mkdir /var/lib/astar
+sudo chown astar:astar /var/lib/astar
+```
+
+## Set systemd service
+
+To run a stable collator node, a **systemd service** has to be set up and activated. This ensures the node restarts automatically, even after a server reboot.
+
+Create a service file:
+
+```sh
+sudo nano /etc/systemd/system/astar.service
+```
+
+## Service parameters
+
+:::tip
+Please make sure to change **{NODE_NAME}**
+:::
+
+
+
+
+```sh
+[Unit]
+Description=Astar Archive node
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --chain astar \
+ --base-path /var/lib/astar \
+ --rpc-external \
+ --rpc-methods Safe \
+ --rpc-max-request-size 1 \
+ --rpc-max-response-size 1 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```sh
+[Unit]
+Description=Shiden Archive node
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --chain shiden \
+ --base-path /var/lib/astar \
+ --rpc-external \
+ --rpc-methods Safe \
+ --rpc-max-request-size 1 \
+ --rpc-max-response-size 1 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```sh
+[Unit]
+Description=Shibuya Archive node
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --chain shibuya \
+ --base-path /var/lib/astar \
+ --rpc-external \
+ --rpc-methods Safe \
+ --rpc-max-request-size 1 \
+ --rpc-max-response-size 1 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+:::caution
+EVM RPC calls are disabled by default and require an additional flag to be enabled. Please refer to this page [INSERT LINK] for more info.
+:::
+
+Start the service:
+
+```sh
+sudo systemctl start astar.service
+```
+
+Check the node log to ensure proper syncing:
+
+```sh
+journalctl -f -u astar.service -n100
+```
+
+Enable the service:
+
+```sh
+sudo systemctl enable astar.service
+```
+
+You can test the node health through the RPC port with this command:
+
+```sh
+curl -H "Content-Type: application/json" --data '{ "jsonrpc":"2.0", "method":"system_health", "params":[],"id":1 }' localhost:9944
+```
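The response follows the standard Substrate `system_health` shape (`peers`, `isSyncing`, `shouldHavePeers`). As a minimal sketch, using an illustrative sample response, a script can extract the sync flag and consider the node ready once it turns `false`:

```sh
# Sample system_health response (illustrative values); in practice, capture
# the output of the curl command above into this variable.
HEALTH='{"jsonrpc":"2.0","result":{"peers":25,"isSyncing":true,"shouldHavePeers":true},"id":1}'

# Extract the isSyncing flag with POSIX tools; the node is fully synced
# once this prints "false".
SYNCING=$(echo "$HEALTH" | grep -o '"isSyncing":[a-z]*' | cut -d: -f2)
echo "$SYNCING"
```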
+
+## Next steps
+
+Whatever the intended usage, first wait for the chain to fully sync by checking the [node log](/docs/build/build-on-layer-1/nodes/archive-node/binary.md#get-node-logs).
+
+The next steps depend on what you plan to do with your archive node.
+
+- In most cases, you will want to access the node from outside. In that case, an [nginx server](/docs/build/build-on-layer-1/nodes/archive-node/nginx.md) is the recommended option.
+- If you run your dApp on the same server as the node, you can access it directly via the `localhost` address. This setup is recommended for testing purposes only.
+- If you run the node locally for testing purposes, you can switch the network in the [Polkadot.js portal](https://polkadot.js.org/apps) and explore the chain:
+
+![1](img/1.png)
+
+---
+
+## Extra operations
+
+### Get node logs
+
+To get the last 100 lines from the node logs, use the following command:
+
+```sh
+journalctl -f -u astar.service -n100
+```
+
+### Indexers and oracles
+
+To access data from indexers (e.g. The Graph) or oracles (e.g. Chainlink), you need to add the debug flag below to the node launch command, after the `astar-collator` line:
+
+`--ethapi=debug`
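As a sketch, the `ExecStart` block of the service file shown earlier would simply gain this one extra flag (Astar example; all other flags unchanged):

```sh
ExecStart=/usr/local/bin/astar-collator \
 --pruning archive \
 --rpc-cors all \
 --name {NODE_NAME} \
 --chain astar \
 --base-path /var/lib/astar \
 --rpc-external \
 --rpc-methods Safe \
 --rpc-max-request-size 1 \
 --rpc-max-response-size 1 \
 --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
 --ethapi=debug
```

After editing the unit file, apply the change with `sudo systemctl daemon-reload` followed by `sudo systemctl restart astar.service`.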
+
+### Upgrade node
+
+When an upgrade is necessary, node operators are notified in our Discord and Element group.
+
+Download the [latest release](https://github.com/AstarNetwork/Astar/releases/latest) from Github
+
+```sh
+wget $(curl -s https://api.github.com/repos/AstarNetwork/Astar/releases/latest | grep "tag_name" | awk '{print "https://github.com/AstarNetwork/Astar/releases/download/" substr($2, 2, length($2)-3) "/astar-collator-" substr($2, 3, length($2)-4) "-ubuntu-x86_64.tar.gz"}')
+tar -xvf astar-collator*.tar.gz
+```
+
+Move the new release binary and restart the service:
+
+```sh
+sudo mv ./astar-collator /usr/local/bin
+sudo chmod +x /usr/local/bin/astar-collator
+sudo systemctl restart astar.service
+```
+
+### Purge node
+
+To start a node from scratch without any chain data, just wipe the chain data directory:
+
+```sh
+sudo systemctl stop astar.service
+sudo rm -R /var/lib/astar/chains/astar/db*
+sudo systemctl start astar.service
+```
+
+### Snapshot
+
+Please refer to the [**snapshot page**](/docs/build/build-on-layer-1/nodes/snapshots.md).
diff --git a/docs/build/build-on-layer-1/nodes/archive-node/docker.md b/docs/build/build-on-layer-1/nodes/archive-node/docker.md
new file mode 100644
index 0000000..b9805ad
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/archive-node/docker.md
@@ -0,0 +1,185 @@
+---
+sidebar_position: 2
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Docker
+
+A **Docker container** allows you to easily run a node without depending on the platform it is running on. This method should only be used if you already have experience with Docker containers.
+
+## Installation
+
+Start by installing docker: [How to install Docker on Ubuntu](https://linuxize.com/post/how-to-install-and-use-docker-on-ubuntu-20-04/)
+
+Create a local directory for the **chain storage data** and a dedicated user:
+
+```sh
+sudo mkdir /var/lib/astar
+sudo useradd --no-create-home --shell /usr/sbin/nologin astar
+sudo chown astar:astar /var/lib/astar
+```
+
+## Start Docker node
+
+This guide goes over the process of starting a container with both WS and RPC endpoints. For a single type of endpoint, simply remove the unnecessary port and command.
+
+Launch the docker node in detached mode:
+
+:::tip
+Make sure to change the {NODE_NAME}
+:::
+
+
+
+
+```sh
+docker run -d \
+--name astar-container \
+-u $(id -u astar):$(id -g astar) \
+-p 30333:30333 \
+-p 9944:9944 \
+-v "/var/lib/astar/:/data" \
+staketechnologies/astar-collator:latest \
+astar-collator \
+--pruning archive \
+--rpc-cors all \
+--name {NODE_NAME} \
+--chain astar \
+--base-path /data \
+--rpc-external \
+--rpc-methods Safe \
+--rpc-max-request-size 1 \
+--rpc-max-response-size 1 \
+--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+
+```sh
+docker run -d \
+--name shiden-container \
+-u $(id -u astar):$(id -g astar) \
+-p 30333:30333 \
+-p 9944:9944 \
+-v "/var/lib/astar/:/data" \
+staketechnologies/astar-collator:latest \
+astar-collator \
+--pruning archive \
+--rpc-cors all \
+--name {NODE_NAME} \
+--chain shiden \
+--base-path /data \
+--rpc-external \
+--rpc-methods Safe \
+--rpc-max-request-size 1 \
+--rpc-max-response-size 1 \
+--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+
+```sh
+docker run -d \
+--name shibuya-container \
+-u $(id -u astar):$(id -g astar) \
+-p 30333:30333 \
+-p 9944:9944 \
+-v "/var/lib/astar/:/data" \
+staketechnologies/astar-collator:latest \
+astar-collator \
+--pruning archive \
+--rpc-cors all \
+--name {NODE_NAME} \
+--chain shibuya \
+--base-path /data \
+--rpc-external \
+--rpc-methods Safe \
+--rpc-max-request-size 1 \
+--rpc-max-response-size 1 \
+--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+Test the node health via the RPC port with this command:
+
+```sh
+curl -H "Content-Type: application/json" --data '{ "jsonrpc":"2.0", "method":"system_health", "params":[],"id":1 }' localhost:9944
+```
+
+## Next steps
+
+In any case, wait for the chain to be fully synchronized by checking the [node log](/docs/build/build-on-layer-1/nodes/archive-node/binary.md#get-node-logs).
+
+How the archive node will be used will largely determine what steps to follow next:
+- If accessing the node publicly, running an [nginx server](/docs/build/build-on-layer-1/nodes/archive-node/nginx.md) is the recommended option.
+- If the dApp is running on the same server as the node, then it can be accessed directly with the `localhost` address. This setup is recommended for testing purposes only.
+- If running the node locally for testing purposes, switch networks in [Polkadot.js portal](https://polkadot.js.org/apps) to explore the chain:
+
+![1](img/1.png)
+
+## Extra Operations
+
+### Get node logs
+
+To obtain the last 100 lines from the node logs, use the following command:
+
+```sh
+docker logs -f -n 100 $(docker ps -aq --filter name="{CHAIN}-container")
+```
+
+replacing `{CHAIN}` with `astar`, `shiden`, or `shibuya`. For example:
+
+```sh
+docker logs -f -n 100 $(docker ps -aq --filter name="astar-container")
+```
+
+### Indexers and Oracles
+
+To access data from indexers (like The Graph) or oracles (like Chainlink), add the following debug flag to the node launch command, after the `astar-collator` line:
+
+`--ethapi=debug`
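As a sketch, the `docker run` command from the Start Docker node section would gain the flag at the end of the collator arguments (Astar example; the elided lines are unchanged from the original command):

```sh
docker run -d \
--name astar-container \
# ... same user, ports, volume, and image as in "Start Docker node" ...
astar-collator \
--pruning archive \
# ... same flags as before, plus:
--ethapi=debug
```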
+
+## Upgrade node
+
+When a node upgrade is necessary, node operators are notified with instructions in the [Astar Dev Announcement Telegram](https://t.me/+cL4tGZiFAsJhMGJk), [Astar Discord](https://discord.gg/Z3nC9U4), and [Astar Node Upgrade Element channel](https://matrix.to/#/#shiden-runtime-ann:matrix.org). Join and follow any of these channels to receive news about node updates and node upgrades.
+
+To upgrade to the latest node version, stop and remove the actual container:
+
+```sh
+docker stop $(docker ps -q --filter name="{CHAIN}-container")
+docker rm $(docker ps -a -q --filter name="{CHAIN}-container")
+```
+
+where `{CHAIN}` is `astar`, `shiden`, or `shibuya`.
+
+Then start a new container with the information under "Start Docker Node". Chain data will be kept on your machine under `/var/lib/astar/`.
+
+### Purge node
+
+To start a new container from scratch without any chain data, simply remove the container and wipe the chain data directory:
+
+```sh
+docker rm -f $(docker ps -a -q --filter name="{CHAIN}-container")
+sudo rm -R /var/lib/astar/*
+```
+
+where `{CHAIN}` is `astar`, `shiden`, or `shibuya`.
+
+Then start a new container by following the instructions under the [Start Docker node](/docs/build/build-on-layer-1/nodes/archive-node/docker.md#start-docker-node) section.
+
+### Snapshot
+
+Please refer to the [**snapshot page**](/docs/build/build-on-layer-1/nodes/snapshots.md).
diff --git a/docs/build/build-on-layer-1/nodes/archive-node/img/1.png b/docs/build/build-on-layer-1/nodes/archive-node/img/1.png
new file mode 100644
index 0000000..b0a292d
Binary files /dev/null and b/docs/build/build-on-layer-1/nodes/archive-node/img/1.png differ
diff --git a/docs/build/build-on-layer-1/nodes/archive-node/img/2.png b/docs/build/build-on-layer-1/nodes/archive-node/img/2.png
new file mode 100644
index 0000000..9c3eb3e
Binary files /dev/null and b/docs/build/build-on-layer-1/nodes/archive-node/img/2.png differ
diff --git a/docs/build/build-on-layer-1/nodes/archive-node/index.md b/docs/build/build-on-layer-1/nodes/archive-node/index.md
new file mode 100644
index 0000000..94c53d2
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/archive-node/index.md
@@ -0,0 +1,88 @@
+# Archive Node
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+## Overview
+
+An **archive node** stores the full history of past blocks. Most of the time, an archive node is used as an **RPC endpoint**.
+RPC plays a vital role on our network: it connects users and dApps to the blockchain through WebSocket and HTTP endpoints. For example, our [public endpoints](/docs/build/build-on-layer-1/environment/endpoints) run archive nodes so that anyone can quickly connect to Astar chains.
+
+**DApp projects** should run their own RPC archive node to retrieve the necessary blockchain data without relying on public infrastructure. Public endpoints respond more slowly because of the large number of connected users, and they are rate limited.
+
+:::caution
+Be careful not to confuse an archive node with a **full node**, which has a pruned database: a full node only stores the current state and the most recent blocks (256 blocks by default) and uses much less storage space.
+:::
+
+We maintain three different networks: the Shibuya testnet, Shiden as a parachain of Kusama, and Astar as a parachain of Polkadot.
+
+| Network | Relay Chain | Name | Token |
+|---|---|---|---|
+| Testnet | Tokyo (hosted by Astar) | Shibuya | $SBY |
+| Shiden | Kusama | Shiden | $SDN |
+| Astar | Polkadot | Astar | $ASTR |
+
+## Requirements
+### Machine
+:::note
+- Storage space will increase as the network grows.
+- Archive nodes may require a larger server, depending on the amount and frequency of data requested by a dApp.
+:::
+
+
+
+
+| Component | Requirement |
+|---|---|
+| System | Ubuntu 20.04 |
+| CPU | 8 cores |
+| Memory | 16 GB |
+| Hard Disk | 500 GB SSD (NVMe preferable) |
+
+
+
+
+
+| Component | Requirement |
+|---|---|
+| System | Ubuntu 20.04 |
+| CPU | 8 cores |
+| Memory | 16 GB |
+| Hard Disk | 500 GB SSD (NVMe preferable) |
+
+
+
+
+
+| Component | Requirement |
+|---|---|
+| System | Ubuntu 20.04 |
+| CPU | 4 cores |
+| Memory | 8 GB |
+| Hard Disk | 200 GB SSD (NVMe preferable) |
+
+
+
+
+### Ports
+The Astar node runs in a parachain configuration, meaning it listens on different default ports for the parachain and the embedded relay chain.
+
+|Description| Parachain Port | Relaychain Port | Custom Port Flag |
+|---|---|---|---|
+| P2P | 30333 | 30334 | `--port` |
+| RPC | 9944 | 9945 | `--rpc-port` |
+| Prometheus | 9615 | 9616 | `--prometheus-port` |
+
+For all types of nodes, ports `30333` and `30334` need to be open for incoming traffic in the firewall.
+**Collator nodes should not expose WS and RPC ports to the public.**
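As a sketch, assuming the `ufw` firewall frontend on Ubuntu (substitute the equivalent rules for your own firewall), opening the P2P ports looks like this:

```sh
# Allow inbound P2P traffic for the parachain and the embedded relay chain
sudo ufw allow 30333/tcp
sudo ufw allow 30334/tcp
# Leave 9944/9945 (RPC) and 9615/9616 (Prometheus) closed on a collator
sudo ufw enable
```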
+
+---
+
+## Installation
+
+There are two different ways to run an Astar node:
+
+Using [Binary](/docs/build/build-on-layer-1/nodes/archive-node/binary.md) - run the node from the binary file and set it up as a systemd service
+
+Using [Docker](/docs/build/build-on-layer-1/nodes/archive-node/docker.md) - run the node within a Docker container
diff --git a/docs/build/build-on-layer-1/nodes/archive-node/nginx.md b/docs/build/build-on-layer-1/nodes/archive-node/nginx.md
new file mode 100644
index 0000000..f589473
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/archive-node/nginx.md
@@ -0,0 +1,170 @@
+---
+sidebar_position: 3
+---
+
+# Nginx Server
+
+To access your archive node from outside, you need to install a web server and set up a certificate.
+In this guide, we will use the Nginx server as an example.
+
+## Firewall
+
+Your server will communicate through HTTP ports, so you need to enable ports 80 (HTTP) and 443 (HTTPS) in your firewall.
+
+:::info
+At the end of the configuration, you can close port 80 since only port 443 will be used to access the node. See the section below, *Self-signed certificate*.
+:::
+
+## Domain name
+
+This guide assumes that you have a **domain name** and control over the **DNS**. In this case, you need to add an **A record** with the subdomain you will use and the IP address of your node in your DNS provider's console.
+
+:::info
+If you don't have a domain name, you will have to generate a self-signed certificate and access your node through the raw IP address of your server.
+:::
+
+## Installation
+
+:::info
+In the following steps, don't forget to replace {SUB_DOMAIN} with your full subdomain name.
+Example: ws.astar.awesomedappproject.io
+:::
+
+First, install **Nginx** and **Certbot**:
+
+```sh
+sudo apt-get install nginx snapd
+sudo snap install core; sudo snap refresh core
+sudo snap install --classic certbot
+sudo ln -s /snap/bin/certbot /usr/bin/certbot
+```
+
+Create and enable the site:
+
+```sh
+cd /etc/nginx/sites-available
+sudo cp default {SUB_DOMAIN}
+sudo ln -s /etc/nginx/sites-available/{SUB_DOMAIN} /etc/nginx/sites-enabled/
+```
+
+Edit the site file:
+
+```sh
+sudo nano {SUB_DOMAIN}
+```
+
+Change the `root` and `server_name` to get a file like this:
+
+```
+server {
+ listen 80;
+ listen [::]:80;
+
+ root /var/www/{SUB_DOMAIN}/html;
+ index index.html index.htm index.nginx-debian.html;
+
+ server_name {SUB_DOMAIN};
+
+ location / {
+ try_files $uri $uri/ =404;
+ }
+}
+```
+
+## Generate SSL certificate
+
+Issue the Certbot certificate:
+
+```sh
+sudo certbot certonly --nginx
+```
+
+Certbot will issue the SSL certificate into `/etc/letsencrypt/live`.
+
+## Switch to https
+
+Edit the site file again:
+
+```sh
+sudo nano {SUB_DOMAIN}
+```
+
+Delete the existing lines and set the content as below:
+
+```
+map $http_upgrade $connection_upgrade {
+ default upgrade;
+ '' close;
+}
+
+server {
+
+ # SSL configuration
+ #
+ listen 443 ssl;
+ listen [::]:443 ssl;
+
+ root /var/www/{SUB_DOMAIN}/html;
+
+ server_name {SUB_DOMAIN};
+ ssl_certificate /etc/letsencrypt/live/{SUB_DOMAIN}/fullchain.pem; # managed by Certbot
+ ssl_certificate_key /etc/letsencrypt/live/{SUB_DOMAIN}/privkey.pem; # managed by Certbot
+ ssl_session_timeout 5m;
    ssl_protocols TLSv1.2 TLSv1.3;
+ ssl_ciphers HIGH:!aNULL:!MD5;
+ ssl_prefer_server_ciphers on;
+
+ location / {
+ proxy_pass http://localhost:9944;
+ proxy_pass_request_headers on;
+ proxy_http_version 1.1;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection $connection_upgrade;
+ }
+
+}
+
+```
+:::info
+In the example above, port 9944 is used in `proxy_pass`; this is the node's default port for both WebSocket and HTTP RPC requests (see the ports table on the Archive Node page).
+:::
+
+Check and restart nginx:
+
+```sh
+sudo nginx -t
+sudo systemctl restart nginx
+```
+
+## Usage
+
+That's it: your archive node is set up and available from outside.
+
+If you set up a **WS endpoint**, you can explore the chain from the [Polkadot.js](https://polkadot.js.org/apps) portal using the format `wss://{SUB_DOMAIN}`.
+
+![2](img/2.png)
+
+If you set up an **RPC endpoint**, you can reach it through `https://{SUB_DOMAIN}`.
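To verify the endpoint from any machine, the same health check used earlier in this guide works over HTTPS (replace `{SUB_DOMAIN}` with your own subdomain; this is a sketch, not a runnable command as-is):

```sh
curl -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"system_health","params":[],"id":1}' \
  https://{SUB_DOMAIN}
```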
+
+## Self-signed certificate
+
+In case you do not have a domain name, you need to issue yourself a self-signed certificate:
+
+```sh
+sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt
+sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
+```
+
+Then in the https site config file, you will have to replace the following values:
+
+```
+ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
+ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
+ssl_dhparam /etc/ssl/certs/dhparam.pem;
+```
+
+In all steps, the {SUB_DOMAIN} value will be the node server's IP address.
diff --git a/docs/build/build-on-layer-1/nodes/collator/_category_.json b/docs/build/build-on-layer-1/nodes/collator/_category_.json
new file mode 100644
index 0000000..bb217a7
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Run a Collator Node",
+ "position": 2
+}
diff --git a/docs/build/build-on-layer-1/nodes/collator/learn.md b/docs/build/build-on-layer-1/nodes/collator/learn.md
new file mode 100644
index 0000000..ec833d0
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/learn.md
@@ -0,0 +1,63 @@
+---
+sidebar_position: 1
+---
+
+# Learn about Collators
+
+## Introduction
+
+A collator plays an essential role in our network and is responsible for crucial tasks, including block production and transaction confirmation. A collator needs to maintain a high communication response capability to ensure the seamless operation of the Astar ecosystem.
+
+## Role of collators in the Astar ecosystem
+
+Collators maintain our ecosystem by collecting transactions from users and producing state transition proofs for Relay Chain validators. In other words, collators maintain the network by aggregating parachain transactions into parachain block candidates and producing state transition proofs for validators based on those blocks.
+
+Unlike validators, collator nodes do not secure the network. If a parachain block is invalid, it will be rejected by validators. Therefore the assumption that having more collators is better or more secure is not correct; on the contrary, too many collators may slow down the network. The only nefarious power collators have is transaction censorship. To prevent censorship, a parachain only needs to ensure some neutral collators - but not necessarily a majority. Theoretically, the censorship problem is solved by having just one honest collator (reference: [https://wiki.polkadot.network/docs/learn-collator](https://wiki.polkadot.network/docs/learn-collator)).
+
+Performance of the network depends directly on collators. To ensure optimal performance of the network, a [slashing mechanism](/docs/build/build-on-layer-1/nodes/collator/learn.md#slash-mechanism) is implemented.
+
+### XCMP
+
+Collators are a key element of [XCMP (Cross-Chain Message Passing)](https://wiki.polkadot.network/docs/learn-crosschain). By being full-nodes of the Relay Chain, they are all aware of each other as peers. This makes it possible for them to send messages from parachain A to parachain B.
+
+---
+
+## Aura PoS Consensus
+
+Aura PoS consists of two pallets:
+
+- [Aura pallet](https://crates.parity.io/pallet_aura/index.html)
+- PoS pallet
+
+The first phase in moving to PoS was deploying the Aura pallet: the Aura PoA Collator Phase introduced permissioned block authoring and collator session key setup for the Astar ecosystem. After extensive testing, we deployed the PoS pallet and switched to Aura PoS, enabling permissionless collator staking, network inflation, and rewards.
+
+**Let’s break down the latest phase:**
+
+- **Collator staking**: collators can now take part in securing the network, with a minimum bond of a fixed amount of tokens.
+- **Network inflation**: Astar mainnet has 10% inflation, based on perfect block production every 12 seconds.
+- **Rewards**: a fixed amount will be created at each block and divided between treasury, collators, and dApp staking.
+
+A collator (block producer) is rewarded a fixed amount for each block produced.
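As a back-of-the-envelope sketch of that cadence (illustrative arithmetic only; the actual per-block reward amounts are set on-chain):

```sh
# With one block produced every 12 seconds, the expected yearly block count is:
SECONDS_PER_YEAR=$((365 * 24 * 3600))        # 31536000
BLOCKS_PER_YEAR=$((SECONDS_PER_YEAR / 12))
echo "$BLOCKS_PER_YEAR"
```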
+
+---
+
+## Collator election mechanism
+### Election process
+To join the election process you must register as a collator and bond tokens; see [Collator Requirements](https://docs.astar.network/docs/build/build-on-layer-1/nodes/collator/requirements) for details. When your node fits the parameters and checks all the boxes to become a collator, it will be added to the chain. **Note: if your collator doesn't produce blocks during two sessions (2h), it will be kicked out.**
+
+---
+
+## Collator reward distribution mechanism
+At every block you produce as a collator, rewards are automatically transferred to your account. The reward includes the block reward plus fees.
+
+---
+
+## Slash mechanism
+Since April 2022, a slashing mechanism is implemented on the Astar and Shiden networks: a collator that doesn't produce blocks during two sessions (2 hours) will be slashed 1% of its total stake and kicked out of the active collator set.
+This slashing ensures the best block rate and prevents malicious actors from harming the network without financial consequences.
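To make the penalty concrete, a quick sketch using the 3.2M ASTR collator bond from the requirements page (illustrative; the slash applies to the collator's total stake):

```sh
BOND=3200000             # ASTR bonded by an Astar collator
SLASH=$((BOND / 100))    # 1% slashed after two sessions without blocks
echo "$SLASH"            # ASTR lost, on top of removal from the active set
```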
+
+---
+
+## FAQ
+### What about NPoS?
+Our first intention was to activate NPoS on Shiden Network. After internal testing, we realised this would use a lot of Shiden collator resources. NPoS is not designed for collators in the Polkadot ecosystem (reference: [role of collators](/docs/build/build-on-layer-1/nodes/collator/learn.md#role-of-collators-in-the-astar-ecosystem)). The Astar ecosystem is built to be a dApp hub in the Polkadot ecosystem for smart contracts, with a unique incentive reward mechanism for developers: dApp staking.
diff --git a/docs/build/build-on-layer-1/nodes/collator/requirements.md b/docs/build/build-on-layer-1/nodes/collator/requirements.md
new file mode 100644
index 0000000..da58fef
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/requirements.md
@@ -0,0 +1,104 @@
+---
+sidebar_position: 2
+---
+
+# Collator Requirements
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+## How to become a collator
+
+### Permissionless collator
+
+To become a permissionless collator on our networks, you need to meet the requirements below.
+
+**Collator staking requirements**
+
+
+
+
+
+
+- Bond: 3,200,000 ASTR tokens (3.2M $ASTR)
+- Meet the hardware requirements
+
+If your node stops producing blocks for 1 session, it will be kicked out of the active set and 1% of the bonded funds will be slashed. Running a node with low performance can lead to skipped blocks, which may result in being kicked out of the active set.
+
+
+
+
+
+
+- Bond: 32,000 SDN tokens (32k $SDN)
+- Meet the hardware requirements
+
+If your node stops producing blocks for 1 session, it will be kicked out of the active set and 1% of the bonded funds will be slashed. Running a node with low performance can lead to skipped blocks, which may result in being kicked out of the active set.
+
+
+
+
+:::tip
+Set up your collator with:
+**Extrinsics - CollatorSelection - Register as candidate**
+Onboarding takes **n+1** sessions.
+:::
+
+---
+
+### System requirements
+
+A collator deploys its node on a remote server. You can choose your preferred provider for dedicated servers and your preferred operating system, though we highly recommend Linux. Generally speaking, we recommend selecting a provider/server in your region, as this increases the decentralization of the network.
+
+**Hardware requirements**
+
+Use the charts below to find the basic configuration that guarantees all blocks can be processed in time. If the hardware doesn't meet these requirements, there is a high chance it will malfunction, and you risk being automatically **kicked out of the active set and slashed**.
+
+:::caution
+Make sure your server is a **bare-metal machine dedicated to the collator node**; any unnecessary process running on it will significantly decrease the collator's performance.
+**We strongly discourage using a VPS** to run a collator because of their low performance.
+
+Collators are the nodes that require the most powerful and fastest machines, because they only have a very short time frame to assemble a block and collate it to the relay chain.
+To run a collator, it is absolutely necessary to use a **CPU with a minimum of 4 GHz per core** and an **NVMe SSD** (SATA SSDs are not suitable for collators because they are too slow).
+:::
+
+
+
+
+| Component | Requirement |
+|---|---|
+| System | Ubuntu 20.04 |
+| CPU | 8 cores - minimum 4 GHz per core |
+| Memory | 16 GB |
+| Hard Disk | 500 GB SSD NVMe |
+
+
+
+
+
+| Component | Requirement |
+|---|---|
+| System | Ubuntu 20.04 |
+| CPU | 8 cores - minimum 4 GHz per core |
+| Memory | 16 GB |
+| Hard Disk | 500 GB SSD NVMe |
+
+
+
+
+
+| Component | Requirement |
+|---|---|
+| System | Ubuntu 20.04 |
+| CPU | 4 cores - minimum 3.5 GHz per core |
+| Memory | 8 GB |
+| Hard Disk | 200 GB SSD NVMe |
+
+
+
+
+:::tip
+Shibuya is the perfect network to test your knowledge of running nodes in the Astar ecosystem. To join the collator set on Shibuya, you need to apply for a 32k SBY fund.
+If you have never operated a collator node, we strongly encourage you to spin up a **Shibuya collator** node before thinking about mainnet. A perfect start is our [secure setup guide](/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/index.md).
+:::
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/_category_.json b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/_category_.json
new file mode 100644
index 0000000..4b23371
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Secure Setup Guide",
+ "position": 4
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/building_node.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/building_node.md
new file mode 100644
index 0000000..aacab6c
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/building_node.md
@@ -0,0 +1,300 @@
+---
+sidebar_position: 4
+---
+
+# 4. Building Your Collator
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+## Let's get started
+
+Let's start with updating our server. Connect to your server and update:
+
+```
+sudo apt-get update
+sudo apt-get upgrade
+sudo apt install -y adduser libfontconfig1
+```
+:::note
+the last command (installing ```libfontconfig1```) is optional, and only required if you want to install Grafana in a later section of the Secure Setup Guide.
+:::
+
+## Build the node
+
+To build a collator node, you have three options:
+
+* **From source**: requires experience with Linux
+* **From binary**: the easiest way to start and to update the node with new releases
+* **Run a Docker container**: requires Docker experience
+
+### Build from source
+
+Building the node from source code is the most involved path, but it also produces the binary best optimized for your server.
+
+Make sure your server is ready to build a collator. The instructions that follow do not go into detail; more information is available in the official [Substrate Docs](https://docs.substrate.io/install/linux/):
+
+```
+## Prerequisites (Software required for compilation)
+##
+sudo apt install build-essential
+sudo apt install --assume-yes git clang curl cmake llvm protobuf-compiler
+sudo apt update
+
+## Install Rust
+##
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+source $HOME/.cargo/env
+rustup update nightly
+rustup target add wasm32-unknown-unknown --toolchain nightly
+```
+
+
+Clone the Astar repository:
+
+```
+git clone https://github.com/AstarNetwork/Astar.git
+cd Astar
+```
+
+Make sure you have the latest commit in place:
+
+```
+git checkout
+git pull
+```
+
+Compile the node binary:
+
+```
+CARGO_PROFILE_RELEASE_LTO=true RUSTFLAGS="-C codegen-units=1" cargo build --release
+```
+
+### Build from binaries
+
+The easiest way to install an Astar node is to download the binaries. You can find them here: [Astar releases](https://github.com/AstarNetwork/Astar).
+
+Get the file and extract:
+
+```
+wget $(curl -s https://api.github.com/repos/AstarNetwork/Astar/releases/latest | grep "tag_name" | awk '{print "https://github.com/AstarNetwork/Astar/releases/download/" substr($2, 2, length($2)-3) "/astar-collator-v" substr($2, 3, length($2)-4) "-ubuntu-x86_64.tar.gz"}')
+```
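+
+The one-liner above asks the GitHub API for the latest release tag and rebuilds the download URL with `awk`. As a sketch of what that `awk` program does, you can feed it a hypothetical `tag_name` line by hand (the version `v5.28.0` is made up for illustration):
+
+```sh
+# Simulate the "tag_name" line returned by the GitHub API:
+echo '  "tag_name": "v5.28.0",' \
+  | awk '{print "https://github.com/AstarNetwork/Astar/releases/download/" substr($2, 2, length($2)-3) "/astar-collator-v" substr($2, 3, length($2)-4) "-ubuntu-x86_64.tar.gz"}'
+# prints https://github.com/AstarNetwork/Astar/releases/download/v5.28.0/astar-collator-v5.28.0-ubuntu-x86_64.tar.gz
+```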
+
+```
+tar -xvf astar-collator*.tar.gz
+```
+
+### Run a Docker container
+
+You can find the images on the [Astar Docker Hub](https://hub.docker.com/r/staketechnologies/astar-collator).
+
+Pull the latest Docker image:
+
+```
+docker pull staketechnologies/astar-collator:latest
+```
+
+---
+
+## Launch Your Collator
+
+:::caution
+The following steps are suitable for **binary** usage (built from source or downloaded).
+In case you want to run a Docker container, you will have to adapt those.
+:::
+
+Create a dedicated user for the node and move the **node binary** (in this example, the username is `astar`):
+
+```
+sudo useradd --no-create-home --shell /usr/sbin/nologin astar
+sudo cp ./astar-collator /usr/local/bin
+sudo chmod +x /usr/local/bin/astar-collator
+```
+
+Create a dedicated directory for the **chain storage data**:
+
+```
+sudo mkdir /var/lib/astar
+sudo chown astar:astar /var/lib/astar
+```
+
+Now, let's go to our binary directory and start the collator manually:
+
+
+
+
+```
+cd /usr/local/bin
+
+sudo -u astar ./astar-collator --collator --chain astar --pruning archive --name {COLLATOR_NAME} --base-path /var/lib/astar --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0'
+```
+
+
+
+
+```
+cd /usr/local/bin
+
+sudo -u astar ./astar-collator --collator --chain shiden --pruning archive --name {COLLATOR_NAME} --base-path /var/lib/astar --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0'
+```
+
+
+
+
+```
+cd /usr/local/bin
+
+sudo -u astar ./astar-collator --collator --chain shibuya --pruning archive --name {COLLATOR_NAME} --base-path /var/lib/astar --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0'
+```
+
+
+
+
+:::tip
+Replace **{COLLATOR\_NAME}** with the name you would like to give your node.
+:::
+
+See your node syncing on [https://telemetry.polkadot.io/](https://telemetry.polkadot.io/#list/0x9eb76c5184c4ab8679d2d5d819fdf90b9c001403e9e17da2e14b6d8aec4029c6).
+
+Useful commands to be used in screen:
+
+* `ctrl+a+d` (detach the current session)
+* `screen -ls` (list all running screens)
+* `screen -r` (restore a screen session)
+
+Stop the manual node and kill the screen session:
+
+```
+ctrl+c
+ctrl+a+k
+```
+
+## Set systemd service
+
+To run a stable collator node, a **systemd service** has to be set up and activated. This ensures that the node restarts automatically, even after a server reboot.
+
+Create a service file:
+
+```
+sudo nano /etc/systemd/system/astar.service
+```
+
+Add service parameters (this example is for Astar Network):
+
+
+
+```
+[Unit]
+Description=Astar Collator
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --collator \
+ --name {COLLATOR_NAME} \
+ --chain astar \
+ --base-path /var/lib/astar \
+ --pruning archive \
+ --trie-cache-size 0 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0'
+
+Restart=always
+RestartSec=120
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```
+[Unit]
+Description=Astar Collator
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --collator \
+ --name {COLLATOR_NAME} \
+ --chain shiden \
+ --base-path /var/lib/astar \
+ --pruning archive \
+ --trie-cache-size 0 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0'
+
+Restart=always
+RestartSec=120
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```
+[Unit]
+Description=Astar Collator
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --collator \
+ --name {COLLATOR_NAME} \
+ --chain shibuya \
+ --base-path /var/lib/astar \
+ --pruning archive \
+ --trie-cache-size 0 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0'
+
+Restart=always
+RestartSec=120
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+Start the service:
+
+```
+sudo systemctl start astar.service
+```
+
+Check the node log to make sure everything is syncing fine:
+
+```
+journalctl -f -u astar.service -n100
+```
+
+Enable the service:
+
+```
+sudo systemctl enable astar.service
+```
+
+### Snapshot
+
+Please refer to the [**snapshot page**](/docs/build/build-on-layer-1/nodes/snapshots.md).
+
+
+## Finalizing
+
+To finalize your collator you need to:
+
+* Setup an account
+* Author your session key
+* Set up your session key
+* Verify your identity
+* Bond tokens
+
+This part is covered in the [Spin up a Collator](/docs/build/build-on-layer-1/nodes/collator/spinup_collator.md) chapter.
+
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/configuration.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/configuration.md
new file mode 100644
index 0000000..053058c
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/configuration.md
@@ -0,0 +1,173 @@
+---
+sidebar_position: 6
+---
+
+# 6. Configuration
+
+:::tip
+There is a lot of copy/paste in this section, but it is highly recommended that you try to understand each step. Please use our official Discord for support.
+:::
+
+### Prometheus
+
+Let’s edit the **Prometheus config file** and add all the modules to it:
+
+```sh
+sudo nano /etc/prometheus/prometheus.yml
+```
+
+Add the following code to the file and save `ctrl+o` `ctrl+x`:
+
+```yaml
+global:
+  scrape_interval: 15s
+  evaluation_interval: 15s
+
+rule_files:
+  - 'rules.yml'
+
+alerting:
+  alertmanagers:
+    - static_configs:
+        - targets:
+            - localhost:9093
+
+scrape_configs:
+  - job_name: "prometheus"
+    scrape_interval: 5s
+    static_configs:
+      - targets: ["localhost:9090"]
+  - job_name: "substrate_node"
+    scrape_interval: 5s
+    static_configs:
+      - targets: ["localhost:9615"]
+  - job_name: "node_exporter"
+    scrape_interval: 5s
+    static_configs:
+      - targets: ["localhost:9100"]
+  - job_name: "process-exporter"
+    scrape_interval: 5s
+    static_configs:
+      - targets: ["localhost:9256"]
+```
+
+* `scrape_interval` defines how often Prometheus scrapes targets, while `evaluation_interval` controls how often the software will evaluate the rules.
+* `rule_files` sets the location of Alert manager rules that we will add next.
+* `alerting` contains the alert manager target.
+* `scrape_configs` contain the services Prometheus will monitor.
+
+### Alert rules
+
+Let’s create the `rules.yml` file; it will hold the **rules for Alert Manager**:
+
+```sh
+sudo touch /etc/prometheus/rules.yml
+sudo nano /etc/prometheus/rules.yml
+```
+
+We are going to create **2 basic rules** that will trigger an alert in case the instance is down or the CPU usage crosses 80%. **Add the following lines and save the file:**
+
+```yaml
+groups:
+  - name: alert_rules
+    rules:
+      - alert: InstanceDown
+        expr: up == 0
+        for: 5m
+        labels:
+          severity: critical
+        annotations:
+          summary: "Instance {{ $labels.instance }} down"
+          description: "[{{ $labels.instance }}] of job [{{ $labels.job }}] has been down for more than 5 minutes."
+
+      - alert: HostHighCpuLoad
+        expr: 100 - (avg by(instance)(rate(node_cpu_seconds_total{mode="idle"}[2m])) * 100) > 80
+        for: 0m
+        labels:
+          severity: warning
+        annotations:
+          summary: Host high CPU load (instance Astar Node)
+          description: "CPU load is > 80%\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"
+```
+
+The criteria for **triggering an alert** are set in the `expr:` part. To create your own alerts, you will need to learn and test the different variables that the services we are setting up provide to Prometheus. There are countless possibilities to **personalize your alerts**.
+
+As this part can be time-consuming to learn and build, you can find a summary [list of alerts we like to use](https://pastebin.com/96wbiQN8). Feel free to share your Alert file with the community. You should also have a look at [alerts provided by Parity](https://github.com/paritytech/substrate/tree/master/scripts/ci/monitoring/alerting-rules).
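+
+For example, a low-disk-space rule could be appended to the same `alert_rules` group. This is only a sketch: the 10% threshold and the `/` mountpoint are assumptions to adapt to your server (both metrics come from Node Exporter):
+
+```yaml
+      - alert: HostLowDiskSpace
+        expr: (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 < 10
+        for: 5m
+        labels:
+          severity: warning
+        annotations:
+          summary: Host low disk space (instance Astar Node)
+          description: "Less than 10% disk space left on /\n  VALUE = {{ $value }}"
+```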
+
+Then, check the **rules file**:
+
+```
+promtool check rules /etc/prometheus/rules.yml
+```
+
+And finally, check the **Prometheus config file**:
+
+```
+promtool check config /etc/prometheus/prometheus.yml
+```
+
+
+
+
+
+### Process exporter
+
+**Process exporter** needs a small **config file** to tell it which processes it should take into account:
+
+```
+sudo touch /etc/process-exporter/config.yml
+sudo nano /etc/process-exporter/config.yml
+```
+
+Add the following code to the file and save:
+
+```yaml
+process_names:
+  - name: "{{.Comm}}"
+    cmdline:
+      - '.+'
+```
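+
+With `{{.Comm}}` as the name template and the `.+` wildcard, every process on the host is tracked and grouped by its executable name. If you would rather export metrics only for the collator process, a narrower config could look like this (a sketch; adjust the name to match your binary):
+
+```yaml
+process_names:
+  - name: "astar-collator"
+    cmdline:
+      - 'astar-collator'
+```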
+
+### Gmail setup
+
+To allow AlertManager to send an email to you, you will need to generate something called an `app password` in your Gmail account. For details, click [here](https://support.google.com/accounts/answer/185833?hl=en) to follow the whole setup.
+
+You should see something similar to the image below:
+
+
+
+
+
+### Alert Manager
+
+A configuration file named `alertmanager.yml` exists inside the directory that you extracted earlier, but we will not use it. Instead, we will create our own `alertmanager.yml` under `/etc/alertmanager` with the following config.
+
+Let’s create the file:
+
+```
+sudo touch /etc/alertmanager/alertmanager.yml
+sudo nano /etc/alertmanager/alertmanager.yml
+```
+
+And add the **Gmail configuration** to it and save the file:
+
+```yaml
+global:
+  resolve_timeout: 1m
+
+route:
+  receiver: 'gmail-notifications'
+
+receivers:
+  - name: 'gmail-notifications'
+    email_configs:
+      - to: YOUR_EMAIL
+        from: YOUR_EMAIL
+        smarthost: smtp.gmail.com:587
+        auth_username: YOUR_EMAIL
+        auth_identity: YOUR_EMAIL
+        auth_password: YOUR_APP_PASSWORD
+        send_resolved: true
+```
+
+With the above configuration, alerts will be sent using the email account you set above. Remember to replace `YOUR_EMAIL` with your email address and `YOUR_APP_PASSWORD` with the app password you saved earlier. We will test the Alert Manager later in the guide.
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/create_environnement.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/create_environnement.md
new file mode 100644
index 0000000..5c72267
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/create_environnement.md
@@ -0,0 +1,16 @@
+---
+sidebar_position: 1
+---
+
+# 1. Create Your Environment
+
+## Overview
+
+A collator deploys its node on a remote server. You can choose your preferred provider of dedicated servers. Generally speaking, we recommend selecting a provider/server in your region, as this increases the decentralization of the network.
+You can choose your preferred operating system, though we highly recommend Linux.
+
+:::tip
+Shibuya is the perfect network to test your knowledge about running nodes in the Astar ecosystem. To join the collator set on Shibuya, you need to apply for a 32k SBY fund.
+
+If you have never operated a collator node, we strongly encourage you to spin up a **Shibuya collator** node before thinking about the mainnet.
+:::
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/index.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/index.md
new file mode 100644
index 0000000..46e247b
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/index.md
@@ -0,0 +1,10 @@
+# Ultimate Beginners Guide
+
+**Beginners Guide** is broken down into the following pages:
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/launch_services.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/launch_services.md
new file mode 100644
index 0000000..e2f6ca8
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/launch_services.md
@@ -0,0 +1,65 @@
+---
+sidebar_position: 8
+---
+
+# 8. Launch Services
+
+## Launch Services
+
+Launch a **daemon reload** to take the services into account in `systemd`:
+
+```
+sudo systemctl daemon-reload
+```
+
+Start the services:
+
+```
+sudo systemctl start prometheus.service
+sudo systemctl start node_exporter.service
+sudo systemctl start process-exporter.service
+sudo systemctl start alertmanager.service
+sudo systemctl start grafana-server
+```
+
+And check that they are working fine, one by one:
+
+```
+systemctl status prometheus.service
+systemctl status node_exporter.service
+systemctl status process-exporter.service
+systemctl status alertmanager.service
+systemctl status grafana-server
+```
+
+A service working fine should look like this:
+
+
+
+
+
+When everything is okay, activate the services!
+
+```
+sudo systemctl enable prometheus.service
+sudo systemctl enable node_exporter.service
+sudo systemctl enable process-exporter.service
+sudo systemctl enable alertmanager.service
+sudo systemctl enable grafana-server
+```
+
+## Test Alert manager
+
+Run this command to fire an alert:
+
+```
+curl -H "Content-Type: application/json" -d '[{"labels":{"alertname":"Test"}}]' localhost:9093/api/v1/alerts
+```
+
+Check your inbox, you have a surprise:
+
+
+
+
+
+You will always receive a **Firing** alert first, then a **Resolved** notification to indicate the alert isn’t active anymore.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/node_monitoring.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/node_monitoring.md
new file mode 100644
index 0000000..ad63029
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/node_monitoring.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 5
+---
+
+# 5. Node Monitoring
+
+## Installation of Node Monitoring
+
+In this chapter, we will walk you through the setup of local monitoring for your collator node.
+
+## Installation
+
+Make sure you download the latest releases. Please check [Prometheus](https://prometheus.io/download/), [Process exporter](https://github.com/ncabatoff/process-exporter/releases), and [Grafana](https://grafana.com/grafana/download) download pages. We will continue to update this guide but recommend that you verify that you are installing the latest versions.
+
+There are 7 steps to install these packages:
+
+* Download
+* Extract
+* Move the files to `/usr/local/bin`
+* Create dedicated users
+* Create directories
+* Change the ownership of those directories
+* Cleanup
+
+### Prometheus
+
+```sh
+#download files
+wget https://github.com/prometheus/prometheus/releases/download/v2.33.4/prometheus-2.33.4.linux-amd64.tar.gz
+
+#extract
+tar xvf prometheus-*.tar.gz
+
+#move the files to /usr/local/bin
+sudo cp ./prometheus-2.33.4.linux-amd64/prometheus /usr/local/bin/
+sudo cp ./prometheus-2.33.4.linux-amd64/promtool /usr/local/bin/
+sudo cp -r ./prometheus-2.33.4.linux-amd64/consoles /etc/prometheus
+sudo cp -r ./prometheus-2.33.4.linux-amd64/console_libraries /etc/prometheus
+
+#create dedicated users
+sudo useradd --no-create-home --shell /usr/sbin/nologin prometheus
+
+#create directories
+sudo mkdir /var/lib/prometheus
+
+#change the ownership
+sudo chown prometheus:prometheus /etc/prometheus/ -R
+sudo chown prometheus:prometheus /var/lib/prometheus/ -R
+sudo chown prometheus:prometheus /usr/local/bin/prometheus
+sudo chown prometheus:prometheus /usr/local/bin/promtool
+
+#cleanup
+rm -rf ./prometheus*
+```
+
+### Node Exporter
+
+```sh
+#download files
+wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
+
+#extract
+tar xvf node_exporter-*.tar.gz
+
+#move the files to /usr/local/bin
+sudo cp ./node_exporter-1.3.1.linux-amd64/node_exporter /usr/local/bin/
+
+#create dedicated users
+sudo useradd --no-create-home --shell /usr/sbin/nologin node_exporter
+
+#change the ownership
+sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
+
+#cleanup
+rm -rf ./node_exporter*
+```
+
+### Process Exporter
+
+```sh
+#download files
+wget https://github.com/ncabatoff/process-exporter/releases/download/v0.7.10/process-exporter-0.7.10.linux-amd64.tar.gz
+
+#extract
+tar xvf process-exporter-*.tar.gz
+
+#move the files to /usr/local/bin
+sudo cp ./process-exporter-0.7.10.linux-amd64/process-exporter /usr/local/bin/
+
+#create dedicated users
+sudo useradd --no-create-home --shell /usr/sbin/nologin process-exporter
+
+#create directories
+sudo mkdir /etc/process-exporter
+
+#change the ownership
+sudo chown process-exporter:process-exporter /etc/process-exporter -R
+sudo chown process-exporter:process-exporter /usr/local/bin/process-exporter
+
+#cleanup
+rm -rf ./process-exporter*
+```
+
+### Alert Manager
+
+```sh
+#download files
+wget https://github.com/prometheus/alertmanager/releases/download/v0.23.0/alertmanager-0.23.0.linux-amd64.tar.gz
+
+#extract
+tar xvf alertmanager-*.tar.gz
+
+#move the files to /usr/local/bin
+sudo cp ./alertmanager-0.23.0.linux-amd64/alertmanager /usr/local/bin/
+sudo cp ./alertmanager-0.23.0.linux-amd64/amtool /usr/local/bin/
+
+#create dedicated users
+sudo useradd --no-create-home --shell /usr/sbin/nologin alertmanager
+
+#create directories
+sudo mkdir /etc/alertmanager
+sudo mkdir /var/lib/alertmanager
+
+#change the ownership
+sudo chown alertmanager:alertmanager /etc/alertmanager/ -R
+sudo chown alertmanager:alertmanager /var/lib/alertmanager/ -R
+sudo chown alertmanager:alertmanager /usr/local/bin/alertmanager
+sudo chown alertmanager:alertmanager /usr/local/bin/amtool
+
+#cleanup
+rm -rf ./alertmanager*
+```
+
+### Grafana
+
+```sh
+sudo apt-get install -y adduser libfontconfig1
+wget https://dl.grafana.com/oss/release/grafana_8.4.2_amd64.deb
+sudo dpkg -i grafana_8.4.2_amd64.deb
+
+sudo grafana-cli plugins install camptocamp-prometheus-alertmanager-datasource
+sudo systemctl restart grafana-server
+
+#cleanup
+rm -rf ./grafana*
+```
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/secure_connection.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/secure_connection.md
new file mode 100644
index 0000000..4515f44
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/secure_connection.md
@@ -0,0 +1,184 @@
+---
+sidebar_position: 2
+---
+
+# 2. Secure SSH Connection
+
+**SSH access** is the most common attack vector for an online server. Countless bots and attackers scan the default port 22 and try to gain access using both basic and elaborate credentials.
+
+In this part, we are going to build a **secure SSH connection with strong SSH keys.** We will **change the default SSH port** to mitigate scans and brute-force attempts.
+
+We will use the [`curve25519-sha256`](https://git.libssh.org/projects/libssh.git/tree/doc/curve25519-sha256@libssh.org.txt) protocol (**ECDH over Curve25519 with SHA2**) for our keys, as it is considered among the most secure options available today.
+
+:::info
+This part is a modified version of [bLd's guide](https://medium.com/bld-nodes/securing-ssh-access-to-your-server-cc1324b9adf6) using only **Putty** client to access server. If you are using Linux or MacOS, you can refer directly to the original guide to use **Open SSH**.
+:::
+
+## Configuration
+
+:::info
+Follow this guide step-by-step. We recommend that you try to understand every step explained in this guide.
+:::
+
+:::caution
+Be very careful to never close your actual session until you’ve tested the connection with your new key. You could lose access to your SSH connection.
+:::
+
+Connect to your server using [PuTTy](https://www.chiark.greenend.org.uk/\~sgtatham/putty/latest.html).
+
+1. Open PuTTY
+2. Type the IP of your server on Azure in the field called ‘**Host Name**’.
+3. The terminal will open and you can log in with your username and password.
+
+Let’s start by moving our current unsecured host keys into a backup directory:
+
+```
+cd /etc/ssh
+sudo mkdir backup
+sudo mv ssh_host_* ./backup/
+```
+
+Open the `ssh` config file:
+
+```
+sudo nano /etc/ssh/ssh_config
+```
+
+Add the following lines in the `Host *` section and save:
+
+```
+Host *
+ KexAlgorithms curve25519-sha256@libssh.org
+ HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-ed25519
+ Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
+ MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
+ PasswordAuthentication no
+ ChallengeResponseAuthentication no
+ PubkeyAuthentication yes
+ UseRoaming no
+```
+
+Save `CTRL+O` your file and close the editor `CTRL+X`
+
+Open the `sshd` config file:
+
+```
+sudo nano /etc/ssh/sshd_config
+```
+
+:::info
+In the following lines, you see that we use port 4321. This is just an example. You can use any random port within the range of 1024 to 49151. Copy these lines in your file:
+:::
+
+```
+Port 22
+Port 4321
+KexAlgorithms curve25519-sha256@libssh.org
+HostKey /etc/ssh/ssh_host_ed25519_key
+Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
+MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
+AllowGroups ssh-user
+PubkeyAuthentication yes
+PasswordAuthentication no
+ChallengeResponseAuthentication no
+```
+
+Save `CTRL+O` your file and close the editor `CTRL+X` .
+
+Now, what did we just do? In detail, we told the host to:
+
+* Use port 4321 instead of the default 22: please use a different random port in the range 1024–49151
+* Use the `curve25519` protocol for authentication
+* Use `chacha20-poly1305` (preferred), `aes-gcm` and `aes-ctr` ciphers for data
+* Enable _Message Authentication Code (MAC)_ for CTR ciphers
+* Allow the ssh group `ssh-user`
+* Enable key authentication
+* Disable password access
+
+:::info
+Here we left the line `Port 22` for the first test on the new port. Once your tests are successful, we will remove this line.
+:::
+
+Then, create the new SSH host key:
+
+```
+sudo ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N ""
+```
+
+Create the SSH user group and add your user to it. This will prevent connections from any unexpected user:
+
+```
+sudo groupadd ssh-user
+sudo usermod -a -G ssh-user {USERNAME}
+```
+
+**Note**: replace `{USERNAME}` with the user that you use on your server.
+
+### Firewall
+
+Before continuing, it is very important to open the newly configured SSH port (4321 in our example) in your server's firewall settings. For the first tests, leave port 22 open. Once you have successfully connected on the new port, you can safely close port 22.
+
+### Generate SSH keys
+
+:::info
+This guide is built around Azure and PuTTy, in case you want to use OpenSSH follow [this guide](https://medium.com/bld-nodes/securing-ssh-access-to-your-server-cc1324b9adf6).
+:::
+
+Open PUTTYGen GUI:
+
+
+
+
+
+Select the `Ed25519` key type and click on _Generate_:
+
+
+
+
+
+Enter a strong passphrase and save both private and public key in a secure folder. Copy the public key from the text box.
+
+Go back to the PuTTy session on your **server** and open the `authorized_keys` file.
+
+```
+sudo nano ~/.ssh/authorized_keys
+```
+
+Paste the public key and save.
+
+### Verify
+
+Let’s restart the `ssh` service without killing the current session:
+
+```
+sudo kill -SIGHUP $(pgrep -f 'sshd -D')
+```
+
+**Attention**: do not fully restart `sshd` for the moment; doing so would close your open session, and you could lose access to your server if something is set up wrong.
+
+Check that the `sshd` service is still running correctly:
+
+```
+systemctl status sshd
+```
+
+### Connect
+
+Let’s load the private key in the Putty `Auth` section:
+
+
+
+
+
+Don’t forget to use your custom port, then connect:
+
+
+
+
+
+Congratulations, **your SSH connection is secure**!
+
+:::info
+Don’t forget to remove port 22 from `sshd_config` file and firewall, and check that no other key is allowed in `authorized_keys` file.
+:::
+
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/services.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/services.md
new file mode 100644
index 0000000..d2cfdba
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/services.md
@@ -0,0 +1,125 @@
+---
+sidebar_position: 7
+---
+
+# 7. Services
+
+## Systemd
+
+Starting all of these programs manually is a pain, so in this section we are going to create the `systemd` services.
+
+Creating these services sets up a fully **automated process** that you will never have to repeat, even **if your node reboots**.
+
+:::tip
+Please set all the services provided here.
+:::
+
+### Prometheus
+
+```
+sudo touch /etc/systemd/system/prometheus.service
+sudo nano /etc/systemd/system/prometheus.service
+```
+
+```
+[Unit]
+ Description=Prometheus Monitoring
+ Wants=network-online.target
+ After=network-online.target
+
+[Service]
+ User=prometheus
+ Group=prometheus
+ Type=simple
+ ExecStart=/usr/local/bin/prometheus \
+ --config.file /etc/prometheus/prometheus.yml \
+ --storage.tsdb.path /var/lib/prometheus/ \
+ --web.console.templates=/etc/prometheus/consoles \
+ --web.console.libraries=/etc/prometheus/console_libraries
+ ExecReload=/bin/kill -HUP $MAINPID
+
+[Install]
+ WantedBy=multi-user.target
+```
+
+### Node exporter
+
+```
+sudo touch /etc/systemd/system/node_exporter.service
+sudo nano /etc/systemd/system/node_exporter.service
+```
+
+```
+[Unit]
+ Description=Node Exporter
+ Wants=network-online.target
+ After=network-online.target
+
+[Service]
+ User=node_exporter
+ Group=node_exporter
+ Type=simple
+ ExecStart=/usr/local/bin/node_exporter
+
+[Install]
+ WantedBy=multi-user.target
+```
+
+### Process exporter
+
+```
+sudo touch /etc/systemd/system/process-exporter.service
+sudo nano /etc/systemd/system/process-exporter.service
+```
+
+```
+[Unit]
+ Description=Process Exporter
+ Wants=network-online.target
+ After=network-online.target
+
+[Service]
+ User=process-exporter
+ Group=process-exporter
+ Type=simple
+ ExecStart=/usr/local/bin/process-exporter \
+ --config.path /etc/process-exporter/config.yml
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Alert manager
+
+```
+sudo touch /etc/systemd/system/alertmanager.service
+sudo nano /etc/systemd/system/alertmanager.service
+```
+
+```
+[Unit]
+ Description=AlertManager Server Service
+ Wants=network-online.target
+ After=network-online.target
+
+[Service]
+ User=alertmanager
+ Group=alertmanager
+ Type=simple
+ ExecStart=/usr/local/bin/alertmanager \
+ --config.file /etc/alertmanager/alertmanager.yml \
+ --storage.path /var/lib/alertmanager \
+ --web.external-url=http://localhost:9093 \
+ --cluster.advertise-address='0.0.0.0:9093'
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Grafana
+
+**Grafana’s service** is automatically created during the installation of the `deb` package, so you do not need to create it manually.
+
+
+Now it's getting exciting! We are going to fire up everything. If you encounter errors in a file, go back to the previous sections and check whether you missed anything.
+If you cannot identify the issue, join our Discord. We will provide you with support there.
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/ssh_tunneling.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/ssh_tunneling.md
new file mode 100644
index 0000000..e48f7d6
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/ssh_tunneling.md
@@ -0,0 +1,35 @@
+---
+sidebar_position: 3
+---
+
+# 3. SSH Tunneling
+
+**Grafana** runs an **HTTP** server on your node, so we shouldn't access it directly from the outside.
+
+**SSH tunneling** is considered to be a safe way to make traffic transit from your node to your local computer (or even phone). The principle is to make the SSH client listen to a specific port on your local machine, **encrypt traffic through SSH** protocol, and forward it to the target port on your node.
+
+
+
+
+
+Of course, you could also configure Grafana to run an HTTPS server, but we do not want to expose another open port. Since our traffic will be encrypted with SSH, **we do not need HTTPS**.
+
+Once we have finished installing Grafana on our node, we will access it through this address on our local machine: `http://localhost:2022`
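+
+For reference, on Linux or macOS the same tunnel can be opened with a single OpenSSH command. This is a sketch: `{USERNAME}`, `{SERVER_IP}` and the custom SSH port `4321` from the earlier chapter are placeholders to replace with your own values:
+
+```sh
+# Listen on local port 2022 and forward the traffic, encrypted over SSH,
+# to Grafana's port 3000 on the node (-N: do not run a remote command):
+ssh -N -L 2022:localhost:3000 {USERNAME}@{SERVER_IP} -p 4321
+```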
+
+As PuTTy is a very popular client available on many operating systems and used in this guide, here is where you can configure local port forwarding. If you want to use OpenSSH, please follow [this guide](https://bldstackingnode.medium.com/monitoring-substrate-node-polkadot-kusama-parachains-validator-guide-922734ea4cdb#3351).
+
+
+
+
+
+Inside the SSH | Tunnel’s menu, just add the local port and destination then click _Add_.
+
+* `2022` is the local port we arbitrarily chose (please use a different unused local port in the range 1024–49151)
+* `3000` is Grafana’s port
+
+:::tip
+Don’t forget to save the session.
+:::
+
diff --git a/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/start_monitoring.md b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/start_monitoring.md
new file mode 100644
index 0000000..388b354
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/start_monitoring.md
@@ -0,0 +1,115 @@
+---
+sidebar_position: 9
+---
+
+# 9. Run Monitor Dashboard
+
+## Run Grafana dashboard
+
+Now we get to the most visual part: the **monitoring dashboard**.
+
+From the browser on your local machine, connect to the custom forwarded port on localhost that we set up earlier in this guide:
+
+```
+http://localhost:2022
+```
+
+
+
+
+
+Enter the default user `admin` and password `admin` then change the password.
+
+
+
+
+
+### Add data Sources
+
+Open the _Settings_ menu:
+
+
+
+
+
+Click on _Data Sources_:
+
+
+
+
+
+Click on Add data source:
+
+
+
+
+
+Select Prometheus:
+
+
+
+
+
+Fill in the URL with `http://localhost:9090` and click _Save & Test_.
+Then add a new data source and search for Alert Manager.
+
+
+
+
+
+Fill in the URL with `http://localhost:9093` and click _Save & Test_.
+
+
+
+
+
+Now you have your 2 data sources set up like this:
+
+
+
+
+
+
+### Import the dashboard
+
+Open the _New_ menu:
+
+
+
+
+
+Click on _Import_:
+
+
+
+
+
+Select our favorite [dashboard 13840](https://grafana.com/grafana/dashboards/13840). We recommend this dashboard because it was created by one of our Ambassadors, and we don't want to fork it. All credit goes to him.
+
+
+
+
+
+Select the Prometheus and AlertManager sources and click _Import_.
+
+
+
+
+
+In the dashboard selection, make sure you select:
+
+* **Chain Metrics**: `polkadot` for a Polkadot/Kusama node or `substrate` for any other parachain node
+* **Chain Instance Host:** `localhost:9615` to point to the chain data scraper
+* **Chain Process Name**: the name of your node binary
+
+And there you go, everything is set!
+
+Monitoring dashboard [Polkadot Essentials](https://grafana.com/grafana/dashboards/13840)
+
+
+
+
+
+Easy, right? Consider saving the dashboard once the parameters are set and working.
+
+**Note**: you can also consider [Parity’s dashboards](https://github.com/paritytech/substrate/tree/master/scripts/ci/monitoring/grafana-dashboards) for advanced monitoring and analysis.
diff --git a/docs/build/build-on-layer-1/nodes/collator/spinup_collator.md b/docs/build/build-on-layer-1/nodes/collator/spinup_collator.md
new file mode 100644
index 0000000..e8340f8
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/collator/spinup_collator.md
@@ -0,0 +1,153 @@
+---
+sidebar_position: 3
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Spin up a Collator
+
+:::caution
+Collators are responsible for network stability; it is very important to be able to react at any time of the day or night in case of trouble. We strongly encourage collators to set up a monitoring and alerting system; learn more about this in our [secure setup guide](/docs/build/build-on-layer-1/nodes/collator/secure_setup_guide/index.md).
+:::
+
+### Service Parameters
+
+
+
+
+```sh
+./astar-collator \
+ --collator \
+ --name {COLLATOR_NAME} \
+ --chain astar \
+ --base-path /var/lib/astar \
+ --pruning archive \
+ --trie-cache-size 0 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+```sh
+./astar-collator \
+ --collator \
+ --name {COLLATOR_NAME} \
+ --chain shiden \
+ --base-path /var/lib/astar \
+ --pruning archive \
+ --trie-cache-size 0 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+```sh
+./astar-collator \
+ --collator \
+ --name {COLLATOR_NAME} \
+ --chain shibuya \
+ --base-path /var/lib/astar \
+ --pruning archive \
+ --trie-cache-size 0 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+### Verify synchronization
+
+Before jumping to the next steps, you have to wait until your node is **fully synchronized**. This can take a long time depending on the chain height. Please note that syncing to one of our networks requires the node to sync with both the parachain and the relay chain.
+
+Check the current synchronization:
+
+```
+journalctl -f -u shibuya-node -n100
+```
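You can also ask the node directly over its RPC endpoint whether it is still syncing, using the standard `system_health` method. A minimal sketch, assuming the default RPC port 9944; it parses a canned response of the expected shape so the check is clear:

```shell
# On a live node, obtain the response with:
#   curl -s -H "Content-Type: application/json" \
#     -d '{"id":1, "jsonrpc":"2.0", "method": "system_health", "params":[]}' \
#     http://localhost:9944
# Here we parse a canned response of the same shape.
RESPONSE='{"jsonrpc":"2.0","result":{"peers":25,"isSyncing":false,"shouldHavePeers":true},"id":1}'

# "isSyncing": false together with a non-zero peer count means the node is fully synced.
IS_SYNCING=$(echo "$RESPONSE" | grep -o '"isSyncing":[a-z]*' | cut -d: -f2)
echo "isSyncing=$IS_SYNCING"
```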
+
+### Session Keys
+
+#### Author session keys
+
+Run the following command to author session keys:
+
+```
+curl -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "author_rotateKeys", "params":[]}' http://localhost:9944
+```
+
+The result will look like this (you just need to copy the key):
+
+```
+{"jsonrpc":"2.0","result":"0x600e6cea49bdeaab301e9e03215c0bcebab3cafa608fe3b8fb6b07a820386048","id":1}
+```
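Before registering the keys on-chain in the next step, you can optionally verify they were stored in the node's keystore with the `author_hasSessionKeys` RPC method. A sketch, assuming the same local endpoint; the key below is the example value from above:

```shell
# Build the author_hasSessionKeys request from the key returned by author_rotateKeys.
SESSION_KEYS="0x600e6cea49bdeaab301e9e03215c0bcebab3cafa608fe3b8fb6b07a820386048"
PAYLOAD=$(printf '{"id":1, "jsonrpc":"2.0", "method": "author_hasSessionKeys", "params":["%s"]}' "$SESSION_KEYS")
echo "$PAYLOAD"

# Send it to the node; a result of true means the keys are present in the keystore:
#   curl -H "Content-Type: application/json" -d "$PAYLOAD" http://localhost:9944
```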
+
+#### Set session keys
+
+Go to the Polkadot.js portal and connect to the respective network (Astar, Shiden, or Shibuya).
+
+Go to _**Developer > Extrinsic**_ and select your **collator account** and extrinsic type: _**session / setKeys**_.
+
+Enter the **session keys** and set the proof to `0x00`:
+
+
+
+
+
+Submit the transaction.
+
+### Identity
+
+#### Set identity
+
+On the Polkadot.js portal select _**Accounts**_.
+
+Open the 3 dots next to your collators address: **Set on-chain Identity**:
+
+
+
+
+
+Enter all fields you want to set:
+
+
+
+
+
+Send the transaction.
+
+#### Request judgment
+
+On the Polkadot.js portal select _**Developer > Extrinsic**_.
+
+Select your **collator account** and extrinsic type: _**identity / requestJudgment**_.
+
+Send the transaction.
+
+### Bond funds
+
+To start collating, you need to have **32 000 SDN** tokens for Shiden or **3 200 000 ASTR** tokens for Astar.
+
+On the Polkadot.js portal select _**Developer > Extrinsic**_.
+
+Select your **collator account** and extrinsic type: _**CollatorSelection / registerAsCandidate**_:
+
+
+
+
+
+Submit the transaction.
+
+### Production blocks
+
+:::info
+Onboarding takes place at `n+1` session.
+:::
+
+Once your collator is active, you will see your name inside **Network** tab every time you produce a block:
+
+
+
+
diff --git a/docs/build/build-on-layer-1/nodes/evm-tracing-node.md b/docs/build/build-on-layer-1/nodes/evm-tracing-node.md
new file mode 100644
index 0000000..3abdfa3
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/evm-tracing-node.md
@@ -0,0 +1,172 @@
+---
+sidebar_position: 7
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Run an EVM Tracing Node
+
+## Overview
+
+Running a tracing node on an Astar chain allows you to debug EVM transactions and gain enhanced access to the transaction pool using the EVM debug RPC (INSERT_LINK).
+
+## Requirements
+
+Requirements for running a tracing node are similar to what we recommend for an archive node. Read more about this [here](/docs/build/build-on-layer-1/nodes/archive-node/index.md).
+
+
+## Node launch
+
+Tracing node setup in general is similar to the [Archive Node setup](/docs/build/build-on-layer-1/nodes/archive-node/index.md), except for the location of the binary and some additional launch flags.
+
+:::info
+
+An EVM tracing node binary is different because it includes additional tracing features. You can easily build it from source using `cargo build --release --features evm-tracing` command or download the `evm-tracing-artifacts` from [latest release](https://github.com/AstarNetwork/Astar/releases/latest), an executable EVM tracing binary is included in the compressed file `evm-tracing-artifacts.tar.gz`.
+
+:::
+
+:::important
+
+EVM RPC calls are disabled by default, and require the `--enable-evm-rpc` flag to be enabled. Please refer to this page (INSERT_LINK) for more info.
+
+:::
+
+### Runtime overriding
+
+Tracing runtimes include an additional debug API that makes deep (and slow) transaction debugging possible. For this reason it is not part of production runtimes, so to use tracing features, the runtime must be overridden with a special `tracing` runtime.
+
+For example, if the current runtime is `astar-52`, then the `astar-runtime-52-substitute-tracing.wasm` blob should be used to override it and debug recent transactions. The tracing runtime is published in the release assets as `evm-tracing-artifacts`; please check the [latest release here](https://github.com/AstarNetwork/Astar/releases/latest).
+
+To override the runtime, create a folder somewhere the node can access, by default `/var/lib/astar/wasm`, and copy the overriding runtimes into it.
+This folder is cumulative: you can keep all previous runtimes in the same place to be able to trace historical data.
+
+```
+mkdir /var/lib/astar/wasm
+cp astar-runtime-52-substitute-tracing.wasm /var/lib/astar/wasm
+chown -hR astar /var/lib/astar/wasm
+```
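The override filename follows the pattern `{spec}-runtime-{version}-substitute-tracing.wasm`, so the blob matching your current runtime can be derived mechanically. A hypothetical helper, not part of the release tooling:

```shell
# Derive the tracing override filename for a given runtime spec name and version.
SPEC_NAME=astar
SPEC_VERSION=52
WASM_FILE="${SPEC_NAME}-runtime-${SPEC_VERSION}-substitute-tracing.wasm"
echo "copy $WASM_FILE into /var/lib/astar/wasm"
```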
+
+Once the wasm blob is in place, add the `--wasm-runtime-overrides=/var/lib/astar/wasm` flag to the node launch command and restart the service. If all goes well, the node will pick up the tracing runtime and substitute it for the on-chain version.
+
+:::important
+
+Tracing data at a certain block requires overriding the runtime version of that block.
+To use tracing on older blocks, you need to add the runtime that was in place at that block.
+
+:::
+
+## Service parameters
+
+The service file for a tracing node will look like this:
+
+:::tip
+Please make sure to change **{NODE_NAME}**
+:::
+
+
+
+
+```sh
+[Unit]
+Description=Astar EVM tracing node
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --chain astar \
+ --state-pruning archive \
+ --blocks-pruning archive \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --base-path /var/lib/astar \
+ --rpc-methods Safe \
+ --rpc-max-request-size 10 \
+ --rpc-max-response-size 50 \
+ --enable-evm-rpc \
+ --ethapi=txpool,debug,trace \
+ --wasm-runtime-overrides /var/lib/astar/wasm \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```sh
+[Unit]
+Description=Shiden EVM tracing node
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --chain shiden \
+ --state-pruning archive \
+ --blocks-pruning archive \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --base-path /var/lib/astar \
+ --rpc-methods Safe \
+ --rpc-max-request-size 10 \
+ --rpc-max-response-size 10 \
+ --enable-evm-rpc \
+ --ethapi=txpool,debug,trace \
+ --wasm-runtime-overrides /var/lib/astar/wasm \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```sh
+[Unit]
+Description=Shibuya EVM tracing node
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --chain shibuya \
+ --base-path /var/lib/astar \
+ --state-pruning archive \
+ --blocks-pruning archive \
+ --rpc-methods Safe \
+ --rpc-max-request-size 10 \
+ --rpc-max-response-size 10 \
+ --enable-evm-rpc \
+ --ethapi=txpool,debug,trace \
+ --wasm-runtime-overrides /var/lib/astar/wasm \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
diff --git a/docs/build/build-on-layer-1/nodes/full-node.md b/docs/build/build-on-layer-1/nodes/full-node.md
new file mode 100644
index 0000000..2a98911
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/full-node.md
@@ -0,0 +1,37 @@
+---
+sidebar_position: 3
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Run a Full Node
+
+## Overview
+
+Running a full node on an Astar chain allows you to connect to the network, sync with a bootnode, obtain local access to RPC endpoints, author blocks on the parachain, and more.
+
+Unlike an archive node, a full node discards all finalized blocks older than a configured number of blocks (256 by default).
+A full node therefore occupies less storage space than an archive node because of pruning.
+
+A full node may eventually be able to rebuild the entire chain with no additional information and become an archive node, but at the time of writing this is not implemented. If you need to query historical blocks older than those you have pruned, you need to purge your database and resync your node in archive mode. Alternatively, you can use a backup or snapshot from a trusted source to avoid syncing from genesis, and only sync the blocks produced after that snapshot. (reference: https://wiki.polkadot.network/docs/maintain-sync#types-of-nodes)
+
+If your node needs to provide data for old historical blocks, please consider using an archive node instead.
+
+## Requirements
+
+Requirements for running any node are similar to what we recommend for an archive node. Read more about this [here](/docs/build/build-on-layer-1/nodes/archive-node/index.md).
+Note that a full node requires less disk space: the hard disk requirement for an archive node does not apply to full nodes.
+
+To set up a full node, you need to specify the number of recent blocks to keep:
+```
+--pruning 1000 \
+```
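For reference, a full-node service file looks like the archive-node one shown elsewhere in these docs, with the pruning flag swapped in. A minimal sketch for the Astar chain, assuming the binary location and paths used throughout this guide:

```sh
[Unit]
Description=Astar Full node

[Service]
User=astar
Group=astar

ExecStart=/usr/local/bin/astar-collator \
  --pruning 1000 \
  --rpc-cors all \
  --name {NODE_NAME} \
  --chain astar \
  --base-path /var/lib/astar \
  --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \

Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```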
+
+:::info
+Running a node for our testnet 'Shibuya' requires fewer resources, making it a perfect place to test your node infrastructure and costs.
+:::
+
+:::important
+EVM RPC calls are disabled by default, and require an additional flag to be enabled. Please refer to this page [INSERT LINK] for more info.
+:::
diff --git a/docs/build/build-on-layer-1/nodes/index.md b/docs/build/build-on-layer-1/nodes/index.md
new file mode 100644
index 0000000..9146c5f
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/index.md
@@ -0,0 +1,10 @@
+# Node Operators
+
+The **Node Operators** section is broken down into the following pages:
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
diff --git a/docs/build/build-on-layer-1/nodes/node-commands.md b/docs/build/build-on-layer-1/nodes/node-commands.md
new file mode 100644
index 0000000..b8d3609
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/node-commands.md
@@ -0,0 +1,380 @@
+---
+sidebar_position: 5
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Node Commands
+
+The following sections summarize the Astar node commands you need for different cases.
+For more details, you can consult the help page:
+```
+astar-collator --help
+```
+
+---
+
+## Collator
+### Binary service file
+
+
+
+```
+[Unit]
+Description=Astar Collator
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --collator \
+ --name {COLLATOR_NAME} \
+ --chain astar \
+ --base-path /var/lib/astar \
+ --trie-cache-size 0 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```
+[Unit]
+Description=Shiden Collator
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --collator \
+ --name {COLLATOR_NAME} \
+ --chain shiden \
+ --base-path /var/lib/astar \
+ --trie-cache-size 0 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```
+[Unit]
+Description=Shibuya Collator
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --collator \
+ --name {COLLATOR_NAME} \
+ --chain shibuya \
+ --base-path /var/lib/astar \
+ --trie-cache-size 0 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+### Docker
+
+
+
+
+```
+docker run -d \
+--name astar-container \
+-u $(id -u astar):$(id -g astar) \
+-p 30333:30333 \
+-v "/var/lib/astar/:/data" \
+staketechnologies/astar-collator:latest \
+astar-collator \
+--pruning archive \
+--collator \
+--name {COLLATOR_NAME} \
+--chain astar \
+--base-path /data \
+--trie-cache-size 0 \
+--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+```
+docker run -d \
+--name shiden-container \
+-u $(id -u astar):$(id -g astar) \
+-p 30333:30333 \
+-v "/var/lib/astar/:/data" \
+staketechnologies/astar-collator:latest \
+astar-collator \
+--pruning archive \
+--collator \
+--name {COLLATOR_NAME} \
+--chain shiden \
+--base-path /data \
+--trie-cache-size 0 \
+--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+```
+docker run -d \
+--name shibuya-container \
+-u $(id -u astar):$(id -g astar) \
+-p 30333:30333 \
+-v "/var/lib/astar/:/data" \
+staketechnologies/astar-collator:latest \
+astar-collator \
+--pruning archive \
+--collator \
+--name {COLLATOR_NAME} \
+--chain shibuya \
+--base-path /data \
+--trie-cache-size 0 \
+--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+---
+
+## Archive node as RPC endpoint
+### Binary
+
+
+
+
+```
+[Unit]
+Description=Astar Archive node
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --chain astar \
+ --base-path /var/lib/astar \
+ --rpc-external \
+ --rpc-methods Safe \
+ --rpc-max-request-size 1 \
+ --rpc-max-response-size 1 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```
+[Unit]
+Description=Shiden Archive node
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --chain shiden \
+ --base-path /var/lib/astar \
+ --rpc-external \
+ --rpc-methods Safe \
+ --rpc-max-request-size 1 \
+ --rpc-max-response-size 1 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```
+[Unit]
+Description=Shibuya Archive node
+
+[Service]
+User=astar
+Group=astar
+
+ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --chain shibuya \
+ --base-path /var/lib/astar \
+ --rpc-external \
+ --rpc-methods Safe \
+ --rpc-max-request-size 1 \
+ --rpc-max-response-size 1 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+### Docker
+
+
+
+
+```
+docker run -d \
+--name astar-container \
+-u $(id -u astar):$(id -g astar) \
+-p 30333:30333 \
+-p 9944:9944 \
+-v "/var/lib/astar/:/data" \
+staketechnologies/astar-collator:latest \
+astar-collator \
+--pruning archive \
+--rpc-cors all \
+--name {NODE_NAME} \
+--chain astar \
+--base-path /data \
+--rpc-external \
+--rpc-methods Safe \
+--rpc-max-request-size 1 \
+--rpc-max-response-size 1 \
+--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+```
+docker run -d \
+--name shiden-container \
+-u $(id -u astar):$(id -g astar) \
+-p 30333:30333 \
+-p 9944:9944 \
+-v "/var/lib/astar/:/data" \
+staketechnologies/astar-collator:latest \
+astar-collator \
+--pruning archive \
+--rpc-cors all \
+--name {NODE_NAME} \
+--chain shiden \
+--base-path /data \
+--rpc-external \
+--rpc-methods Safe \
+--rpc-max-request-size 1 \
+--rpc-max-response-size 1 \
+--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+```
+docker run -d \
+--name shibuya-container \
+-u $(id -u astar):$(id -g astar) \
+-p 30333:30333 \
+-p 9944:9944 \
+-v "/var/lib/astar/:/data" \
+staketechnologies/astar-collator:latest \
+astar-collator \
+--pruning archive \
+--rpc-cors all \
+--name {NODE_NAME} \
+--chain shibuya \
+--base-path /data \
+--rpc-external \
+--rpc-methods Safe \
+--rpc-max-request-size 1 \
+--rpc-max-response-size 1 \
+--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+```
+
+
+
+
+---
+
+## Specific cases command args
+
+### EVM management
+
+Enable EVM methods on an RPC node:
+```
+--enable-evm-rpc
+```
+
+Enable EVM debug log
+```
+--ethapi=debug
+```
+
+Enable EVM tracing log
+```
+--ethapi=txpool,debug,trace
+--wasm-runtime-overrides /var/lib/astar/wasm
+```
+
+### External monitoring
+```
+--prometheus-external
+```
+
+---
+
+## Full command documentation
+To see the node binary's full embedded command documentation, use the help option:
+```
+$ ./astar-collator -h
+```
+
+The node process will be launched with parachain ID 2006 for Astar, 2007 for Shiden, or 1000 for Shibuya.
+Parachain ID info for each network can be found [here](/docs/build/build-on-layer-1/environment/endpoints.md).
diff --git a/docs/build/build-on-layer-1/nodes/node-maintenance.md b/docs/build/build-on-layer-1/nodes/node-maintenance.md
new file mode 100644
index 0000000..70b5a98
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/node-maintenance.md
@@ -0,0 +1,94 @@
+---
+sidebar_position: 6
+---
+
+# Node Maintenance
+
+## Backup
+
+Maintaining a backup node that is in sync with a collator is vital to ensuring continuous and uninterrupted block production, and avoiding the possibility of being slashed. We highly recommend locating a backup node in a different physical location, with a different provider.
+
+Note: The collator session keys are stored in `/var/lib/astar/chains/{NETWORK}/keystore`.
+
+:::info
+
+You may need to install the rsync package depending on your distro (using `sudo apt-get install rsync` or similar)
+
+:::
+
+:::caution
+
+Ensure you create a backup of the keystore folder on your local machine using the following command:
+
+`rsync --rsync-path="sudo rsync" -r {MAIN_SERVER_IP}:/var/lib/astar/chains/{NETWORK}/keystore .`
+
+:::
+
+### In case of an incident on the main collator
+
+On the **backup collator server**, stop the collator service and remove keys:
+
+```sh
+sudo systemctl stop {NETWORK}.service
+sudo rm -rf /var/lib/astar/chains/{NETWORK}/keystore/*
+```
+
+On your **local machine**, from your **backup directory**, copy the keys into the keystore folder of the backup server:
+
+```sh
+rsync --rsync-path="sudo rsync" -r ./keystore {BACKUP_SERVER_IP}:/var/lib/astar/chains/{NETWORK}
+```
+
+On the **backup collator server**, update permission of the ``astar`` directory and restart the collator service:
+
+```sh
+sudo chown -R astar:astar /var/lib/astar/
+sudo systemctl start {NETWORK}.service
+```
+
+## Get node logs
+
+To get the last 100 lines from the node logs, use the following command:
+
+```sh
+journalctl -fu astar-collator -n100
+```
+
+## Upgrade node
+
+When a node upgrade is necessary, node operators are notified with instructions in the [Astar Dev Announcement Telegram](https://t.me/+cL4tGZiFAsJhMGJk), Astar Discord (INSERT_LINK), and [The Astar Node Upgrade Element channel](https://matrix.to/#/#shiden-runtime-ann:matrix.org). Join and follow any of these channels to receive news about node updates and node upgrades.
+
+Download the [latest release](https://github.com/AstarNetwork/Astar/releases/latest) from Github:
+
+```sh
+wget $(curl -s https://api.github.com/repos/AstarNetwork/Astar/releases/latest | grep "tag_name" | awk '{print "https://github.com/AstarNetwork/Astar/releases/download/" substr($2, 2, length($2)-3) "/astar-collator-v" substr($2, 3, length($2)-4) "-ubuntu-x86_64.tar.gz"}')
+tar -xvf astar-collator*.tar.gz
+```
+
+Move the new release binary and restart the service:
+
+```sh
+sudo mv ./astar-collator /usr/local/bin
+sudo chmod +x /usr/local/bin/astar-collator
+sudo systemctl restart {NETWORK}.service
+```
+
+## Purge node
+
+:::danger
+**Never purge the chain data on an active collator**, or it will not produce blocks during the sync process, reducing the block production rate of the chain.
+Instead, switch to your backup node and *only* purge the chain data after the backup is **actively collating**.
+:::
+
+To start a node from scratch without any existing chain data, simply wipe the chain data directory:
+
+```sh
+sudo systemctl stop {NETWORK}.service
+sudo rm -R /var/lib/astar/chains/{NETWORK}/db/*
+sudo systemctl start {NETWORK}.service
+```
+
+## Snapshot
+
+Please refer to the [snapshot page](/docs/build/build-on-layer-1/nodes/snapshots.md).
diff --git a/docs/build/build-on-layer-1/nodes/rpi-cheat-sheet.md b/docs/build/build-on-layer-1/nodes/rpi-cheat-sheet.md
new file mode 100644
index 0000000..bff8e82
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/rpi-cheat-sheet.md
@@ -0,0 +1,128 @@
+---
+title: Raspberry Pi Node
+sidebar_position: 8
+---
+
+# Raspberry Pi configuration for Astar node
+
+This is a simple readme containing the main steps to set up a Raspberry Pi running an Astar node.
+
+Requirements:
+- Raspberry Pi 4 with 2 GB RAM minimum, 4 GB recommended
+- A USB hard drive (preferably SSD) of 1 TB
+- Internet connection (Wi-Fi is fine)
+- An SD card
+
+## Setup Raspberry Pi to boot from USB disk
+
+A Raspberry Pi natively loads its OS from a MicroSD card.
+Since we need a hard drive to store the blockchain database, we start by configuring the Raspberry Pi to boot the OS from a USB disk.
+
+Download Raspberry Pi Imager: https://www.raspberrypi.com/software/
+
+Insert the SD card
+
+Start RPi imager
+
+Choose OS > Misc Utility Images > Bootloader > USB Boot
+
+Choose storage > select the SD card
+
+Write
+
+Insert the SD card into the Raspberry Pi
+
+Plug the Pi and wait for 10-20 seconds after the green light blinks constantly
+
+Turn off the Pi and remove the SD card
+
+## Install OS
+
+Plug in the USB hard drive
+
+Start RPi imager
+
+Choose OS > Other general-purpose OS > Ubuntu > Ubuntu Server 22.04.2 LTS (64-bit)
+
+Choose storage > select the USB disk
+
+If the advanced menu doesn't show up, open it with Ctrl + Shift + X
+
+Set the hostname, enable SSH, and configure the user and wireless LAN
+
+Write
+
+Plug the USB drive into the Pi and turn it on
+
+## Configure the Raspberry Pi
+
+SSH to the Pi from your computer
+- On Linux/Mac: `ssh user@pi_name.local`
+- On Windows, you will need an SSH client like PuTTY
+
+Check that the `/` partition uses the full size of the disk: `lsblk`
+
+Update and upgrade the OS with the latest packages: `sudo apt update && sudo apt upgrade`
+
+Install the required packages: `sudo apt install -y adduser libfontconfig1`
+
+To prevent out-of-memory issues, create a swap file:
+
+ sudo fallocate -l 4g /file.swap
+ sudo chmod 600 /file.swap
+ sudo mkswap /file.swap
+ sudo swapon /file.swap
+
+Add the swap file to fstab so that the swap is loaded on reboot: `echo '/file.swap none swap sw 0 0' | sudo tee -a /etc/fstab`
+
+## Install Astar node
+
+Download and unarchive the ARM binary:
+
+ wget $(curl -s https://api.github.com/repos/AstarNetwork/Astar/releases/latest | grep "tag_name" | awk '{print "https://github.com/AstarNetwork/Astar/releases/download/" substr($2, 2, length($2)-3) "/astar-collator-v" substr($2, 3, length($2)-4) "-ubuntu-aarch64.tar.gz"}') && tar -xvf astar-collator*.tar.gz
+
+Create a dedicated user for the node and move the node binary:
+
+ sudo useradd --no-create-home --shell /usr/sbin/nologin astar
+ sudo mv ./astar-collator /usr/local/bin
+ sudo chmod +x /usr/local/bin/astar-collator
+
+Create a dedicated directory for the chain storage data: `sudo mkdir /var/lib/astar && sudo chown astar:astar /var/lib/astar`
+
+Create the Astar service file, changing the name {NODE_NAME}:
+
+    sudo nano /etc/systemd/system/astar.service
+
+ [Unit]
+ Description=Astar Archive node
+
+ [Service]
+ User=astar
+ Group=astar
+
+ ExecStart=/usr/local/bin/astar-collator \
+ --pruning archive \
+ --rpc-cors all \
+ --name {NODE_NAME} \
+ --chain astar \
+ --base-path /var/lib/astar \
+ --rpc-external \
+ --ws-external \
+ --rpc-methods Safe \
+ --rpc-max-request-size 1 \
+ --rpc-max-response-size 1 \
+ --telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
+
+ Restart=always
+ RestartSec=10
+
+ [Install]
+ WantedBy=multi-user.target
+
+Save the file: Ctrl+O > Yes
+
+Start the service: `sudo systemctl start astar.service`
+
+Check the node log to ensure proper syncing: `journalctl -f -u astar.service -n100`
+
+Enable the service: `sudo systemctl enable astar.service`
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/nodes/snapshots.md b/docs/build/build-on-layer-1/nodes/snapshots.md
new file mode 100644
index 0000000..078f3f3
--- /dev/null
+++ b/docs/build/build-on-layer-1/nodes/snapshots.md
@@ -0,0 +1,44 @@
+---
+sidebar_position: 4
+---
+
+# Snapshots
+
+Generally speaking, using database snapshots is discouraged; it is best practice to synchronize the database from scratch.
+In some particular cases, however, it may be necessary to use a parachain snapshot. Stakecraft provides archive db snapshots for Astar and Shiden at .
+Note: these are archive snapshots only, and they don't work on a pruned node.
+
+## Stakecraft snapshots usage
+
+```sh
+# remove your Astar database directory in case you already have one
+rm -rf {BASE_PATH}/chains/{CHAIN}/db
+
+# in case you haven't started a node yet, you need to make the following dir
+mkdir -p {BASE_PATH}/chains/{CHAIN}/db/full
+
+# browse the directory
+cd {BASE_PATH}/chains/{CHAIN}/db/full
+
+# download latest snapshot
+wget -O - {STAKECRAFT_WEBSITE_SNAPSHOT} | tar xf -
+
+# pay attention to file ownership if needed
+chown -R astar:astar {BASE_PATH}/chains/{CHAIN}/db/full
+
+```
+
+Note: `{BASE_PATH}` is the path specified for chain data in the node command
+* The best practice is to set it to `/var/lib/astar`
+* The default path, if you don't specify one, is `~/.local/share/astar-collator`
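To make the placeholders concrete, this is how they expand for an Astar node using the recommended base path (illustrative values; the chain directory name may differ on your setup):

```shell
# Expand the snapshot path placeholders for an Astar node at the recommended base path.
BASE_PATH=/var/lib/astar
CHAIN=astar
DB_DIR="$BASE_PATH/chains/$CHAIN/db/full"
echo "$DB_DIR"
```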
+
+## Relay chain
+
+Since the introduction of warp sync, using a relay chain snapshot is unnecessary and discouraged.
+Warp sync downloads finality proofs and state first, which allows the relay node to be up with the data necessary for the parachain node in less than 15 minutes.
+
+To sync the relay chain in warp mode, just add this at the end of the node command:
+
+```sh
+-- --sync warp
+```
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/_category_.json b/docs/build/build-on-layer-1/smart-contracts/EVM/_category_.json
new file mode 100644
index 0000000..482a196
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "EVM Smart contracts",
+ "position": 4
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/astarbase.md b/docs/build/build-on-layer-1/smart-contracts/EVM/astarbase.md
new file mode 100644
index 0000000..5acc0a2
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/astarbase.md
@@ -0,0 +1,138 @@
+---
+sidebar_position: 9
+---
+
+# AstarBase
+
+A few important facts about the Astar ecosystem:
+
+- The majority of crowdloan participants used their Polkadot native addresses (ss58 format), and they also make up the majority of dApp staking participants.
+- Although Astar has built a Wasm smart contract platform, most dApps still use the Ethereum Virtual Machine (EVM) and address format (H160), native to MetaMask accounts.
+- dApp staking, which simultaneously provides a basic income for developers and a staking mechanism for users, is a system from which both dApp developers and users benefit.
+
+AstarBase aims to:
+
+- Bring more users to EVM dApps;
+- Create more opportunities for end users to participate in the Astar ecosystem;
+- Allow EVM dApps to attract attention and reward users, even though their funds may be illiquid, and locked in dApp Staking;
+- Encourage users to stake and use the dApp Staking mechanism.
+
+AstarBase is an on-chain EVM database containing a mapping between a user's EVM (H160) and Astar native (ss58) addresses. Such a mapping on its own does not bring any value to ecosystem projects, since anyone can register an address pair, but the `checkStakerStatus()` call, which checks whether the ss58 address of the pair is an active staker, does.
+The AstarBase contracts are available on each of the Shibuya/Shiden/Astar networks, and deployment addresses can be found in the [AstarBase github repository](https://github.com/AstarNetwork/astarbase/blob/main/contract/deployment-info.md).
+
+There are three functions that can be used to interact with AstarBase:
+
+- `isRegistered()` checks to see if the given address is registered in AstarBase
+
+```
+function isRegistered(address evmAddress)
+ external view
+ returns (bool);
+```
+
+- `checkStakerStatus()` checks whether a pair of addresses (ss58, evm) is an active staker in dApp staking, and returns the staked amount
+
+```
+function checkStakerStatus(address evmAddress)
+ external view
+ returns (uint128);
+```
+
+- `checkStakerStatusOnContract()` checks whether a pair of addresses (ss58, evm) is an active staker in dApp staking on the specified contract, and returns the staked amount
+
+```
+function checkStakerStatusOnContract(address evmAddress, address stakingContract)
+ external view
+ returns (uint128);
+```
+
+The interface file `IAstarBase.sol` can be found in the [ERC20 example](https://github.com/AstarNetwork/astarbase/tree/main/contract/example).
+
+## How to Use AstarBase From the Client Side
+
+The `abi` for the contract can be found in the [AstarBase Github repository](https://github.com/AstarNetwork/astarbase/tree/main/public/config).
+
+The following is an example usage of AstarBase from the client side:
+
+```js
+if (metamaskIsInstalled) {
+ Web3EthContract.setProvider(ethereum);
+ try {
+
+ const smartContract = new Web3EthContract(
+ abi,
+ CONFIG.ASTARBASE_ADDRESS
+ );
+
+ const stakerStatus = await smartContract.methods.checkStakerStatus(user).call();
+ const isRegistered = await smartContract.methods.isRegistered(user).call();
+
+ return isRegistered && stakerStatus > 0;
+ } catch (err) {
+ console.log(err);
+ return false;
+ }
+} else {
+ console.log('Install Metamask.');
+}
+```
+
+### How to Determine the Native Address From an H160 Address
+
+To read the address mapping perform the following:
+
+```js
+const abi = [
+ "function addressMap(address evmAddress) public view returns (bytes native)"
+];
+const contract = new ethers.Contract(providerRPC.astarPass.contract, abi, provider);
+const native = await contract.addressMap(evmAddress);
+console.log(native);
+```
+
+The complete script to read the address mapping is in the example folder of the [GitHub repo](https://github.com/AstarNetwork/astarbase/tree/main/contract/example).
+
+## How to Use AstarBase From the Contract Side
+
+The following is an example usage for when an EVM contract wants to check dApp staking status for an H160 address:
+
+```sol
+import "./IAstarBase.sol";
+contract A {
+ // Deployed on Shibuya
+ AstarBase public ASTARBASE = AstarBase(0xF183f51D3E8dfb2513c15B046F848D4a68bd3F5D);
+ ...
+
+ function stakedAmount(address user) private view returns (uint128) {
+
+ // The returned value from checkStakerStatus() call is the staked amount
+ return ASTARBASE.checkStakerStatus(user);
+ }
+}
+```
+
+## Example Use Case: Discount Price on an NFT
+
+In the [minting-dapp Github repository](https://github.com/AstarNetwork/minting-dapp/blob/main/contract/contracts/ShidenPass_flat.sol) you will find an example NFT minting dApp, which uses AstarBase to mint a free NFT for active dApp stakers. The same example could easily be adapted to issue a discount price instead of a free NFT.
+
+## Example Use Case: Permissioned Claim for an ERC20 Airdrop
+
+A new project coming to the Astar ecosystem would like to attract users by issuing an ERC20 token airdrop, but only wants to qualify users who are active participants in the ecosystem, not one-time users who will disappear after the airdrop is claimed. AstarBase can be used to create a permissioned airdrop claim and make it available only to dApp stakers.
+
+`if (ASTARBASE.checkStakerStatus(user) > 0) { ... }`
+
+## Example Use Case: Rewards for Participants
+
+A project is using dApp staking as basic income and would like to reward participants who are specifically staking on their dApp. Since those stakers use their native Astar (ss58) address, and the project is based on EVM, there is no direct way to issue EVM-based rewards. AstarBase, though, gives them the opportunity to do so, as long as the stakers are registered for an AstarPass.
+
+`if (ASTARBASE.checkStakerStatusOnContract(evmAddress, stakingContract) > 0) { ... }`
+
+See the [example ERC20 contract](https://github.com/AstarNetwork/astarbase/tree/main/contract/example) on GitHub, which mints rewards only to stakers of a specified contract.
+
+## Example Use Case: Bot Protection
+
+There is no absolute protection against bots, but at the very least their activity can be disincentivized. In this example, registered accounts need to hold the minimum staked amount in dApp staking, and there is also a configurable unbonding period. This eliminates a bot's ability to create an unlimited number of addresses in order to claim rewards or buy NFTs: to reap your project's rewards, bots are forced to actively stake at least the minimum amount.
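The economics can be sketched with made-up numbers (illustrative values, not actual dApp staking parameters): every sybil address must lock at least the minimum stake, and that capital stays committed through the unbonding period, so the attack cost grows linearly with the number of addresses.

```javascript
// Illustrative sketch of the sybil disincentive. minStake and unbondingDays
// are made-up values, not actual dApp staking parameters.
function sybilCapitalLocked(numAddresses, minStake) {
  // tokens a bot must hold staked at any one moment
  return numAddresses * minStake;
}

function sybilTokenDays(numAddresses, minStake, unbondingDays) {
  // token-days of capital committed before any address can exit
  return sybilCapitalLocked(numAddresses, minStake) * unbondingDays;
}

console.log(sybilCapitalLocked(1000, 50)); // 50000 tokens locked for 1000 addresses
```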
+
+## ECDSA Address Registration
+
+Besides the ss58 address scheme, [ECDSA](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm) addresses are also supported.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/Truffle.md b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/Truffle.md
new file mode 100644
index 0000000..c1490b4
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/Truffle.md
@@ -0,0 +1,40 @@
+# Truffle
+
+### Create an Ethereum Account
+
+We recommend using the `@truffle/hdwallet-provider` package for key management. Instructions can be found [here](https://github.com/trufflesuite/truffle/blob/develop/packages/hdwallet-provider/README.md).
+
+### Add Networks to `truffle-config.js`
+
+To deploy and interact with Astar, modify `networks` in `truffle-config.js` to include Astar's networks:
+
+```js
+// truffle-config.js
+module.exports = {
+ networks: {
+ // ... any existing networks (development, test, etc.)
+
+ // Shibuya faucet: use #shibuya-faucet room in https://discord.gg/astarnetwork
+ shibuya: {
+ url: "https://evm.shibuya.astar.network",
+ network_id: 81,
+ },
+
+ // Astar community faucet (please don't abuse): https://as-faucet.xyz/en/astar#
+ astar: {
+ url: "https://evm.astar.network",
+ network_id: 592,
+ },
+
+ // Shiden community faucet (please don't abuse): https://as-faucet.xyz/en/shiden#
+ shiden: {
+ url: "https://evm.shiden.astar.network",
+ network_id: 336,
+ },
+ },
+
+ // ...
+};
+```
+
+Deploy/Migrate by running `truffle migrate --network shibuya`, replacing `shibuya` with your chosen network. If `--network` is not specified, the network values under `development` will be used.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/_category_.json b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/_category_.json
new file mode 100644
index 0000000..f6a400a
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Developer Tooling",
+ "position": 5
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/banana.md b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/banana.md
new file mode 100644
index 0000000..11cf980
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/banana.md
@@ -0,0 +1,341 @@
+---
+sidebar_position: 3
+---
+# Banana SDK
+
+## Introduction
+In this tutorial we will show how to integrate Banana Wallet into your JavaScript or TypeScript-based frontend, and demonstrate how to create a new wallet or connect an existing Banana Wallet on any dApp on Astar Network.
+
+
+## Prerequisites
+
+ - Basic JavaScript/TypeScript knowledge.
+ - Enthusiasm to build an amazing dApp on Astar.
+
+## Getting started
+
+### Step 1: Create a new repo with create-react-app
+Create a new React project named `banana-sdk-demo` and move into its directory:
+```shell
+npx create-react-app banana-sdk-demo
+cd banana-sdk-demo
+```
+
+### Step 2: Install the Banana SDK package
+
+Install the `@rize-labs/banana-wallet-sdk` package with npm or yarn:
+
+```shell
+npm install @rize-labs/banana-wallet-sdk
+# or
+yarn add @rize-labs/banana-wallet-sdk
+```
+
+### Step 3: Smart Contract deployment
+
+For this demo we will be using a very basic smart contract with only two functionalities:
+
+- Make a transaction to the blockchain by changing the value of a state variable.
+- Fetch the value of the state variable.
+
+The smart contract code:
+
+```sol
+pragma solidity ^0.8.12;
+
+contract Sample {
+
+ uint public stakedAmount = 0;
+
+ function stake() external payable {
+ stakedAmount = stakedAmount + msg.value;
+ }
+
+ function returnStake() external {
+ payable(0x48701dF467Ba0efC8D8f34B2686Dc3b0A0b1cab5).transfer(stakedAmount);
+ }
+}
+```
+
+You can deploy the contract on Shibuya Testnet using [Remix](https://remix.ethereum.org/) or another tool of your choice.
+
+For this demo, we have already deployed it at `0xCC497f137C3A5036C043EBd62c36F1b8C8A636C0`.
+
+### Step 4: Building the front end
+
+We will have a simple frontend with some buttons to interact with the blockchain. Although the Banana SDK provides you with a smart contract wallet, you don't need to worry about its deployment. Everything is handled inside the SDK, so you can concentrate on building your dApp.
+
+![](https://hackmd.io/_uploads/ryPnrYEPh.png)
+
+For more information about building the frontend please refer to this [guide](https://banana-wallet-docs.rizelabs.io/integration/sdk-integration-tutorial/banana-less-than-greater-than-shibuya#building-the-frontend).
+
+### Step 5: Imports
+
+```js
+import "./App.css";
+import { Banana, Chains } from '@rize-labs/banana-wallet-sdk';
+import { useEffect, useState } from "react";
+import { ethers } from "ethers";
+import { SampleAbi } from "./SampleAbi";
+```
+
+Download [App.css](https://github.com/Banana-Wallet/banana-tutorial/blob/feat/chaido-tutorial/src/App.css) and [SampleAbi.js](https://github.com/Banana-Wallet/banana-tutorial/blob/feat/chaido-tutorial/src/SampleAbi.js).
+
+Initialize some state for the demo:
+
+```js
+const [walletAddress, setWalletAddress] = useState("");
+const [bananaSdkInstance, setBananSdkInstance] = useState(null);
+const [isLoading, setIsLoading] = useState(false);
+const [walletInstance, setWalletInstance] = useState(null);
+const [output, setOutput] = useState("Welcome to Banana Demo");
+const SampleContractAddress = "0xCB8a3Ca479aa171aE895A5D2215A9115D261A566";
+```
+
+### Step 6: Initializing Banana SDK instance and creating methods
+
+```js
+// calling it in useEffect
+
+useEffect(() => {
+ getBananaInstance();
+}, []);
+
+ const getBananaInstance = () => {
+ const bananaInstance = new Banana(Chains.shibuyaTestnet);
+ setBananSdkInstance(bananaInstance);
+ };
+```
+
+For simplicity, in this example we create an SDK instance for the Shibuya testnet.
+
+Creating Wallet
+
+```js
+const createWallet = async () => {
+ // starts loading
+ setIsLoading(true);
+
+ // creating wallet
+ const wallet = await bananaSdkInstance.createWallet();
+ setWalletInstance(wallet);
+
+ // getting address for wallet created
+ const address = await wallet.getAddress();
+ setWalletAddress(address);
+ setOutput("Wallet Address: " + address);
+ setIsLoading(false);
+ };
+
+```
+
+Developers call the `createWallet` method, which prompts the user for a wallet name. Once a name is provided, the wallet is initialized for the user, and the method returns an instance of the wallet.
+
+Connecting wallet
+
+```js
+const connectWallet = async () => {
+
+ // check whether a wallet name is cached in a cookie
+ const walletName = bananaSdkInstance.getWalletName();
+
+ // if cached we will use it
+ if (walletName) {
+ setIsLoading(true);
+
+ // connect wallet with cached wallet name
+ const wallet = await bananaSdkInstance.connectWallet(walletName);
+ setWalletInstance(wallet);
+
+ // extracting wallet address for display purpose
+ const address = await wallet.getAddress();
+ setWalletAddress(address);
+ setOutput("Wallet Address: " + address);
+ setIsLoading(false);
+ } else {
+ setIsLoading(false);
+ alert("You don't have wallet created!");
+ }
+ };
+
+```
+When the user's wallet is created, its public data is cached in a cookie. `getWalletName` fetches the `walletName` from that cookie; we then pass it into `connectWallet`, which initializes and configures some wallet parameters internally and returns a wallet instance.
+
+Get ChainId
+
+```js
+ const getChainId = async () => {
+ setIsLoading(true);
+ const signer = walletInstance.getSigner();
+ const chainid = await signer.getChainId();
+ setOutput(JSON.stringify(chainid));
+ setIsLoading(false);
+ };
+```
+Getting the `chainId` is straightforward: extract the *signer* from the wallet and use `getChainId` to obtain the `chainId` of the current network.
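To make the returned value meaningful, you can compare it against the EVM chain IDs of the Astar networks (the same IDs used in the Truffle and Hardhat network configs in these docs):

```javascript
// EVM chain IDs for the Astar networks, as listed in the network configs.
const ASTAR_CHAIN_IDS = {
  592: "Astar",
  336: "Shiden",
  81: "Shibuya",
};

function networkNameFromChainId(chainId) {
  return ASTAR_CHAIN_IDS[chainId] ?? "unknown network";
}

console.log(networkNameFromChainId(81)); // "Shibuya"
```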
+
+Get Network
+
+```js
+ const getNetwork = async () => {
+ setIsLoading(true);
+ const provider = walletInstance.getProvider();
+ const network = await provider.getNetwork();
+ setOutput(JSON.stringify(network));
+ setIsLoading(false);
+ };
+```
+
+Extracting the network is as easy as it looks. Developers should extract the *provider* from the wallet and use the `getNetwork` method to obtain the chain info.
+
+Make transaction
+
+```js
+ const makeTransaction = async () => {
+ setIsLoading(true);
+
+ // getting signer
+ const signer = walletInstance.getSigner();
+ const amount = "0.00001";
+ const tx = {
+ gasLimit: "0x55555",
+ to: SampleContractAddress,
+ value: ethers.utils.parseEther(amount),
+ data: new ethers.utils.Interface(SampleAbi).encodeFunctionData(
+ "stake",
+ []
+ ),
+ };
+
+ try {
+ // sending txn object via signer
+ const txn = await signer.sendTransaction(tx);
+ setOutput(JSON.stringify(txn));
+ } catch (err) {
+ console.log(err);
+ }
+ setIsLoading(false);
+ };
+```
+
+To initiate a transaction, create a transaction object, extract the *signer* from the wallet instance, and pass the object to the signer's *sendTransaction* method.
+Note: make sure your wallet is funded before initiating transactions.
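`ethers.utils.parseEther` scales the decimal ether string to wei (1 ether = 10^18 wei). Conceptually it does something like the following simplified sketch (plain decimal strings only; ethers itself does far more validation):

```javascript
// Simplified sketch of what parseEther does: scale a decimal ether
// amount to an integer number of wei. Handles only "whole.frac" strings.
function parseEtherSketch(amount) {
  const [whole, frac = ""] = amount.split(".");
  const fracPadded = (frac + "0".repeat(18)).slice(0, 18); // right-pad to 18 digits
  return BigInt(whole || "0") * 10n ** 18n + BigInt(fracPadded);
}

console.log(parseEtherSketch("0.00001")); // 10000000000000n (10^13 wei)
```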
+
+Signing message
+
+```js
+ const signMessage = async () => {
+ setIsLoading(true);
+ const sampleMsg = "Hello World";
+ const signer = walletInstance.getSigner();
+ const signMessageResponse = await signer.signBananaMessage(sampleMsg);
+ setOutput(JSON.stringify(signMessageResponse));
+ setIsLoading(false);
+ };
+```
+
+Signing a message is as simple as it looks. Pass the message to be signed, and the method returns an object `{ messageSigned, signature }`:
+
+- `messageSigned`: the message that was signed.
+
+- `signature`: the signature for the signed message.
+
+### Step 7: Building the frontend
+
+The JSX for the frontend:
+
+```jsx
+<div className="app">
+  <h1>Banana SDK Demo</h1>
+  {walletAddress && <p>Wallet Address: {walletAddress}</p>}
+
+  <button onClick={createWallet}>Create Wallet</button>
+  <button onClick={connectWallet}>Connect Wallet</button>
+  <button onClick={getChainId}>Get ChainId</button>
+  <button onClick={getNetwork}>Get Network</button>
+  <button onClick={makeTransaction}>Make Transaction</button>
+  <button onClick={signMessage}>Sign Message</button>
+
+  <h2>Output Panel</h2>
+
+  <p>{isLoading ? "Loading.." : output}</p>
+</div>
+```
+
+## Troubleshooting
+
+If you are facing a webpack 5 polyfill issue, try using `react-app-rewired`.
+
+```shell
+npm install react-app-rewired
+
+npm install stream-browserify constants-browserify crypto-browserify os-browserify path-browserify process buffer ethers@^5.7.2
+```
+
+Create a file named `config-overrides.js` with the following content:
+```js
+const { ProvidePlugin } = require("webpack");
+
+module.exports = {
+ webpack: function (config, env) {
+ config.module.rules = config.module.rules.map(rule => {
+ if (rule.oneOf instanceof Array) {
+ rule.oneOf[rule.oneOf.length - 1].exclude = [/\.(js|mjs|jsx|cjs|ts|tsx)$/, /\.html$/, /\.json$/];
+ }
+ return rule;
+ });
+ config.resolve.fallback = {
+ ...config.resolve.fallback,
+ stream: require.resolve("stream-browserify"),
+ buffer: require.resolve("buffer"),
+ crypto: require.resolve("crypto-browserify"),
+ process: require.resolve("process"),
+ os: require.resolve("os-browserify"),
+ path: require.resolve("path-browserify"),
+ constants: require.resolve("constants-browserify"),
+ fs: false
+ }
+ config.resolve.extensions = [...config.resolve.extensions, ".ts", ".js"]
+ config.ignoreWarnings = [/Failed to parse source map/];
+ config.plugins = [
+ ...config.plugins,
+ new ProvidePlugin({
+ Buffer: ["buffer", "Buffer"],
+ }),
+ new ProvidePlugin({
+ process: ["process"]
+ }),
+ ]
+ return config;
+ },
+}
+```
+Change the scripts in `package.json` to use `react-app-rewired` instead of `react-scripts`:
+
+```
+react-scripts start -> react-app-rewired start
+react-scripts build -> react-app-rewired build
+react-scripts test -> react-app-rewired test
+```
+
+If you are still unable to resolve the issue, please post your query in the Banana Discord [here](https://discord.gg/3fJajWBT3N).
+
+
+## Learn more
+
+To learn more about Banana Wallet, head over to the [Banana docs](https://banana-wallet-docs.rizelabs.io/).
+
+The full tutorial code is available [here](https://github.com/Banana-Wallet/banana-tutorial/tree/feat/shibuya-tutorial).
+
+If your dApp already uses RainbowKit, you can use Banana Wallet directly on the Shibuya testnet. Please refer [here](https://docs.bananahq.io/integration/wallet-connectors/using-rainbowkit) for more information.
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/hardhat.md b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/hardhat.md
new file mode 100644
index 0000000..e07708a
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/hardhat.md
@@ -0,0 +1,115 @@
+---
+sidebar_position: 1
+---
+# Hardhat
+
+### Initialize Your Project
+
+If you're starting your Hardhat project from scratch, we recommend you read the [Hardhat Quick Start](https://hardhat.org/getting-started/#quick-start) page.
+
+### Setting up Your Account
+
+The quickest way to get Hardhat to deploy contracts to a non-local testnet is to export and use an existing MetaMask account.
+
+To get an account's private key from MetaMask:
+
+1. Open MetaMask.
+2. Select the account you want to export.
+3. Click the three dots on the right side.
+4. Select "Account Details".
+5. Select "Export Private Key".
+6. Enter your password and select "Confirm".
+
+You should see a 64-character hex string similar to the following:
+
+`60ed0dd24087f00faea4e2b556c74ebfa2f0e705f8169733b01530ce4c619883`
+
+Create a new file in your root folder called `private.json` containing your private key (never commit this file to version control):
+
+```json
+{
+ "privateKey": "60ed0dd24087f00faea4e2b556c74ebfa2f0e705f8169733b01530ce4c619883"
+}
+```
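A raw private key is 32 bytes, i.e. 64 hex characters (with or without a `0x` prefix). A quick sanity check (a hypothetical helper, not part of Hardhat) can catch copy-paste mistakes before they surface as confusing signing errors:

```javascript
// Hypothetical helper: check that a string looks like a raw 32-byte hex key.
function looksLikePrivateKey(key) {
  const hex = key.startsWith("0x") ? key.slice(2) : key;
  return /^[0-9a-fA-F]{64}$/.test(hex);
}

console.log(looksLikePrivateKey(
  "60ed0dd24087f00faea4e2b556c74ebfa2f0e705f8169733b01530ce4c619883"
)); // true
console.log(looksLikePrivateKey("not-a-key")); // false
```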
+
+Modify your `hardhat.config.js` file to include:
+
+```js
+// hardhat.config.js
+
+// ...
+
+const { privateKey } = require("./private.json");
+
+// ...
+
+module.exports = {
+ // ...
+
+ networks: {
+ // Shibuya faucet: use #shibuya-faucet room in https://discord.gg/astarnetwork
+ shibuya: {
+ url: "https://evm.shibuya.astar.network",
+ chainId: 81,
+ accounts: [privateKey],
+ },
+
+ // Astar community faucet (please don't abuse): https://as-faucet.xyz/en/astar#
+ astar: {
+ url: "https://evm.astar.network",
+ chainId: 592,
+ accounts: [privateKey],
+ },
+
+ // Shiden community faucet (please don't abuse): https://as-faucet.xyz/en/shiden#
+ shiden: {
+ url: "https://evm.shiden.astar.network",
+ chainId: 336,
+ accounts: [privateKey],
+ },
+ },
+};
+```
+
+Once your accounts are funded, you can deploy the sample contract to Shibuya with `npx hardhat run --network shibuya scripts/deploy.js`.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/img/thirdweb-explore.png b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/img/thirdweb-explore.png
new file mode 100644
index 0000000..5c189ee
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/img/thirdweb-explore.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/index.md b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/index.md
new file mode 100644
index 0000000..165ab7a
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/index.md
@@ -0,0 +1,10 @@
+# Developer Tooling
+
+Deploying and interacting with EVM-based smart contracts on Astar is as easy as any other EVM-compatible network. Getting started requires just two steps:
+
+1. Configuring (and funding) your Ethereum account on the respective network.
+2. Adding Astar networks to your Ethereum client.
+
+:::caution
+For Astar and Shiden applications, we _highly_ recommend running your own network node and not relying on our RPC endpoints. This further decentralizes the network, and puts you in control of your uptime requirements.
+:::
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/own-RPC.md b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/own-RPC.md
new file mode 100644
index 0000000..e493150
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/own-RPC.md
@@ -0,0 +1,24 @@
+---
+sidebar_position: 4
+---
+# Your Own RPC Server
+
+For EVM developers and projects, it is not an unreasonable expectation that they should run their own managed EVM endpoints. Relying on public endpoints can introduce additional risk due to centralization or improper maintenance, and makes them single points of failure.
+
+:::note
+The Astar team highly recommends that projects use and maintain their own EVM endpoints.
+:::
+
+Launching an Astar Network endpoint is easy.
+
+:::note
+The EVM RPC server is disabled by default. To enable it, append the `--enable-evm-rpc` flag to the launch string.
+:::
+
+```shell
+astar-collator --chain=shiden --enable-evm-rpc --unsafe-rpc-external
+```
+
+The launch string above will start an Astar collator on the Shiden network and open a WS/HTTP endpoint on port `9944`.
+
+We also recommend paying attention to the `--rpc-max-connections` parameter. By default this value is relatively small, so it may be beneficial to increase it to a few thousand.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/privy.mdx b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/privy.mdx
new file mode 100644
index 0000000..ae10a0b
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/privy.mdx
@@ -0,0 +1,151 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Privy
+
+## Introduction
+
+Privy is the easiest way for web3 developers to onboard their users, regardless of whether they already have wallets, across mobile and desktop. Privy offers [embedded wallets](https://www.privy.io/features/wallets) so you can seamlessly provision self-custodial wallets for users who sign in via email or social login, as well as [powerful connectors](https://www.privy.io/features/connectors) for web3 native users who prefer to sign in with their existing wallets. It’s one library to onboard all users, regardless of where they are in their web3 journey.
+
+Developers building end-user facing applications in the Astar ecosystem can leverage Privy to expand their addressable market, improve onboarding funnel conversion and better understand their users. **For a limited time, Astar developers can get free unlimited access to Privy’s features for their first three months using the product, by reaching out to us at astar@privy.io**. For more information on Privy, check out the [website](http://privy.io), [API docs](http://docs.privy.io), [product demo](http://demo.privy.io), and sample customer integrations ([Lighthouse](http://lighthouse.world/), [Courtyard](http://courtyard.io/), and [Shibuya](http://shibuya.xyz/)).
+
+## Prerequisites
+
+To use Privy in your app, you'll need to:
+
+- Have basic knowledge of JavaScript and React
+- Use React 18 in your app
+- Use only EVM-compatible networks for any on-chain actions
+
+## Getting started
+
+### Step 1
+
+Request API keys by reaching out to our team at astar@privy.io to ensure you’re able to access Privy’s special offer of three free months of unlimited software use. We'll set you up with a [Privy app ID](https://docs.privy.io/guide/console/api-keys#app-id) that you can use to initialize the SDK.
+
+### Step 2
+
+Install the **[Privy React Auth SDK](https://www.npmjs.com/package/@privy-io/react-auth)** using `npm`:
+
+`npm install @privy-io/react-auth@latest`
+
+### Step 3
+
+Once you have your app ID and have installed the SDK, **in your React project, wrap your components with a [PrivyProvider](https://docs.privy.io/reference/react-auth/modules#privyprovider)**. The [PrivyProvider](https://docs.privy.io/reference/react-auth/modules#privyprovider) should wrap any component that will use the Privy SDK.
+
+For example, in a [NextJS](https://nextjs.org/) or [Create React App](https://create-react-app.dev/) project, you may wrap your components like so:
+
+<Tabs>
+<TabItem value="nextjs" label="NextJS">
+
+```tsx title=_app.jsx
+import type {AppProps} from 'next/app';
+import Head from 'next/head';
+import {PrivyProvider} from '@privy-io/react-auth';
+
+// This method will be passed to the PrivyProvider as a callback
+// that runs after successful login.
+const handleLogin = (user) => {
+  console.log(`User ${user.id} logged in!`)
+}
+
+function MyApp({Component, pageProps}: AppProps) {
+  return (
+    <>
+      <Head>
+        {/* Edit your HTML header */}
+      </Head>
+      <PrivyProvider appId="your-privy-app-id" onSuccess={handleLogin}>
+        <Component {...pageProps} />
+      </PrivyProvider>
+    </>
+  );
+}
+```
+
+</TabItem>
+<TabItem value="cra" label="Create React App">
+
+```tsx title=index.js
+import React from 'react';
+import ReactDOM from 'react-dom/client';
+import './index.css';
+import App from './App';
+import reportWebVitals from './reportWebVitals';
+import {PrivyProvider} from '@privy-io/react-auth';
+
+const root = ReactDOM.createRoot(document.getElementById('root'));
+
+// This method will be passed to the PrivyProvider as a callback
+// that runs after successful login.
+const handleLogin = (user) => {
+  console.log(`User ${user.id} logged in!`)
+}
+
+root.render(
+  <React.StrictMode>
+    <PrivyProvider appId="your-privy-app-id" onSuccess={handleLogin}>
+      <App />
+    </PrivyProvider>
+  </React.StrictMode>
+);
+
+// See https://docs.privy.io/guide/troubleshooting/webpack for how to handle
+// common build issues with web3 projects bootstrapped with Create React App
+```
+
+</TabItem>
+</Tabs>
+
+The [PrivyProvider](https://docs.privy.io/reference/react-auth/modules#privyprovider) takes the following properties:
+
+- your [appId](https://docs.privy.io/reference/react-auth/interfaces/PrivyProviderProps#appid)
+- an optional [onSuccess](https://docs.privy.io/reference/react-auth/interfaces/PrivyProviderProps#onsuccess) callback which will execute once a user successfully logs in
+- an optional [createPrivyWalletOnLogin](https://docs.privy.io/reference/react-auth/interfaces/PrivyProviderProps#createprivywalletonlogin) boolean to configure whether you'd like your users to create [embedded wallets](https://docs.privy.io/guide/frontend/embedded/overview) when logging in
+- an optional [config](https://docs.privy.io/reference/react-auth/modules#privyclientconfig) property to customize your onboarding experience.
+ - The example above will set you up with email and wallet logins.
+ - See [this page](https://docs.privy.io/guide/configuration/) for more on how to construct the right [config](https://docs.privy.io/reference/react-auth/modules#privyclientconfig) for your app!
+
+### Step 4
+
+**You can now use the Privy SDK throughout your app via the [usePrivy](https://docs.privy.io/reference/react-auth/modules#useprivy) hook!** Check out our [starter repo](https://github.com/privy-io/next-starter) to see what a simple end-to-end integration looks like.
+
+Read on to learn how you can use Privy to:
+
+- [log your users in](https://docs.privy.io/guide/frontend/authentication/login)
+- [prompt users to link additional accounts](https://docs.privy.io/guide/frontend/users/linking), as part of progressive onboarding
+- [interface with users' crypto wallets](https://docs.privy.io/guide/frontend/wallets/external)
+- [create Ethereum wallets embedded in your app](https://docs.privy.io/guide/frontend/embedded/overview)
+
+and to do so much more!
+
+## Troubleshooting
+
+If you're using a framework like [create-react-app](https://create-react-app.dev/) to build your project, you may encounter errors related to [Webpack 5](https://webpack.js.org/blog/2020-10-10-webpack-5-release/). To resolve, check out [this guide](https://docs.privy.io/guide/troubleshooting/webpack).
+
+## Learn more
+
+If there’s anything we can do to support your Privy integration, please reach out to us at astar@privy.io or via our [developer slack](https://join.slack.com/t/privy-developers/shared_invite/zt-1y6sjkn3l-cJQ1ryWRA7RkMGuHHXIX8w).
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/thirdweb.md b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/thirdweb.md
new file mode 100644
index 0000000..ca7d149
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/developer-tooling/thirdweb.md
@@ -0,0 +1,120 @@
+---
+sidebar_position: 2
+---
+
+# thirdweb
+
+### Introduction
+
+Thirdweb is a complete web3 development framework that provides everything you need to connect your apps and games to decentralized networks.
+
+:::caution
+API keys will be required for access to thirdweb infrastructure services (effective August 1st, 2023).
+
+
+In an effort to serve our growing developer community by providing more resilient, reliable, and robust infrastructure, we are instituting a new policy requiring users of our SDKs, Storage, CLI, and Smart Accounts to include an API key when using the following thirdweb infrastructure services:
+
+
+- RPC Edge - RPC infrastructure for connecting to any EVM chain
+- Storage - Service for both uploading and downloading of data to decentralized storage
+- Smart Wallet Bundler/Paymaster (beta) - Our Bundler and Paymaster service for use with smart wallets (ERC-4337/6551)
+
+
+For more details, FAQ and instructions on how you can get your API key and upgrade your app to work in advance of the August 1st deadline, please visit the [migration blog](https://blog.thirdweb.com/changelog/api-keys-to-access-thirdweb-infra/).
+:::
+
+### Prerequisites
+
+1. Latest version of [Node.js](https://nodejs.org/) installed.
+2. Astar network wallet set up with basic usage knowledge.
+3. Sufficient funds in the wallet for contract deployment gas fees.
+4. Basic knowledge of Solidity.
+
+### Getting started
+
+#### Creating contract
+
+To create a new smart contract using thirdweb CLI, follow these steps:
+
+1. In your CLI run the following command:
+
+ ```
+ npx thirdweb create contract
+ ```
+
+2. Input your preferences for the command line prompts:
+ 1. Give your project a name
+ 2. Choose your preferred framework: Hardhat or Foundry
+ 3. Name your smart contract
+ 4. Choose the type of base contract: Empty, [ERC20](https://portal.thirdweb.com/solidity/base-contracts/erc20base), [ERC721](https://portal.thirdweb.com/solidity/base-contracts/erc721base), or [ERC1155](https://portal.thirdweb.com/solidity/base-contracts/erc1155base)
+ 5. Add any desired [extensions](https://portal.thirdweb.com/solidity/extensions)
+3. Once created, navigate to your project’s directory and open in your preferred code editor.
+4. If you open the `contracts` folder, you will find your smart contract; this is your smart contract written in Solidity.
+
+ The following is code for an ERC721Base contract without specified extensions. It implements all of the logic inside the [`ERC721Base.sol`](https://github.com/thirdweb-dev/contracts/blob/main/contracts/base/ERC721Base.sol) contract, which implements the [`ERC721A`](https://github.com/thirdweb-dev/contracts/blob/main/contracts/eip/ERC721A.sol) standard.
+
+ ```sol
+ // SPDX-License-Identifier: MIT
+ pragma solidity ^0.8.0;
+
+ import "@thirdweb-dev/contracts/base/ERC721Base.sol";
+
+ contract Contract is ERC721Base {
+ constructor(
+ string memory _name,
+ string memory _symbol,
+ address _royaltyRecipient,
+ uint128 _royaltyBps
+ ) ERC721Base(_name, _symbol, _royaltyRecipient, _royaltyBps) {}
+ }
+ ```
+
+ This contract inherits the functionality of ERC721Base through the following steps:
+
+ - Importing the ERC721Base contract
+ - Inheriting the contract by declaring that our contract is an ERC721Base contract
+ - Implementing any required methods, such as the constructor.
+
+5. After modifying your contract with your desired custom logic, you may deploy it to Astar using [Deploy](https://portal.thirdweb.com/deploy).
+
+---
+
+Alternatively, you can deploy a prebuilt contract for NFTs, tokens, or marketplace directly from the thirdweb Explore page:
+
+1. Go to the thirdweb Explore page: https://thirdweb.com/explore
+
+ ![thirdweb Explore page](./img/thirdweb-explore.png)
+
+2. Choose the type of contract you want to deploy from the available options: NFTs, tokens, marketplace, and more.
+3. Follow the on-screen prompts to configure and deploy your contract.
+
+> For more information on different contracts available on Explore, check out [thirdweb’s documentation.](https://portal.thirdweb.com/pre-built-contracts)
+
+#### Deploying your contract
+
+Deploy allows you to deploy a smart contract to any EVM-compatible network without configuring RPC URLs, exposing your private keys, writing scripts, or performing other setup such as contract verification.
+
+1. To deploy your smart contract using deploy, navigate to the root directory of your project and execute the following command:
+
+ ```bash
+ npx thirdweb deploy
+ ```
+
+ Executing this command will trigger the following actions:
+
+ - Compiling all the contracts in the current directory.
+ - Providing the option to select which contract(s) you wish to deploy.
+ - Uploading your contract source code (ABI) to IPFS.
+
+2. When compilation completes, a dashboard interface will open where you can fill out the constructor parameters:
+ - `_name`: contract name
+ - `_symbol`: symbol or "ticker"
+ - `_royaltyRecipient`: wallet address to receive royalties from secondary sales
+ - `_royaltyBps`: basis points (bps) that will be given to the royalty recipient for each secondary sale, e.g. 500 = 5%
+3. Select Astar as the network
+4. Manage additional settings on your contract’s dashboard as needed such as uploading NFTs, configuring permissions, and more.
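As a quick sanity check on the `_royaltyBps` parameter, basis-point math can be sketched as follows (illustrative Python, not part of the thirdweb tooling; the helper name is hypothetical):

```python
# Basis points: 1 bps = 0.01%, so 500 bps = 5% of the sale price.
# Hypothetical helper, shown only to illustrate the _royaltyBps unit.
def royalty_amount(sale_price_wei: int, royalty_bps: int) -> int:
    # Integer division mirrors how Solidity contracts compute shares.
    return sale_price_wei * royalty_bps // 10_000

# A 5% royalty (500 bps) on a 1 ETH sale (10**18 wei) yields 0.05 ETH:
print(royalty_amount(10**18, 500))
```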
+
+For additional information on Deploy, please reference [thirdweb’s documentation](https://portal.thirdweb.com/deploy).
+
+### Learn more
+
+If you have any further questions or encounter any issues during the process, please [reach out to thirdweb support](https://support.thirdweb.com).
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/evm-debug-api.md b/docs/build/build-on-layer-1/smart-contracts/EVM/evm-debug-api.md
new file mode 100644
index 0000000..18f7799
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/evm-debug-api.md
@@ -0,0 +1,114 @@
+---
+sidebar_position: 10
+---
+
+# Debug EVM Transactions
+
+Geth's debug APIs and OpenEthereum's trace module provide non-standard RPC methods for getting a deeper insight into transaction processing.
+
+> Thanks to the PureStake team, the Polkadot ecosystem has tracing capabilities similar to those of Geth and OpenEthereum. Astar Network implements the same approach for Astar EVM tracing, as it is currently the best solution available for the Polkadot ecosystem.
+
+## Debugging RPC Methods
+
+The following RPC methods are available:
+
+* `debug_traceTransaction` - requires the hash of the transaction to be traced; optional parameters:
+  - `disableStorage(boolean)` - (default: false) setting this to true disables storage capture
+  - `disableMemory(boolean)` - (default: false) setting this to true disables memory capture
+  - `disableStack(boolean)` - (default: false) setting this to true disables stack capture
+* [`trace_filter`](https://openethereum.github.io/JSONRPC-trace-module#trace_filter) - optional parameters:
+  - `fromBlock(uint blockNumber)` - the block to start tracing from: a block number (hex), `earliest` (the genesis block), or `latest` (default, the best block available)
+  - `toBlock(uint blockNumber)` - the block to stop tracing at: a block number (hex), `earliest` (the genesis block), or `latest` (the best block available)
+  - `fromAddress(array addresses)` - filter transactions sent from these addresses only. If an empty array is provided, no filtering is done with this field
+  - `toAddress(array addresses)` - filter transactions sent to these addresses only. If an empty array is provided, no filtering is done with this field
+  - `after(uint offset)` - default offset is 0. Trace offset (or starting) number
+  - `count(uint numberOfTraces)` - number of traces to return in a batch
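The optional `trace_filter` parameters above can be assembled into a JSON-RPC request body; here is a minimal sketch in Python (the helper name and block range are illustrative, not part of the node's API):

```python
import json

# Hypothetical helper that builds a trace_filter request body from the
# optional parameters documented above.
def trace_filter_payload(from_block="latest", to_block="latest",
                         from_addresses=None, to_addresses=None,
                         after=0, count=20):
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "trace_filter",
        "params": [{
            "fromBlock": from_block,
            "toBlock": to_block,
            "fromAddress": from_addresses or [],  # empty array = no filtering
            "toAddress": to_addresses or [],
            "after": after,
            "count": count,
        }],
    }

# Serialize for use as the -d body of a curl request:
body = json.dumps(trace_filter_payload(from_block="0x3F3A0C", to_block="0x3F3A70"))
```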
+
+There are some default values that you should be aware of:
+
+* The maximum number of trace entries a single request of `trace_filter` is allowed to return is `500`. A request exceeding this limit will return an error.
+* Blocks processed by requests are temporarily stored in cache for `300` seconds, after which they are deleted.
+
+To change the default values you can add CLI flags when spinning up your tracing node.
+
+## Run a Debugging Node
+
+:::caution
+
+EVM tracing features are available from the Astar 5.1 release.
+
+:::
+
+To use the supported RPC methods, you need to run a node in debug mode, which is slightly different from running a full node. Additional flags will also need to be used to tell the node which of the non-standard features to support.
+
+Spinning up a debug or tracing node is similar to running a full node, but there are some additional flags you may want to use to enable specific tracing features:
+
+* `--ethapi=debug` - optional flag that enables `debug_traceTransaction`
+* `--ethapi=trace` - optional flag that enables `trace_filter`
+* `--ethapi=txpool` - optional flag that enables `txpool_content`, `txpool_inspect`, `txpool_status`
+* `--wasm-runtime-overrides=` - required flag for tracing that specifies the path where the local Wasm runtimes are stored
+* `--runtime-cache-size 64` - required flag that configures the number of different runtime versions preserved in the in-memory cache to 64
+* `--ethapi-trace-max-count` - sets the maximum number of trace entries to be returned by the node. _The default maximum number of trace entries a single request of `trace_filter` returns is **500**_
+* `--ethapi-trace-cache-duration` - sets the duration (in seconds) after which the cache of `trace_filter` for a given block is discarded. _The default amount of time blocks are stored in the cache is **300** seconds_
+
+:::info
+
+The EVM tracing node installation manual is available on [this page](/docs/build/build-on-layer-1/nodes/evm-tracing-node.md).
+
+:::
+
+
+### Using the Debug/Tracing API
+
+Once you have a running tracing node, you can open your terminal to run curl commands and start to call any of the available JSON RPC methods.
+
+For example, for the `debug_traceTransaction` method, you can make the following JSON RPC request in your terminal:
+
+:::caution
+
+The `--ethapi=debug` flag is required as a tracing node argument to expose this API.
+
+:::
+
+```bash
+curl http://127.0.0.1:9944 -H "Content-Type:application/json;charset=utf-8" -d \
+ '{
+ "jsonrpc":"2.0",
+ "id":1,
+ "method":"debug_traceTransaction",
+ "params": ["0xc74f3219cf6b9763ee5037bab4aa8ebe5eafe85122b00a64c2ce82912c7d3960"]
+ }'
+```
+
+The node responds with the step-by-step replayed transaction information.
+
+For the `trace_filter` call, you can make the following JSON RPC request in your terminal (in this case, the filter covers blocks 4142700 to 4142800, only includes transactions where the recipient is 0xb1dD8BABf551cD058F3B253846EB6FA2a5cabc50, starts with a zero offset, and returns the first 20 traces):
+
+```bash
+curl http://127.0.0.1:9944 -H "Content-Type:application/json;charset=utf-8" -d '{
+ "jsonrpc":"2.0",
+ "id":1,
+ "method":"trace_filter","params":[{"fromBlock":"4142700","toBlock":"4142800","toAddress":["0xb1dD8BABf551cD058F3B253846EB6FA2a5cabc50"],"after":0,"count":20}]
+ }'
+```
+
+The node responds with the trace information corresponding to the filter.
+
+### Using transaction pool API
+
+Let's get the transaction pool status using a `curl` HTTP POST request.
+
+:::caution
+
+The `--ethapi=txpool` flag is required as a tracing node argument to expose this API.
+
+:::
+
+```bash
+curl http://127.0.0.1:9944 -H "Content-Type:application/json;charset=utf-8" -d \
+ '{
+ "jsonrpc":"2.0",
+ "id":1,
+ "method":"txpool_status", "params":[]
+ }'
+```
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/faq.md b/docs/build/build-on-layer-1/smart-contracts/EVM/faq.md
new file mode 100644
index 0000000..7a246fe
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/faq.md
@@ -0,0 +1,45 @@
+---
+sidebar_position: 8
+---
+
+# FAQ
+
+### Is there a step-by-step guide on how to deploy a smart contract in the Astar ecosystem?
+
+Yes, you can follow [this tutorial](first-contract) within our documentation.
+
+### Can I use [Remix](https://remix.ethereum.org) or [Hardhat](https://hardhat.org/) for smart contract deployment?
+
+You sure can.
+
+### What are the names of the native tokens in the Astar ecosystem?
+
+- SBY (Shibuya - testnet tokens)
+- SDN (Shiden Network)
+- ASTR (Astar Network)
+
+### How do I connect to Astar networks, RPCs, Network name, Chain ID?
+
+You can visit [this page](/docs/build/build-on-layer-1/environment/endpoints.md).
+
+### How can I get test tokens (SBY)?
+
+Use [our faucet](/docs/build/build-on-layer-1/environment/faucet.md).
+
+### Is it possible to import Substrate (Polkadot) addresses to Metamask?
+
+No. Polkadot (built on the Substrate framework) uses 256-bit addresses, while MetaMask uses 160-bit addresses.
+
+### Can I interact with EVM contracts by using existing Substrate account (non-ecdsa)?
+
+Yes. Any Substrate account can call the EVM, and its native address will be mapped to an EVM-compatible representation.
+
+### I was able to deploy contracts in other networks, but contracts deployed in this network show "out of gas" error with the error code of 0. How do I fix it?
+
+Contract size limits may differ between networks, so it is recommended to lower the number of optimizer runs for the smart contract to reduce its size to one compatible with the network. You may want to tune this on testnet first.
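For context, Ethereum's EIP-170 caps deployed bytecode at 24,576 bytes; Astar networks may apply different limits, so treat the threshold below as an assumption. A quick way to check a compiled contract's deployed size from its bytecode hex:

```python
# EIP-170 deployed-code size limit on Ethereum mainnet (in bytes);
# assumed here for illustration -- the limit on Astar networks may differ.
EIP170_LIMIT = 24_576

def deployed_size(bytecode_hex: str) -> int:
    # Strip the optional 0x prefix and count raw bytes (2 hex chars each).
    return len(bytes.fromhex(bytecode_hex.removeprefix("0x")))

sample = "0x" + "60" * 100                    # 100 bytes of stand-in bytecode
print(deployed_size(sample))                  # 100
print(deployed_size(sample) <= EIP170_LIMIT)  # True
```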
+
+### During contract deployment I'm getting `HH110: Invalid JSON-RPC response received: 403 Forbidden` error.
+
+That means your requests are being rate-limited by the public Astar endpoints. You need to set up your own node or use another endpoint provider.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/_category_.json b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/_category_.json
new file mode 100644
index 0000000..a0b8f78
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Your First Contract",
+ "position": 1
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/deploy-local.md b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/deploy-local.md
new file mode 100644
index 0000000..3dfa2c2
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/deploy-local.md
@@ -0,0 +1,45 @@
+---
+sidebar_position: 4
+---
+
+# Deploy Smart Contract on Local Network
+
+Finally, it's time to deploy your first smart contract on Astar/Shiden local network! In this tutorial, you will deploy a basic ERC20 token using Remix.
+
+## Preparation of Solidity Smart Contract on Remix
+
+Visit [Remix](https://remix.ethereum.org/) and create a new file. Within the new file, copy and paste the following Solidity code into the Remix editor.
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+import "github.com/OpenZeppelin/openzeppelin-contracts/blob/v4.9.6/contracts/token/ERC20/ERC20.sol";
+
+contract TestToken is ERC20 {
+    constructor() ERC20("TestToken", "TST") {
+        _mint(msg.sender, 1000 * 10 ** 18);
+    }
+}
+```
+
+This contract will issue an ERC20 token called 'TestToken', with the ticker TST and a total supply of 1000 tokens at 18 decimals of precision. You can compile this contract on the Solidity Compiler tab in Remix, in preparation to deploy it.
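A human-readable supply of 1000 TST corresponds to a much larger raw integer on-chain; with 18 decimals the conversion works like this (illustrative Python, mirroring the `_mint` call above):

```python
DECIMALS = 18

def to_raw_units(human_amount: int, decimals: int = DECIMALS) -> int:
    # ERC20 balances are stored as integers scaled by 10**decimals,
    # which is exactly what _mint(msg.sender, 1000 * 10 ** 18) does above.
    return human_amount * 10 ** decimals

print(to_raw_units(1000))  # 10**21 raw units
```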
+
+Click on the Deploy and Run Transactions tab, and change the environment to Injected Web3. Ensure that you can see the Custom (4369) network under the environment field. If you cannot, open and change your network on MetaMask. Now your screen should be similar to the following:
+
+![6](img/6.png)
+
+## Deploy Contract on Local Network
+
+Now press the Deploy button. You will see a popup window from MetaMask, where you should click the Confirm button.
+
+Congratulations 🎉 Your first smart contract has been successfully deployed on the Shiden local network! As evidence, you will see the EVM events in the explorer.
+
+![7](img/7.png)
+
+## View your New Token within MetaMask
+
+This token can be added to MetaMask because the contract is fully ERC20-compatible. You will find the ERC20 contract address on Remix or the explorer; in this case, the contract address is `0x666E76D2d8A0A97D79E1570dd87Cc983464d575e`. Open MetaMask, click the Add Token button, and input your contract address in the Token Contract Address field. The Token Symbol and Token Decimal fields are filled in automatically; finally, click the Next and then Add Tokens buttons.
+
+You should now see your newly minted ERC20 tokens that are deployed on Shiden local network, right within your MetaMask, and be able to transfer them to any other EVM account.
+
+![8](img/8.png)
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/deploy-shibuya.md b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/deploy-shibuya.md
new file mode 100644
index 0000000..f885755
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/deploy-shibuya.md
@@ -0,0 +1,29 @@
+---
+sidebar_position: 5
+---
+
+# Deploy Contract on Shibuya and Shiden
+
+Your journey is almost finished. At last, you will deploy a smart contract on Shibuya and Shiden respectively.
+
+## Obtain SBY token from the Faucet
+
+To deploy a contract on Shibuya, you will need to obtain some SBY tokens from the Faucet, which is explained on [the faucet page](/docs/build/build-on-layer-1/environment/faucet.md).
+
+Once successful, you will see some SBY tokens available within MetaMask. If not, double-check to ensure Shibuya is selected as your current network.
+
+![9](img/9.png)
+
+## Deploy Contract on Shibuya
+
+Now it's time to deploy a smart contract on Shibuya, and we'll be following the exact same process as for a local network deployment. Open Remix, compile your code, and deploy your contract. Do ensure that you see the Custom (81) network under the environment field when you deploy.
+
+After a few seconds, you will see that the contract has been successfully deployed on Shibuya 🎉 and you will also be able to add the newly deployed ERC20 token to MetaMask.
+
+## Deploy Contract on Shiden
+
+The last step will be to deploy a smart contract on Shiden, again using the same process as the local and Shibuya network deployments. Note, though, that there is no faucet for Shiden because the SDN token has real economic value, so the easiest way to obtain SDN tokens is to purchase them from a crypto exchange.
+
+## Next Step
+
+Congratulations! You are now a dApp developer on Astar/Shiden network 😎 To better make use of and expand on your new skills as a smart contract developer, we recommend diving further into our official documentation and builders guides, and joining our Discord to share ideas with other developers, or to receive technical support.
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/1.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/1.png
new file mode 100644
index 0000000..17d4ab6
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/1.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/10.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/10.png
new file mode 100644
index 0000000..3fe6fd2
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/10.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/11.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/11.png
new file mode 100644
index 0000000..636c660
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/11.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/2.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/2.png
new file mode 100644
index 0000000..500bd03
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/2.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/3.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/3.png
new file mode 100644
index 0000000..4902162
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/3.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/4.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/4.png
new file mode 100644
index 0000000..50fb94b
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/4.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/5.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/5.png
new file mode 100644
index 0000000..201c823
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/5.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/6.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/6.png
new file mode 100644
index 0000000..d3ff5b7
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/6.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/7.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/7.png
new file mode 100644
index 0000000..9832918
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/7.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/8.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/8.png
new file mode 100644
index 0000000..7d42bbb
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/8.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/9.png b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/9.png
new file mode 100644
index 0000000..9fbe60b
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/img/9.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/index.md b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/index.md
new file mode 100644
index 0000000..7248c02
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/index.md
@@ -0,0 +1,25 @@
+# Develop and Deploy your First Smart Contract on Astar/Shiden EVM
+
+## Introduction
+
+In this tutorial, you will learn how to develop and deploy a smart contract on Astar/Shiden EVM.
+
+This tutorial is aimed at those who have never touched Astar/Shiden EVM before and want to gain a basic but comprehensive understanding of deploying smart contracts on Astar/Shiden EVM.
+
+This tutorial should take about 30 minutes, after which you will officially be a dApp developer! To complete this tutorial, you will need:
+
+- basic knowledge of Solidity and Polkadot;
+- familiarity with software development tools and CLIs;
+- basic knowledge about MetaMask;
+- access to Discord, Remix, and Polkadot.js Apps portal.
+
+## What you will do in this tutorial
+
+1. Learn about Shiden and Shibuya.
+2. Run an Astar collator as a local standalone node.
+   * Please check the Build Environment chapter.
+3. Configure MetaMask.
+4. Deploy a smart contract on your local network.
+5. Obtain some test tokens from the faucet.
+6. Deploy a smart contract on Shibuya, a test network of Shiden.
+7. Deploy a smart contract on Shiden.
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/metamask.md b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/metamask.md
new file mode 100644
index 0000000..0d35b5f
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/first-contract/metamask.md
@@ -0,0 +1,37 @@
+---
+sidebar_position: 3
+---
+
+# Configure MetaMask
+
+## Add Network to MetaMask
+
+> **_NOTE:_** Before following the instructions below, ensure you run a local node on your machine. Follow the instructions [here](https://docs.astar.network/docs/build/build-on-layer-1/environment/local-network/#run-the-local-network).
+
+It's easy to configure MetaMask to interact with the Astar/Shiden network family. To do so, open MetaMask, click the Network tab, and click Custom RPC. In the screen shown, please enter the information shown below:
+
+| Properties | Network Details |
+| ----------------------------- | ------------------------------ |
+| Network Name | My Network (anything you want) |
+| New RPC URL | http://127.0.0.1:9944 |
+| Chain ID | 4369 |
+| Currency Symbol | ASTL |
+| Block Explorer URL (Optional) | |
+
+## Transfer Native Tokens to MetaMask
+
+Since Astar Network has built a smart contract hub that supports both EVM and Wasm virtual machines, we need to support two different account types, H160 and SS58 respectively.
+
+In order to send an asset from a Substrate-native ss58 account (address A) to an H160 account (address B), we first need to convert the H160 address to its mapped Substrate-native ss58 address. We can then send the asset directly from address A to that mapped address using [Polkadot.js](https://polkadot.js.org/apps/).
+
+You can convert the destination H160 address to its mapped Substrate-native ss58 address by using our [address converter](https://hoonsubin.github.io/evm-substrate-address-converter/).
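Under the hood, Frontier-based chains derive the mapped Substrate account by hashing the H160 address with a fixed prefix. Whether Astar uses exactly this `evm:` scheme is an assumption here, so prefer the address converter above for real conversions; the sketch below only illustrates the idea:

```python
import hashlib

def h160_to_substrate_pubkey(evm_address: str) -> bytes:
    # Assumed scheme: blake2_256(b"evm:" ++ 20-byte address) -> 32-byte
    # Substrate public key. The ss58 address is then a checksummed,
    # network-prefixed encoding of these 32 bytes (not shown here).
    raw = bytes.fromhex(evm_address.removeprefix("0x"))
    assert len(raw) == 20, "an H160 address is 20 bytes"
    return hashlib.blake2b(b"evm:" + raw, digest_size=32).digest()

pubkey = h160_to_substrate_pubkey("0x1111111111111111111111111111111111111111")
print(len(pubkey))  # 32
```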
+
+![Untitled](img/10.png)
+
+Now, you are ready to receive some native tokens within MetaMask! Visit the Account page on the explorer and click the send button beside Alice. In the screen shown, you can input your ss58 address in the `send to address` field, choose an amount to send, and then click the `Make Transfer` button.
+
+![4](img/4.png)
+
+Congratulations! You should now see some native tokens within MetaMask, and are one step closer to being able to deploy your first smart contract on Shiden local network!
+
+![5](img/5.png)
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/index.md b/docs/build/build-on-layer-1/smart-contracts/EVM/index.md
new file mode 100644
index 0000000..c04e56f
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/index.md
@@ -0,0 +1,13 @@
+# EVM Smart Contracts
+
+
+
+All Astar networks support EVM smart contracts except Swanky node.
+
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/_category_.json b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/_category_.json
new file mode 100644
index 0000000..bfba40c
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Contract Environment",
+ "position": 6
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/explorers.md b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/explorers.md
new file mode 100644
index 0000000..35f590e
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/explorers.md
@@ -0,0 +1,41 @@
+---
+sidebar_position: 1
+---
+
+# Block explorers
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+## Overview
+
+Block explorers are like Google for a blockchain. They give developers and users the ability to look up information such as balances, contracts, tokens, and transactions, and they provide API services.
+
+Astar provides two different kinds of explorers: one that combines Substrate and EVM, and another that is dedicated to our EVM ecosystem.
+
+## Explorers
+
+
+
+
+Subscan is the most used explorer in the Polkadot ecosystem. With Subscan you can search the complete Astar Network; it supports both the Substrate and Ethereum APIs. BlockScout is the best explorer for developers building in our EVM environment, and it has all the features of EtherScan.
+
+
+
+
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled.png b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled.png
new file mode 100644
index 0000000..df00db1
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled1.png b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled1.png
new file mode 100644
index 0000000..eccce81
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled1.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled2.png b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled2.png
new file mode 100644
index 0000000..db0faad
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled2.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled3.png b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled3.png
new file mode 100644
index 0000000..b09f386
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled3.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled4.png b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled4.png
new file mode 100644
index 0000000..8df9a76
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled4.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled5.png b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled5.png
new file mode 100644
index 0000000..159c803
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled5.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled6.png b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled6.png
new file mode 100644
index 0000000..2980134
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled6.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled7.png b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled7.png
new file mode 100644
index 0000000..f21cb59
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled7.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled8.png b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled8.png
new file mode 100644
index 0000000..6c1b374
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/Untitled8.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/flatten.jpg b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/flatten.jpg
new file mode 100644
index 0000000..af1c79a
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/img/flatten.jpg differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/index.md b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/index.md
new file mode 100644
index 0000000..579bf5d
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/index.md
@@ -0,0 +1,8 @@
+# Contract Environment
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/verify-sc.md b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/verify-sc.md
new file mode 100644
index 0000000..3ac0251
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/verify-sc.md
@@ -0,0 +1,7 @@
+---
+sidebar_position: 6
+---
+
+# Verifying a Smart Contract
+
+The EVM block explorer for all Astar networks (Astar, Shiden, Shibuya) is **Blockscout** and you can verify a smart contract by following [Blockscout documentation](https://docs.blockscout.com/for-users/verifying-a-smart-contract) for contract verification.
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/infra/verify_smart_contract.md b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/verify_smart_contract.md
new file mode 100644
index 0000000..727ab6b
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/infra/verify_smart_contract.md
@@ -0,0 +1,78 @@
+---
+sidebar_position: 5
+---
+
+import Figure from '/src/components/figure'
+
+# How to verify a Solidity smart contract on Blockscout
+
+## TL;DR
+
+Blockscout is the primary block explorer for Astar EVM. Verifying a smart contract on Blockscout makes the contract source code publicly available and verifiable, which creates transparency and trust in the community. Also, contract verification on Blockscout is mandatory for dApps to be eligible for dApp Staking and earn the basic income from the network.
+
+In this guide, we will walk you through the process of verifying a smart contract on Blockscout, covering general smart contracts and special cases with OpenZeppelin smart contracts.
+
+---
+
+## What is Blockscout
+
+Blockscout is a block explorer that provides a comprehensive, easy-to-use interface for users to search transactions, view accounts, balances, and for devs to verify smart contracts and inspect transactions on EVM (Ethereum Virtual Machine).
+
+Blockscout is the primary block explorer for Astar EVM.
+
+## Why should I verify my smart contract on Blockscout
+
+Verifying a smart contract on Blockscout makes the contract source code publicly available and verifiable, which creates transparency and trust in the community.
+Contract verification on Blockscout is also mandatory for dApps to be eligible for dApp Staking and to earn the basic income from the network.
+
+---
+
+## Examples
+### Example 1: verifying smart contracts without OpenZeppelin source contracts
+
+Due to compiler constraints, contracts **with OpenZeppelin-related source contracts** have different verification methods from contracts without.
+In this section, we will go through the process of verifying a smart contract **without OpenZeppelin-related source contracts.**
+In the previous guide, we went through the process of using Remix IDE to deploy a smart contract on Astar EVM. Let's start from there:
+
+[How to use Remix IDE to deploy an on-chain storage contract on Astar EVM | Astar Docs](/docs/build/build-on-layer-1/builder-guides/astar_features/use_remix.md)
+
+Copy the deployed contract address under the `Deployed Contracts` section.
+
+
+
+Search for the contract on Blockscout and click `Verify and Publish` on the `Code` page.
+
+
+
+Choose `Via standard input JSON`
+
+
+
+Fill in the contract name and Solidity compiler version, and upload the standard input JSON file:
+- You can find the standard input JSON file under contracts/artifacts/build-info. Only use the `input` object in the JSON file.
+- You can also find the Solidity compiler version in the same JSON file under `solcVersion`
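Since the build-info file contains more than the standard JSON input, it can help to extract just the `input` object (and `solcVersion`) before uploading. A minimal Node.js sketch; the file path is illustrative and your build-info filename will differ:

```javascript
// Extract the `input` object (the standard JSON input) and the compiler
// version from a Remix/Hardhat build-info file.
function extractStandardInput(buildInfoJson) {
  const { input, solcVersion } = JSON.parse(buildInfoJson);
  return { input, solcVersion };
}

// Example usage (file name is illustrative):
// const fs = require("fs");
// const { input, solcVersion } = extractStandardInput(
//   fs.readFileSync("contracts/artifacts/build-info/your-build.json", "utf8")
// );
// fs.writeFileSync("standard-input.json", JSON.stringify(input, null, 2));
```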
+
+
+
+- Click “Verify & Publish”, then you are all set!
+
+
+
+
+---
+
+### Example 2: verifying smart contracts with OpenZeppelin-related source contracts
+
+Due to compiler constraints, contracts **with OpenZeppelin-related source contracts** require different verification methods from contracts **without OpenZeppelin-related source contracts.** In this section, we will go through the process of verifying a smart contract **with OpenZeppelin-related source contracts** using **Flatten**.
+
+I have already deployed an ERC20 token contract using an OpenZeppelin library import, and will demonstrate how to verify it on Blockscout using **Flatten** in Remix IDE.
+
+- Use the **Flatten** function in the context menu to flatten the deployed ERC20 contract. Copy the flattened code.
+
+
+- Go to Blockscout and, on the verification page, choose the `Via flattened source code` method
+
+
+- Paste the flattened source code from the **Flatten** function output into the `Enter the Solidity Contract Code` field and click `Verify & Publish`.
+
+
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/interact/_category_.json b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/_category_.json
new file mode 100644
index 0000000..416363c
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Interact",
+ "position": 4
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/interact/img/web3-onboard-on-replit.png b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/img/web3-onboard-on-replit.png
new file mode 100644
index 0000000..920c8e7
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/img/web3-onboard-on-replit.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/interact/index.md b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/index.md
new file mode 100644
index 0000000..baeb872
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/index.md
@@ -0,0 +1,10 @@
+# Interact with EVM Smart Contract
+
+In this chapter you can find out how to interact with EVM smart contracts.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/interact/thirdweb-sdk.md b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/thirdweb-sdk.md
new file mode 100644
index 0000000..2523a2c
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/thirdweb-sdk.md
@@ -0,0 +1,188 @@
+---
+sidebar_position: 1
+---
+
+# thirdweb SDK
+
+## Introduction
+
+thirdweb is a complete web3 development framework that provides everything you need to connect your apps and games to decentralized networks.
+
+## Prerequisites
+
+1. Latest version of [Node.js](https://nodejs.org/) installed.
+2. Astar network wallet set up with basic usage knowledge.
+3. Basic knowledge of React.
+
+## Getting started
+
+### Creating an app
+
+thirdweb provides several SDKs to allow you to interact with your contract including: [React](https://portal.thirdweb.com/react), [React Native](https://portal.thirdweb.com/react-native), [TypeScript](https://portal.thirdweb.com/typescript), [Python](https://portal.thirdweb.com/python), [Go](https://portal.thirdweb.com/go), and [Unity](https://portal.thirdweb.com/unity).
+
+This document will show you how to interact with your contract deployed to Astar using React.
+
+> View the [full React SDK reference](https://portal.thirdweb.com/react) in thirdweb’s documentation.
+
+To create a new application pre-configured with thirdweb’s SDKs, run the following command and choose your preferred configurations:
+
+```bash
+npx thirdweb create app --evm
+```
+
+or install it into your existing project by running:
+
+```bash
+npx thirdweb install
+```
+
+### Initialize SDK on Astar
+
+Wrap your application in the `ThirdwebProvider` component and change the `activeChain` to Astar:
+
+```jsx
+import { ThirdwebProvider } from "@thirdweb-dev/react";
+import { Astar } from "@thirdweb-dev/chains";
+
+const App = () => {
+  return (
+    <ThirdwebProvider activeChain={Astar}>
+      {/* your app components */}
+    </ThirdwebProvider>
+  );
+};
+```
+
+### Get contract
+
+To connect to your contract, use the SDK’s [`getContract`](https://portal.thirdweb.com/typescript/sdk.thirdwebsdk.getcontract) method.
+
+```jsx
+import { useContract } from "@thirdweb-dev/react";
+
+function App() {
+ const { contract, isLoading, error } = useContract("{{contract_address}}");
+}
+```
+
+### Calling contract functions
+
+- For extension-based functions, use the built-in supported hooks. The following example uses the NFT extension's `useOwnedNFTs` hook to access the list of NFTs owned by an address:
+
+ ```jsx
+ import { useOwnedNFTs, useContract, useAddress } from "@thirdweb-dev/react";
+
+ // Your smart contract address
+ const contractAddress = "{{contract_address}}";
+
+ function App() {
+ const address = useAddress();
+ const { contract } = useContract(contractAddress);
+ const { data, isLoading, error } = useOwnedNFTs(contract, address);
+ }
+ ```
+
+ Full reference: https://portal.thirdweb.com/react/react.usenft
+
+- Use the `useContractRead` hook to call any read functions on your contract by passing in the name of the function you want to use.
+
+ ```jsx
+ import { useContractRead, useContract } from "@thirdweb-dev/react";
+
+ // Your smart contract address
+ const contractAddress = "{{contract_address}}";
+
+ function App() {
+ const { contract } = useContract(contractAddress);
+ const { data, isLoading, error } = useContractRead(contract, "getName");
+ }
+ ```
+
+ Full reference: https://portal.thirdweb.com/react/react.usecontractread
+
+- Use the `useContractWrite` hook to call any write functions on your contract by passing in the name of the function you want to use.
+
+ ```jsx
+ import {
+ useContractWrite,
+ useContract,
+ Web3Button,
+ } from "@thirdweb-dev/react";
+
+ // Your smart contract address
+ const contractAddress = "{{contract_address}}";
+
+ function App() {
+ const { contract } = useContract(contractAddress);
+ const { mutateAsync, isLoading, error } = useContractWrite(
+ contract,
+ "setName"
+ );
+
+    return (
+      <Web3Button
+        contractAddress={contractAddress}
+        action={() => mutateAsync({ args: ["My Name"] })}
+      >
+        Send Transaction
+      </Web3Button>
+    );
+ }
+ ```
+
+ Full reference: https://portal.thirdweb.com/react/react.usecontractwrite
+
+### Connect Wallet
+
+Create a custom connect wallet experience by declaring supported wallets passed to your provider.
+
+```jsx
+import {
+ ThirdwebProvider,
+ metamaskWallet,
+ coinbaseWallet,
+ walletConnectV1,
+ walletConnect,
+ safeWallet,
+ paperWallet,
+} from "@thirdweb-dev/react";
+
+function MyApp() {
+  return (
+    <ThirdwebProvider
+      supportedWallets={[
+        metamaskWallet(),
+        coinbaseWallet(),
+        walletConnect(),
+        safeWallet(),
+      ]}
+    >
+      {/* your app components */}
+    </ThirdwebProvider>
+  );
+}
+```
+
+Add a connect wallet button to prompt end-users to log in with any of the above supported wallets.
+
+```jsx
+import { ConnectWallet } from "@thirdweb-dev/react";
+
+function App() {
+  return <ConnectWallet />;
+}
+```
+
+Full reference: https://portal.thirdweb.com/react/connecting-wallets
+
+## Learn more
+
+If you have any further questions or encounter any issues during the process, please [reach out to thirdweb support](https://support.thirdweb.com).
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/interact/web3-onboard.md b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/web3-onboard.md
new file mode 100644
index 0000000..5d47834
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/interact/web3-onboard.md
@@ -0,0 +1,134 @@
+---
+sidebar_position: 2
+---
+
+import Tabs from '@theme/Tabs'
+import TabItem from '@theme/TabItem'
+
+# Web3 EVM Wallet Library
+
+With a few lines of code, you can bring your app into the web3 world and connect to the Astar EVM networks.
+
+Web3-Onboard is the quickest and easiest way to add multi-wallet and multi-chain support to your project. With built-in modules for more than 35 unique hardware and software wallets, Web3-Onboard saves you time and headaches.
+
+## Install
+
+<Tabs>
+<TabItem value="yarn" label="Yarn">
+
+```bash
+yarn add @web3-onboard/core @web3-onboard/injected-wallets @web3-onboard/react ethers
+```
+
+</TabItem>
+<TabItem value="npm" label="npm">
+
+```bash
+npm i @web3-onboard/core @web3-onboard/injected-wallets @web3-onboard/react ethers
+```
+
+</TabItem>
+</Tabs>
+
+## Configure
+
+Use this prepared config file. It can be easily adapted for Vue.js and other frameworks.
+
+```js
+import React from "react";
+import { init, useConnectWallet } from "@web3-onboard/react";
+import injectedModule from "@web3-onboard/injected-wallets";
+import { ethers } from "ethers";
+
+// Look at Web3-Onboard documentation here: https://onboard.blocknative.com/docs/overview/introduction
+const wallets = [injectedModule()];
+
+const chains = [
+ {
+ id: "0x150",
+ token: "SDN",
+ label: "Shiden",
+ icon: '',
+ color: "#a67cff",
+ rpcUrl: "https://evm.shiden.astar.network",
+ publicRpcUrl: "https://evm.shiden.astar.network",
+ blockExplorerUrl: "https://shiden.subscan.io",
+ },
+ {
+ id: "0x250",
+ token: "ASTR",
+ label: "Astar",
+ icon: '',
+ color: "#0085ff",
+ rpcUrl: "https://evm.astar.network",
+ publicRpcUrl: "https://evm.astar.network",
+ blockExplorerUrl: "https://astar.subscan.io",
+ },
+ {
+ id: "0x51",
+ token: "SBY",
+ label: "Shibuya",
+ icon: '',
+ color: "#2c3335",
+ rpcUrl: "https://evm.shibuya.astar.network",
+ publicRpcUrl: "https://evm.shibuya.astar.network",
+ blockExplorerUrl: "https://shibuya.subscan.io",
+ },
+ {
+ id: "0x1111",
+ token: "LOC",
+ label: "Localhost",
+ icon: '',
+ color: "#2c3335",
+ rpcUrl: "http://localhost:8545",
+ publicRpcUrl: "http://localhost:8545",
+ blockExplorerUrl:
+ "https://polkadot.js.org/apps/?rpc=ws%3A%2F%2Flocalhost%3A9944#/explorer",
+ },
+];
+
+const appMetadata = {
+ name: "This is a demo React App",
+ icon: '',
+ logo: '',
+ description: "My app using Onboard on Astar Network",
+ recommendedInjectedWallets: [
+ { name: "Talisman", url: "https://www.talisman.xyz/" },
+ { name: "MetaMask", url: "https://metamask.io" },
+ ],
+};
+
+// initialize Onboard
+init({
+ wallets,
+ chains,
+ appMetadata,
+});
+
+// inside a React component:
+const [{ wallet, connecting }, connect, disconnect] = useConnectWallet();
+
+// create an ethers provider
+let ethersProvider;
+
+if (wallet) {
+ ethersProvider = new ethers.providers.Web3Provider(wallet.provider, "any");
+}
+```
+
+## Use
+
+Now you only need a button to connect and activate the wallet.
+
+```jsx
+<button
+  disabled={connecting}
+  onClick={() => (wallet ? disconnect(wallet) : connect())}
+>
+  {connecting ? "Connecting..." : wallet ? "Disconnect" : "Connect"}
+</button>
+```
+
+And of course if you want to know how to tweak this library more, just look at the [Web3-Onboard documentation](https://onboard.blocknative.com/docs/overview/introduction).
+
+You can also play with a live demo on [Repl.it](https://replit.com/@gluneau/Astar-web3-onboard-EVM-Demo#src/App.jsx)
+![web3-onboard-on-replit](img/web3-onboard-on-replit.png)
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/.batch.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/.batch.md
new file mode 100644
index 0000000..216e349
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/.batch.md
@@ -0,0 +1,33 @@
+# Batch Precompile
+
+The batch precompiled contract enables developers to consolidate several EVM calls into a single action. Previously, if users needed to engage with multiple contracts, they had to confirm multiple transactions in their wallet. For instance, this could involve approving a token allowance for a smart contract and then performing a transfer, or claiming dApp staking rewards for every era. Through the batch precompile, developers can enhance the user experience by condensing these transactions into a single batch, reducing the number of confirmations needed to just one. This approach can also lower gas fees, since batching avoids incurring multiple base gas fees.
+
+The precompile directly interacts with Substrate's EVM pallet. The caller of the batch function will have their address serve as the `msg.sender` for all subtransactions. However, unlike delegate calls, the target contract will still impact its own storage. Essentially, this process mirrors the effect of a user signing multiple transactions, but with the requirement for only a single confirmation.
+
+| Precompile | Address |
+| -------- | -------- |
+| `Batch` | 0x0000000000000000000000000000000000005006 |
+
+## The Batch Solidity Interface
+
+Batch.sol is a Solidity interface that allows developers to interact with the precompile's three methods.
+
+The interface includes the following functions:
+
+* **batchSome(address[] to, uint256[] value, bytes[] callData, uint64[] gasLimit)** — performs multiple calls, where the same index of each array combine into the information required for a single subcall. If a subcall reverts, following subcalls will still be attempted
+
+* **batchSomeUntilFailure(address[] to, uint256[] value, bytes[] callData, uint64[] gasLimit)** — performs multiple calls, where the same index of each array combine into the information required for a single subcall. If a subcall reverts, no following subcalls will be executed
+
+* **batchAll(address[] to, uint256[] value, bytes[] callData, uint64[] gasLimit)** — performs multiple calls atomically, where the same index of each array combine into the information required for a single subcall. If a subcall reverts, all subcalls will revert
+
+### Each of these functions has the following parameters:
+
+* **address[] to** - an array of addresses to direct subtransactions to, where each entry is a subtransaction
+
+* **uint256[] value** - an array of native currency values to send in the subtransactions, where the index corresponds to the subtransaction of the same index in the to array. If this array is shorter than the to array, all the following subtransactions will default to a value of 0
+
+* **bytes[] callData** - an array of call data to include in the subtransactions, where the index corresponds to the subtransaction of the same index in the to array. If this array is shorter than the to array, all of the following subtransactions will include no call data
+
+* **uint64[] gasLimit** - an array of gas limits in the subtransactions, where the index corresponds to the subtransaction of the same index in the to array. Values of 0 are interpreted as unlimited and will have all remaining gas of the batch transaction forwarded. If this array is shorter than the to array, all of the following subtransactions will have all remaining gas forwarded
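The padding rules above can be modeled off-chain. This is only a sketch of the pairing semantics as documented, not the on-chain implementation:

```javascript
// Model of how the batch precompile pairs its argument arrays:
// index i of each array forms subcall i; arrays shorter than `to`
// fall back to the documented defaults.
function toSubcalls(to, value = [], callData = [], gasLimit = []) {
  return to.map((target, i) => ({
    to: target,
    value: value[i] ?? 0n,         // missing value -> 0
    callData: callData[i] ?? "0x", // missing callData -> empty
    gasLimit: gasLimit[i] ?? 0n,   // missing/0 gasLimit -> forward all remaining gas
  }));
}
```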
+
+For more information, check out [Batch.sol](https://github.com/AstarNetwork/Astar/blob/master/precompiles/batch/Batch.sol) in the [Astar repository](https://github.com/AstarNetwork/Astar)
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/_category_.json b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/_category_.json
new file mode 100644
index 0000000..d529fb0
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Precompiles",
+ "position": 5
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/index.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/index.md
new file mode 100644
index 0000000..06398fa
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/index.md
@@ -0,0 +1,77 @@
+---
+sidebar_position: 1
+---
+
+# Precompiles
+
+A precompile is a piece of common smart contract functionality that has been compiled in advance, so Ethereum nodes can run it more efficiently. From a contract's perspective, calling a precompile is a single command, like an opcode.
+The Frontier EVM used on Astar Network provides several useful precompiled contracts, implemented in our ecosystem natively. The precompiled contracts `0x01` through `0x08` are the same as those in Ethereum (see the list below). Additionally, Astar implements precompiled contracts starting from `0x5001` that support new Astar features.
+
+## Ethereum Native Precompiles
+
+| Precompile | Address |
+| -------- | -------- |
+| ECRecover | 0x0000000000000000000000000000000000000001 |
+| Sha256 | 0x0000000000000000000000000000000000000002 |
+| Ripemd160 | 0x0000000000000000000000000000000000000003 |
+| Identity | 0x0000000000000000000000000000000000000004 |
+| Modexp | 0x0000000000000000000000000000000000000005 |
+| Bn128Add | 0x0000000000000000000000000000000000000006 |
+| Bn128Mul | 0x0000000000000000000000000000000000000007 |
+| Bn128Pairing | 0x0000000000000000000000000000000000000008 |
+
+## Astar Specific Precompiles
+
+| Precompile | Address |
+| -------- | -------- |
+| [DappsStaking](staking.md) | 0x0000000000000000000000000000000000005001 |
+| [Sr25519](sr25519.md) | 0x0000000000000000000000000000000000005002 |
+| [SubstrateEcdsa](substrate-ecdsa.md) | 0x0000000000000000000000000000000000005003 |
+| [XCM](xcm/xcm.md) | 0x0000000000000000000000000000000000005004 |
+| [XVM](xvm.md) | 0x0000000000000000000000000000000000005005 |
+| [assets-erc20](xc20.md) | ASSET_PRECOMPILE_ADDRESS_PREFIX |
+
+The interface descriptions for these precompiles can be found in the `precompiles` folder of the [Astar repo](https://github.com/AstarNetwork/Astar/).
+The addresses can be checked in the [Astar repo](https://github.com/AstarNetwork/Astar/tree/master/runtime), in the `precompile.rs` file for each runtime.
+
+## Usage Example
+
+Here we'll demonstrate how to interact with the dApp staking precompile using Remix IDE. Other precompiles can be accessed in a similar manner.
+
+```solidity
+import "./DappsStaking.sol";
+contract A {
+ DappsStaking public constant DAPPS_STAKING = DappsStaking(0x0000000000000000000000000000000000005001);
+
+ /// @notice Check current era
+ function checkCurrentEra() public view {
+ uint256 currentEra = DAPPS_STAKING.read_current_era();
+ }
+}
+```
+
+Example use: check `current era` and `total staked amount` in the `pallet-dapps-staking` for Shiden Network. For this example we will use Remix.
+
+1. Copy `DappsStaking.sol` from the [Astar repo](https://github.com/AstarNetwork/Astar/) and create a new contract in Remix:
+
+![](https://i.imgur.com/mr0TcLq.png)
+
+2. Compile the dAppStaking contract:
+
+![](https://i.imgur.com/6Wgg9rf.jpg)
+
+3. The precompile does not need to be deployed since it is already on the network, but you will need to tell Remix where to find it.
+After you connect your EVM wallet to Shiden Network (same applies for Astar Network and for Shibuya Testnet) follow these steps:
+ 1. Open the Deploy tab.
+ 2. Use the injected Web3 environment. It should point to Shiden mainnet with `ChainId 336`.
+ 3. Make sure you have selected the dAppStaking contract.
+ 4. Provide the address of the precompiled contract `0x0000000000000000000000000000000000005001`.
+ 5. The dApp Staking contract will appear under Deployed contracts.
+
+![](https://i.imgur.com/6RnQlkb.jpg)
+
+4. Interact with the contract.
+ 1. Check the current era.
+ 2. Use the current era as input to check total staked amount on the network.
+
+![precompile-interact](https://user-images.githubusercontent.com/34627453/159696985-19f67e95-807e-4c20-b74c-c9f4944ada32.jpg)
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/sr25519.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/sr25519.md
new file mode 100644
index 0000000..34ba7b9
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/sr25519.md
@@ -0,0 +1,20 @@
+
+# SR25519
+
+The SR25519 precompile provides an interface for verifying a message signed with the Schnorr sr25519 algorithm.
+
+> Web3 Foundation has implemented a Schnorr signature library using the more secure Ristretto compression over the Curve25519 in the Schnorrkel repository. Schnorrkel implements related protocols on top of this curve compression such as HDKD, MuSig, and a verifiable random function (VRF). It also includes various minor improvements such as the hashing scheme STROBE that can theoretically process huge amounts of data with only one call across the Wasm boundary.
+
+> The implementation of Schnorr signatures used in Polkadot that uses Schnorrkel protocols over a Ristretto compression of Curve25519, is known as sr25519.
+
+For [more context](https://wiki.polkadot.network/docs/learn-keys#what-is-sr25519-and-where-did-it-come-from) see the Polkadot Wiki.
+
+```js
+ function verify(
+ bytes32 public_key,
+ bytes calldata signature,
+ bytes calldata message
+ ) external view returns (bool);
+```
+
+The `verify` function can be used to check that `public_key` was used to generate `signature` for `message`.
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/staking.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/staking.md
new file mode 100644
index 0000000..51443c6
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/staking.md
@@ -0,0 +1,5 @@
+# DApp Staking
+
+The dApp Staking Precompile allows EVM smart contracts to access `pallet-dapps-staking` functionality.
+
+For more information see `precompiles/dapps-staking/DappsStaking.sol` in the [`Astar` repository](https://github.com/AstarNetwork/Astar/).
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/substrate-ecdsa.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/substrate-ecdsa.md
new file mode 100644
index 0000000..a186195
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/substrate-ecdsa.md
@@ -0,0 +1,17 @@
+# Substrate ECDSA
+
+The Substrate ECDSA precompile provides an interface for verifying a message signed with the ECDSA algorithm.
+
+> Most cryptocurrencies, including Bitcoin and Ethereum, currently use ECDSA signatures on the secp256k1 curve. This curve is considered much more secure than NIST curves, which have possible backdoors from the NSA. The Curve25519 is considered possibly even more secure than this one and allows for easier implementation of Schnorr signatures. A recent patent expiration on it has made it the preferred choice for use in Polkadot.
+
+For [more context](https://wiki.polkadot.network/docs/learn-keys#why-was-ed25519-selected-over-secp256k1) see the Polkadot Wiki.
+
+```js
+ function verify(
+ bytes32 public_key,
+ bytes calldata signature,
+ bytes calldata message
+ ) external view returns (bool);
+```
+
+The `verify` function can be used to check that `public_key` was used to generate `signature` for `message`.
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xc20.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xc20.md
new file mode 100644
index 0000000..3cf8515
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xc20.md
@@ -0,0 +1,10 @@
+# XC20
+
+The XC20 standard, created by the Moonbeam team, ensures compatibility between the EVM and the Substrate framework that powers Polkadot via precompiles: special built-in smart contracts made to look like ERC20s. Calling functions on an XC20 invokes underlying Substrate functionality, which could be instructions to move tokens to another chain, or send them to another local address. This compatibility layer connects the world of EVM and smart contracts to advanced Substrate-based interoperability scenarios.
+
+For an XC20 overview, see the [Create XC20 assets page](/docs/learn/interoperability/xcm/building-with-xcm/create-xc20-assets).
+
+## See also
+
+- https://github.com/ethereum/EIPs/issues/20
+- https://github.com/OpenZeppelin/openzeppelin-contracts
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/_category.json b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/_category.json
new file mode 100644
index 0000000..f8cb21a
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/_category.json
@@ -0,0 +1,4 @@
+{
+ "label": "XCM",
+ "position": 6
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/native-transfer.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/native-transfer.md
new file mode 100644
index 0000000..8282255
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/native-transfer.md
@@ -0,0 +1,74 @@
+# Transfer native token
+
+Let's use `transfer_multiasset` to:
+
+- transfer the native token of `parachainId` **2000** to a wrapped version on **2007**
+- for amount **10000000000000000000**
+
+#### 1. Define call as payable
+
+As the call is made on behalf of the contract, the native amount must be held by the contract. Make the function payable to ensure the native token is transferred to the contract.
+
+```solidity
+ function transfer_native() external payable {
+```
+
+#### 2. Asset Multilocation
+
+The native token Multilocation is defined by `{ parents: 0, interior: Here }`.
+The interior field is an empty bytes array (the equivalent of `Here`).
+
+```solidity
+bytes[] memory interior1 = new bytes[](0);
+XCM.Multilocation memory asset = XCM.Multilocation({
+ parents: 0,
+ interior: interior1
+});
+```
+
+#### 3. Beneficiary Multilocation
+
+Let's suppose the `beneficiary` is the EVM address `0xd43593c715fdd31c61141abd04a99fd6822c8558` of the contract in parachain **2007**. The Multilocation is `{ parents: 1, interior: X2 [Parachain: 2007, AccountId20: { id: *caller address* , network: any }] }`.
+The interior field is of type H160 (a 20-byte EVM address), so it is prefixed with 0x03 and suffixed with 0x00 (network: any). The interior bytes are 0x03 + EVM address + 0x00
+
+```solidity
+bytes[] memory interior = new bytes[](2);
+interior[0] = bytes.concat(hex"00", bytes4(uint32(2007)));
+interior[1] = bytes.concat(hex"03", msg.sender, hex"00");
+XCM.Multilocation memory destination = XCM.Multilocation({
+ parents: 1,
+ interior: interior
+});
+```
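The byte layout of those two junctions can be checked off-chain; here is a small JavaScript sketch of the encoding described above:

```javascript
// Interior junction for Parachain(paraId): 0x00 prefix + paraId as bytes4.
function parachainJunction(paraId) {
  return "0x00" + paraId.toString(16).padStart(8, "0");
}

// Interior junction for an H160 account (network: any):
// 0x03 prefix + 20-byte EVM address + 0x00 suffix.
function accountKey20Junction(evmAddress) {
  return "0x03" + evmAddress.toLowerCase().replace(/^0x/, "") + "00";
}
```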
+
+#### 4. Weight
+
+This is the weight we want to buy on the destination chain. To set the weight limit to Unlimited, use the value 0 for `ref_time`.
+
+```solidity
+XCM.WeightV2 memory weight = XCM.WeightV2({
+ ref_time: 30_000_000_000,
+ proof_size: 300_000
+});
+```
+
+#### 5. Calling the XCM precompile
+
+Import the XCM precompile interface in your contract and call it like this:
+
+```solidity
+address public constant XCM_ADDRESS =
+0x0000000000000000000000000000000000005004;
+
+require(
+ XCM(XCM_ADDRESS).transfer_multiasset(
+ asset,
+ amount,
+ destination,
+ weight
+ ),
+ "Failed to send xcm"
+);
+```
+
+Please check the full example in the [XCM EVM SDK](https://github.com/AstarNetwork/EVM-XCM-Examples/tree/main/contracts/transfer-native)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/transfer-asssets.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/transfer-asssets.md
new file mode 100644
index 0000000..0c97856
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/transfer-asssets.md
@@ -0,0 +1,61 @@
+# Transfer Asset
+
+Let's use `transfer_multiasset` to:
+
+- transfer the asset with id = 1 from `parachainId` **2000** to `parachainId` **2007**
+- for amount **10000000000000000000000**
+
+#### 1. Asset Address
+
+The asset with assetId = 1 is addressed as '0xFFFFFFFF...' + DecimalToHex(AssetId), resulting in: `0xFfFFFFff00000000000000000000000000000001`
+
+```solidity
+address assetAddress = 0xFfFFFFff00000000000000000000000000000001;
+```
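The address derivation above can be sketched in JavaScript (lowercase output; the on-chain address differs only in checksum casing):

```javascript
// Derive the XC20 precompile address for a given assetId:
// "0xffffffff" prefix followed by the assetId as 32 hex digits (u128).
function xc20Address(assetId) {
  return "0xffffffff" + BigInt(assetId).toString(16).padStart(32, "0");
}
```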
+
+#### 2. Beneficiary Multilocation
+
+Let's suppose the `beneficiary` is the EVM address `0xd43593c715fdd31c61141abd04a99fd6822c8558` of the contract in parachain **2007**. The Multilocation is `{ parents: 1, interior: X2 [Parachain: 2007, AccountId20: { id: *caller address* , network: any }] }`.
+The interior field is of type H160 (a 20-byte EVM address), so it is prefixed with 0x03 and suffixed with 0x00 (network: any). The interior bytes are 0x03 + EVM address + 0x00
+
+```solidity
+bytes[] memory interior = new bytes[](2);
+interior[0] = bytes.concat(hex"00", bytes4(uint32(2007)));
+interior[1] = bytes.concat(hex"03", msg.sender, hex"00");
+XCM.Multilocation memory destination = XCM.Multilocation({
+ parents: 1,
+ interior: interior
+});
+```
+
+#### 3. Weight
+
+This is the weight we want to buy on the destination chain. To set the weight limit to Unlimited, use the value 0 for `ref_time`.
+
+```solidity
+XCM.WeightV2 memory weight = XCM.WeightV2({
+ ref_time: 30_000_000_000,
+ proof_size: 300_000
+});
+```
+
+#### 4. Calling the XCM precompile
+
+Import the XCM precompile interface in your contract and call it like this:
+
+```solidity
+address public constant XCM_ADDRESS =
+0x0000000000000000000000000000000000005004;
+
+require(
+ XCM(XCM_ADDRESS).transfer(
+ assetAddress,
+ amount,
+ destination,
+ weight
+ ),
+ "Failed to send xcm"
+);
+```
+
+Please check the full example in the [XCM EVM SDK](https://github.com/AstarNetwork/EVM-XCM-Examples/tree/main/contracts/transfer-assets)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/withdraw-assets.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/withdraw-assets.md
new file mode 100644
index 0000000..4ed4d0d
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/withdraw-assets.md
@@ -0,0 +1,69 @@
+# Withdraw Asset
+
+Let's use `transfer_multiasset` to:
+
+- transfer back the asset with id = 1 from `parachainId` **2007** to `parachainId` **2000**
+- for amount **10000000000000000000000**
+
+#### 1. Asset Multilocation
+
+The asset Multilocation in parachain **2000** is defined by `{ parents: 1, interior: X2 [Parachain: 2000, GeneralIndex: 1] }`.
+The `parachainId` 2000 is prefixed with 0x00, so the first interior junction is 0x00 + bytes4(2000).
+The `GeneralIndex` is prefixed with 0x05 followed by the index as a u128: 0x05 + u128(1)
+
+```solidity
+bytes[] memory interior1 = new bytes[](2);
+interior1[0] = bytes.concat(hex"00", bytes4(uint32(2000)));
+interior1[1] = bytes.concat(hex"05", abi.encodePacked(uint128(1)));
+XCM.Multilocation memory asset = XCM.Multilocation({
+ parents: 1,
+ interior: interior1
+});
+```
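The GeneralIndex junction bytes can likewise be sketched off-chain, matching the 0x05 + u128 layout described above:

```javascript
// Interior junction for GeneralIndex(i): 0x05 prefix + index as u128 (16 bytes).
function generalIndexJunction(index) {
  return "0x05" + BigInt(index).toString(16).padStart(32, "0");
}
```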
+
+#### 2. Beneficiary Multilocation
+
+Let's suppose the `beneficiary` is the EVM address `0xd43593c715fdd31c61141abd04a99fd6822c8558` of the contract in parachain **2000**. The Multilocation is `{ parents: 1, interior: X2 [Parachain: 2000, AccountId20: { id: *caller address* , network: any }] }`.
+The interior field is of type H160 (a 20-byte EVM address), so it is prefixed with 0x03 and suffixed with 0x00 (network: any). The interior bytes are 0x03 + EVM address + 0x00
+
+```solidity
+bytes[] memory interior = new bytes[](2);
+interior[0] = bytes.concat(hex"00", bytes4(uint32(2000)));
+interior[1] = bytes.concat(hex"03", msg.sender, hex"00");
+XCM.Multilocation memory destination = XCM.Multilocation({
+ parents: 1,
+ interior: interior
+});
+```
+
+#### 3. Weight
+
+This is the weight we want to buy on the destination chain. To set the weight limit to Unlimited, use the value 0 for `ref_time`.
+
+```solidity
+XCM.WeightV2 memory weight = XCM.WeightV2({
+ ref_time: 30_000_000_000,
+ proof_size: 300_000
+});
+```
+
+#### 4. Calling the XCM precompile
+
+Import the XCM precompile interface in your contract and call it like this:
+
+```solidity
+address public constant XCM_ADDRESS =
+0x0000000000000000000000000000000000005004;
+
+require(
+ XCM(XCM_ADDRESS).transfer_multiasset(
+ asset,
+ amount,
+ destination,
+ weight
+ ),
+ "Failed to send xcm"
+);
+```
+
+Please check the full example in the [XCM EVM SDK](https://github.com/AstarNetwork/EVM-XCM-Examples/tree/main/contracts/withdraw-assets)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/xcm.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/xcm.md
new file mode 100644
index 0000000..c63ccf1
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xcm/xcm.md
@@ -0,0 +1,248 @@
+---
+sidebar_position: 6
+---
+
+# XCM - xTokens
+
+### XCM precompiles - Interface
+
+The interface can be found [here](https://github.com/AstarNetwork/Astar/blob/master/precompiles/xcm/XCM_v2.sol#L1) and contains the following functions:
+
+:::info
+Only available in Shibuya for now. For Shiden and Astar please check this [interface](https://github.com/AstarNetwork/Astar/blob/master/precompiles/xcm/XCM.sol)
+:::
+
+#### transfer(currencyAddress, amount, destination, weight)
+
+Transfer a token through XCM based on its address
+
+```solidity
+function transfer(
+ address currencyAddress,
+ uint256 amount,
+ Multilocation memory destination,
+ WeightV2 memory weight
+) external returns (bool);
+```
+
+- **currencyAddress** - The ERC20 address of the currency we want to transfer
+- **amount** - The amount of tokens we want to transfer
+- **destination** - The Multilocation to which we want to send the tokens
+- **weight** - The weight we want to buy on the destination chain. To set the weight limit to `Unlimited`, use the value 0 for `ref_time`
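+
+As a minimal sketch, a contract method wrapping `transfer` might look like this (assuming the XCM precompile interface is imported, and that `xc20Address` is the ERC20 address of an XC20 asset):
+
+```solidity
+address public constant XCM_ADDRESS =
+0x0000000000000000000000000000000000005004;
+
+// Hypothetical helper: transfers an XC20 asset to a destination chain.
+function transferToken(
+    address xc20Address,
+    uint256 amount,
+    XCM.Multilocation memory destination
+) external {
+    XCM.WeightV2 memory weight = XCM.WeightV2({
+        ref_time: 30_000_000_000,
+        proof_size: 300_000
+    });
+    require(
+        XCM(XCM_ADDRESS).transfer(xc20Address, amount, destination, weight),
+        "Failed to send xcm"
+    );
+}
+```
+
+See the "Create Multilocation" section below for how to build the `destination` value.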
+
+#### transfer_with_fee(currencyAddress, amount, fee, destination, weight)
+
+Transfer a token through XCM based on its address, specifying the fee
+
+```solidity
+function transfer_with_fee(
+ address currencyAddress,
+ uint256 amount,
+ uint256 fee,
+ Multilocation memory destination,
+ WeightV2 memory weight
+) external returns (bool);
+```
+
+- **currencyAddress** - The ERC20 address of the currency we want to transfer
+- **amount** - The amount of tokens we want to transfer
+- **fee** - The amount to be spent to pay for execution in destination chain
+- **destination** - The Multilocation to which we want to send the tokens
+- **weight** - The weight we want to buy on the destination chain. To set the weight limit to `Unlimited`, use the value 0 for `ref_time`
+
+#### transfer_multiasset(asset, amount, destination, weight)
+
+Transfer a token through XCM based on its MultiLocation
+
+```solidity
+function transfer_multiasset(
+ Multilocation memory asset,
+ uint256 amount,
+ Multilocation memory destination,
+ WeightV2 memory weight
+) external returns (bool);
+```
+
+- **asset** - The asset we want to transfer, defined by its multilocation. Currently only concrete fungible assets are supported
+- **amount** - The amount of tokens we want to transfer
+- **destination** - The Multilocation to which we want to send the tokens
+- **weight** - The weight we want to buy on the destination chain. To set the weight limit to `Unlimited`, use the value 0 for `ref_time`
+
+#### transfer_multiasset_with_fee(asset, amount, fee, destination, weight)
+
+Transfer a token through XCM based on its MultiLocation, specifying the fee
+
+```solidity
+function transfer_multiasset_with_fee(
+ Multilocation memory asset,
+ uint256 amount,
+ uint256 fee,
+ Multilocation memory destination,
+ WeightV2 memory weight
+) external returns (bool);
+```
+
+- **asset** - The asset we want to transfer, defined by its multilocation. Currently only concrete fungible assets are supported
+- **amount** - The amount of tokens we want to transfer
+- **fee** - The amount to be spent to pay for execution in destination chain
+- **destination** - The Multilocation to which we want to send the tokens
+- **weight** - The weight we want to buy on the destination chain. To set the weight limit to `Unlimited`, use the value 0 for `ref_time`
+
+#### transfer_multi_currencies(currencies, feeItem, destination, weight)
+
+Transfer several tokens at once through XCM based on their addresses, specifying the fee item
+
+```solidity
+function transfer_multi_currencies(
+ Currency[] memory currencies,
+ uint32 feeItem,
+ Multilocation memory destination,
+ WeightV2 memory weight
+) external returns (bool);
+```
+
+- **currencies** - The currencies we want to transfer, defined by their address and amount.
+- **feeItem** - Which of the currencies to be used as fee
+- **destination** - The Multilocation to which we want to send the tokens
+- **weight** - The weight we want to buy on the destination chain. To set the weight limit to `Unlimited`, use the value 0 for `ref_time`
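+
+As a sketch (assuming the `Currency` struct from the XCM_v2 interface, which pairs an ERC20 address with an amount; `assetAddress`, `feeAssetAddress`, `destination`, and `weight` are placeholders built as shown elsewhere on this page):
+
+```solidity
+// Hypothetical example: transfer two currencies, paying fees with the second one.
+XCM.Currency[] memory currencies = new XCM.Currency[](2);
+currencies[0] = XCM.Currency(assetAddress, 100 ether);
+currencies[1] = XCM.Currency(feeAssetAddress, 1 ether);
+require(
+    XCM(XCM_ADDRESS).transfer_multi_currencies(
+        currencies,
+        1, // feeItem: index of the currency used to pay fees
+        destination,
+        weight
+    ),
+    "Failed to send xcm"
+);
+```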
+
+#### transfer_multi_assets(assets, feeItem, destination, weight)
+
+Transfer several tokens at once through XCM based on their locations, specifying the fee item
+
+:::caution
+Only a maximum of 2 assets can be transferred
+:::
+
+```solidity
+function transfer_multi_assets(
+ MultiAsset[] memory assets,
+ uint32 feeItem,
+ Multilocation memory destination,
+ WeightV2 memory weight
+) external returns (bool);
+```
+
+- **assets** - The assets we want to transfer, defined by their location and amount.
+- **feeItem** - Which of the currencies to be used as fee
+- **destination** - The Multilocation to which we want to send the tokens
+- **weight** - The weight we want to buy on the destination chain. To set the weight limit to `Unlimited`, use the value 0 for `ref_time`
+
+#### send_xcm(destination, xcmCall)
+
+Send an XCM message using a `pallet-xcm` call
+
+```solidity
+function send_xcm(
+ Multilocation memory destination,
+ bytes memory xcm_call
+) external returns (bool);
+```
+
+- **destination** - Multilocation of destination chain
+- **xcmCall** - Encoded xcm call to send to destination
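+
+As a minimal sketch (the `xcmCall` bytes are assumed to be a SCALE-encoded XCM message prepared off-chain):
+
+```solidity
+// Hypothetical helper: forwards a pre-encoded XCM message to the relay chain.
+function forwardToRelay(bytes memory xcmCall) external {
+    bytes[] memory interior = new bytes[](0); // empty interior encodes Here
+    XCM.Multilocation memory destination = XCM.Multilocation({
+        parents: 1, // one hop up from the parachain: the relay chain
+        interior: interior
+    });
+    require(
+        XCM(XCM_ADDRESS).send_xcm(destination, xcmCall),
+        "Failed to send xcm"
+    );
+}
+```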
+
+### XCM EVM SDK
+
+Find it [here](https://github.com/AstarNetwork/EVM-XCM-Examples/tree/main).
+This repository contains examples demonstrating Solidity contracts using XCM precompiles. It's an easy way to start if you want to understand and build with EVM & XCM.
+Inside the repository:
+
+- Learn how to perform asset transfers and withdrawals, as well as native token transfers
+- Zombienet config file: spawns a local zombienet with one relay chain and two parachains (Shibuya and Shiden nodes)
+- A setup script to create an asset and register it on both networks
+- Solidity examples using the XCM precompiles
+- Integration tests (hardhat) to help understand the flow of the examples
+
+Please follow the instructions in the README to try it on your local machine.
+
+### Create Multilocation
+
+A multilocation is defined by its number of parents and the encoded junctions (interior). Precompiles use the Multilocation type that is defined as follows:
+
+```solidity
+ struct Multilocation {
+ uint8 parents;
+ bytes[] interior;
+ }
+```
+
+Note that each multilocation has a `parents` element, defined in this case by a `uint8`, and an array of bytes. `parents` refers to how many "hops" in the upward direction you have to take when going through the relay chain. Being a `uint8`, the typical values you will see are:
+
+| Origin | Destination | Parents Value |
+|:-----------:|:-----------:|:-------------:|
+| Parachain A | Parachain A | 0 |
+| Parachain A | Relay Chain | 1 |
+| Parachain A | Parachain B | 1 |
+
+The bytes array (`bytes[]`) defines the interior and its content within the multilocation. The size of the array defines the `interior` value as follows:
+
+| Array | Size | Interior Value |
+|:------------:|:----:|:--------------:|
+| [] | 0 | Here |
+| [XYZ] | 1 | X1 |
+| [XYZ, ABC] | 2 | X2 |
+| [XYZ, ... N] | N | XN |
+
+:::note
+Interior value `Here` is often used for the relay chain (either as a destination or to target the relay chain asset).
+:::
+
+If the bytes array contains data, each element's first byte (two hexadecimal digits) corresponds to the selector of that `XN` field. For example:
+
+| Byte Value | Selector | Data Type |
+|:----------:|:--------------:|-----------|
+| 0x00 | Parachain | bytes4 |
+| 0x01 | AccountId32 | bytes32 |
+| 0x02 | AccountIndex64 | u64 |
+| 0x03 | AccountKey20 | bytes20 |
+| 0x04 | PalletInstance | byte |
+| 0x05 | GeneralIndex | u128 |
+| 0x06 | GeneralKey | bytes[] |
+
+Next, depending on the selector and its data type, the following bytes correspond to the actual data being provided. Note that for `AccountId32`, `AccountIndex64`, and `AccountKey20`, the `network` field seen in the Polkadot.js Apps example is appended at the end. For example:
+
+| Selector | Data Value | Represents |
+|:--------------:|:----------------------:|:----------------------------------:|
+| Parachain | "0x00+000007E7" | Parachain ID 2023 |
+| AccountId32 | "0x01+AccountId32+00" | AccountId32, Network(Option) Null |
+| AccountId32 | "0x01+AccountId32+03" | AccountId32, Network Polkadot |
+| AccountKey20 | "0x03+AccountKey20+00" | AccountKey20, Network(Option) Null |
+| PalletInstance | "0x04+03" | Pallet Instance 3 |
+
+For example in solidity:
+
+```solidity
+// Multilocation: { parents: 1, interior: X1 [Parachain: 2000] }
+bytes[] memory interior1 = new bytes[](1);
+interior1[0] = bytes.concat(hex"00", bytes4(uint32(2000)));
+Multilocation memory destination = Multilocation({
+ parents: 1,
+ interior: interior1
+});
+
+// Multilocation: { parents: 0, interior: Here }
+bytes[] memory interior2 = new bytes[](0); // an empty interior encodes Here
+Multilocation memory destination2 = Multilocation({
+ parents: 0,
+ interior: interior2
+});
+
+// Multilocation: { parents: 1, interior: X2 [Parachain: 2000, GeneralIndex: 1] }
+bytes[] memory interior = new bytes[](2);
+interior[0] = bytes.concat(hex"00", bytes4(uint32(2000)));
+interior[1] = bytes.concat(hex"05", abi.encodePacked(uint128(1)));
+XCM.Multilocation memory asset = XCM.Multilocation({
+ parents: 1,
+ interior: interior
+});
+```
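+
+Following the selector table above, a beneficiary on another parachain identified by a 32-byte Substrate account can be encoded the same way (`accountId32` is an assumed `bytes32` value; the `0x00` suffix encodes Network: Null):
+
+```solidity
+// Multilocation: { parents: 1, interior: X2 [Parachain: 2023, AccountId32: { id: accountId32, network: null }] }
+bytes[] memory interiorAcc = new bytes[](2);
+interiorAcc[0] = bytes.concat(hex"00", bytes4(uint32(2023)));
+interiorAcc[1] = bytes.concat(hex"01", accountId32, hex"00");
+Multilocation memory beneficiary = Multilocation({
+    parents: 1,
+    interior: interiorAcc
+});
+```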
+
+#### Builder Guides
+
+Three builder guides on the subject of EVM XCM are available in the Builder Guides section:
+
+- [How to create and interact with a mintable XC20 asset via Solidity smart contract](/docs/build/build-on-layer-1/builder-guides/leverage_parachains/interact_with_xc20.md)
+- [Harnessing Crust Network for NFT Minting: A Developer's Guide](/docs/build/build-on-layer-1/builder-guides/leverage_parachains/mint-nfts-crust.md)
+- [How to set up a Zombienet for XCM testing](/docs/build/build-on-layer-1/builder-guides/leverage_parachains/zombienet.md)
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xvm.md b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xvm.md
new file mode 100644
index 0000000..62b5926
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/precompiles/xvm.md
@@ -0,0 +1,48 @@
+
+# XVM
+
+The XVM Precompile provides an interface for EVM to call into XVM.
+
+XVM is designed to be a communication layer and universal execution engine on Astar Network. The idea is that it provides an abstract execution environment which different execution engines can use to seamlessly interact with one another. For example, XVM allows EVM smart contracts written in Solidity to call into WebAssembly smart contracts written in ink!, and vice versa.
+
+Please note that XVM is still in its alpha stage.
+
+# Call API
+
+```solidity
+function xvm_call(
+ bytes calldata context,
+ bytes calldata to,
+ bytes calldata input
+) external returns (bool success, bytes memory data);
+```
+
+Since the interface is abstract and extensible, and each VM treats its parameters differently, the only way to provide a future-proof API is to use byte strings. Under the hood it uses XVM Codec based on SCALE.
+
+### Input parameters
+
+- `context` is a set of data built by the caller that is specific to a particular execution environment. Depending on the VM it may contain the id of a virtual machine and its execution environment, gas limits and execution tickets, apparent value, continuation info, and other information.
+- `to` is an abstraction of an address; anything can be viewed as the destination of an XVM call
+- `input` is the SCALE-encoded input, specific to this particular call, created by the sender
+
+### Output data
+`success` is a boolean outcome flag. If `true`, the XVM call was dispatched successfully and `data` contains the data returned by the callee. If `false`, `data` contains error data. In both cases, the contents and format of `data` are specific to a particular backend. For EVM it would typically be Keccak; for Wasm it would be SCALE.
+
+Please note that this is a low-level interface that is not expected to be used directly. Instead, library authors use such an API to build idiomatic wrappers for specific execution environments.
+
+For example, [ink! XVM SDK](https://github.com/AstarNetwork/ink-xvm-sdk) uses this API to provide XVM functionality for smart contracts written in ink!:
+```rust
+ #[ink(message)]
+ pub fn claim(&mut self) -> bool {
+ let to = [0xffu8; 20];
+ let value = 424242u128;
+ self.erc20.transfer(to, value)
+ }
+```
+
+In this example, an ink! message is created that seamlessly calls into an ERC20 contract residing in the EVM. Its implementation uses `xvm_call` to dispatch the call.
+
+# Notes
+
+1. In the future, the XVM API will be extended to support asynchronous methods like `xvm_query` and `xvm_send`.
+2. Currently the API does not support nested XVM calls.
diff --git a/docs/build/build-on-layer-1/smart-contracts/EVM/quickstart-evm.md b/docs/build/build-on-layer-1/smart-contracts/EVM/quickstart-evm.md
new file mode 100644
index 0000000..d618e9d
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/EVM/quickstart-evm.md
@@ -0,0 +1,120 @@
+---
+title: Quickstart Guide
+---
+
+import Figure from '/src/components/figure'
+
+# Quickstart Guide for Astar Substrate EVM
+
+Everything required to start deploying dApps on Astar Substrate EVM (hereafter referred to as **Astar EVM**), and nothing more.
+
+## Connecting to Astar EVM Networks
+
+:::info
+Although the free endpoints below are intended for end users, they can still be used to interact with dApps or deploy/call smart contracts. Note, however, that
+they rate-limit API calls, so they are not suitable for high-demand applications, such as dApp UIs that scrape users' blockchain history.
+:::
+
+:::tip
+To meet the demands of production dApps developers should run their own [archive node](/docs/build/build-on-layer-1/nodes/archive-node/index.md) **or** obtain an API key from one of our [infrastructure partners](/docs/build/build-on-layer-1/integrations/node-providers/index.md).
+:::
+
+
+
+
+| | Public endpoint Astar |
+| --- | --- |
+| Network | Astar |
+| Parent chain | Polkadot |
+| ParachainID | 2006 |
+| HTTPS | Astar Team: https://evm.astar.network |
+| | Alchemy: Get started [here](https://www.alchemy.com/astar) |
+| | BlastAPI: https://astar.public.blastapi.io |
+| | Dwellir: https://astar-rpc.dwellir.com |
+| | OnFinality: https://astar.api.onfinality.io/public |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| | Automata 1RPC: https://1rpc.io/astr, get started [here](https://www.1rpc.io) |
+| Websocket | Astar Team: wss://rpc.astar.network |
+| | Alchemy: Get started [here](https://www.alchemy.com/astar) |
+| | BlastAPI: wss://astar.public.blastapi.io |
+| | Dwellir: wss://astar-rpc.dwellir.com |
+| | OnFinality: wss://astar.api.onfinality.io/public-ws |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| | Automata 1RPC: wss://1rpc.io/astr, get started [here](https://www.1rpc.io) |
+| chainID | 592 |
+| Symbol | ASTR |
+
+
+
+
+
+| | Public endpoint Shiden |
+| --- | --- |
+| Network | Shiden |
+| Parent chain | Kusama |
+| ParachainID | 2007 |
+| HTTPS | Astar Team: https://evm.shiden.astar.network |
+| | BlastAPI: https://shiden.public.blastapi.io |
+| | Dwellir: https://shiden-rpc.dwellir.com |
+| | OnFinality: https://shiden.api.onfinality.io/public |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| Websocket | Astar Team: wss://rpc.shiden.astar.network |
+| | BlastAPI: wss://shiden.public.blastapi.io |
+| | Dwellir: wss://shiden-rpc.dwellir.com |
+| | OnFinality: wss://shiden.api.onfinality.io/public-ws |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| chainID | 336 |
+| Symbol | SDN |
+
+
+
+
+
+| | Public endpoint Shibuya |
+| --- | --- |
+| Network | Shibuya (parachain testnet) |
+| Parent chain | Tokyo relay chain (hosted by Astar Team) |
+| ParachainID | 1000 |
+| HTTPS | Astar Team: https://evm.shibuya.astar.network (only EVM/Ethereum RPC available) |
+| | BlastAPI: https://shibuya.public.blastapi.io |
+| | Dwellir: https://shibuya-rpc.dwellir.com |
+| Websocket | Astar Team: wss://rpc.shibuya.astar.network |
+| | BlastAPI: wss://shibuya.public.blastapi.io |
+| | Dwellir: wss://shibuya-rpc.dwellir.com |
+| chainID | 81 |
+| Symbol | SBY |
+
+
+
+
+
+## Obtaining tokens from the faucet
+
+[INSERT FAUCET INSTRUCTIONS]
+
+## Block Explorer
+
+[INSERT BLOCK EXPLORER]
+
+## Deploying Smart Contracts
+
+The development experience on Astar EVM is seamless and nearly identical to the Ethereum Virtual Machine. Developers can use existing code and tools on Astar EVM and users benefit from high transaction throughput and low fees. Read more about deploying smart contracts on Astar EVM [here.](/docs/build/build-on-layer-1/smart-contracts/EVM/index.md)
+
+## Metamask setup for Shibuya testnet
+To add Shibuya testnet to MetaMask, use the link at the bottom of the [block explorer](https://zkatana.blockscout.com/), or fill in the following details manually:
+
+
+
+## Astar EVM Support for Developers
+
+Developers requiring support can join the [Astar Discord server](https://discord.gg/astarnetwork).
+
+
+
+1. Join the **Astar Discord** server [here](https://discord.gg/astarnetwork).
+2. Accept the invite.
+3. Take the **Developer** role under **#roles**.
+4. Navigate to the **Builder/#-astar-polkadot** channel.
+
+
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/_category_.json b/docs/build/build-on-layer-1/smart-contracts/_category_.json
new file mode 100644
index 0000000..669e062
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Smart Contracts",
+ "position": 1
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/index.md b/docs/build/build-on-layer-1/smart-contracts/index.md
new file mode 100644
index 0000000..2dd1d50
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/index.md
@@ -0,0 +1,22 @@
+---
+title: Smart Contracts
+---
+
+# Overview
+
+This section contains all the information required to start building, testing, and deploying Substrate-native or EVM-based smart contracts on Astar's Substrate-based networks. Substrate is a modular blockchain framework using pallets to support granular features, such as execution environments for both Wasm and EVM smart contracts.
+
+## Wasm (ink!)
+
+Wasm smart contracts are powered by the `pallet-contracts` pallet for Substrate. For information about how to build and deploy ink! and Rust-based smart contracts on the Substrate-native virtual machine, otherwise known as the Wasm VM or **Astar Substrate Native Network**, see the Wasm section.
+
+## EVM
+
+For information about how to build and deploy Solidity-based smart contracts on the Frontier-based `EVM` pallet for Substrate, otherwise known as the **Astar Substrate EVM Network**, see the EVM section.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/_category_.json
new file mode 100644
index 0000000..bf10b78
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Wasm Smart Contracts",
+ "position": 3
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/ask_contracts.md b/docs/build/build-on-layer-1/smart-contracts/wasm/ask_contracts.md
new file mode 100644
index 0000000..702667a
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/ask_contracts.md
@@ -0,0 +1,268 @@
+---
+sidebar_position: 5
+---
+
+# Ask! Smart Contracts
+
+:::caution
+
+Ask! eDSL has many [limitations and issues](https://github.com/ask-lang/ask/issues/)
+which are actively being worked on, but at this moment it is not recommended
+for **PRODUCTION** environments. Please consider [ink!](dsls#ink) if you are building a contract for production.
+
+:::
+
+This guide will help you set up your local environment and deploy a simple Ask! contract on our testnet, Shibuya.
+
+---
+
+## What will we do?
+
+We will set up the local environment for developing an Ask! smart contract and deploy it to the Shibuya testnet.
+
+## What is Ask!?
+
+Ask! is a framework for AssemblyScript developers to write Wasm smart contracts for `pallet-contracts`, otherwise known as the Wasm Virtual Machine. Its syntax is similar to TypeScript. The [current project](https://polkadot.polkassembly.io/post/949) is funded by the Polkadot treasury and is still under active development.
+
+---
+
+## Prerequisites
+
+This tutorial targets developers who are new to Ask! and AssemblyScript.
+Prior knowledge of setting up a TypeScript/JavaScript project is helpful but not required.
+
+## Setup Environment
+
+#### Install yarn package manager.
+
+We will be using the `yarn` package manager to manage our Ask! project. For installation instructions, [read here](https://classic.yarnpkg.com/lang/en/docs/install).
+
+```
+npm install --global yarn
+```
+
+#### Clone the `ask-template` repo
+
+Simply clone the template provided by the Ask! team - `ask-template`.
+Execute the commands below to clone the repository and cd into it.
+
+```bash
+git clone https://github.com/ask-lang/ask-template.git
+cd ask-template
+```
+
+After executing the above commands, you will have the following project structure:
+
+```
+ask-template
+├── asconfig.json (assemblyscript config)
+├── askconfig.json (ask-lang config)
+├── build (build targets, configurable, see asconfig.json and askconfig.json)
+│ └── metadata.json (Ask! contract metadata)
+├── flipper.ts (Ask! contract code)
+├── index.d.ts (typescript definition file, used for syntax and code hinting)
+├── LICENSE
+├── node_modules
+├── package.json (npm package config)
+├── README.md
+├── tsconfig.json (typescript config)
+└── yarn.lock
+```
+
+The template sets you up with a simple Ask! contract in `flipper.ts`. You are good to go now!
+
+## Flipper Contract
+
+### `flipper.ts` file
+
+Below is the content of `flipper.ts` file. It contains a very basic flipper contract which has only two contract methods, `flip()`
+and `get()`.
+
+```ts
+/* eslint-disable @typescript-eslint/no-inferrable-types */
+import { env, Pack } from "ask-lang";
+
+@event({ id: 1 })
+export class FlipEvent {
+ flag: bool;
+
+ constructor(flag: bool) {
+ this.flag = flag;
+ }
+}
+
+@spreadLayout
+@packedLayout
+export class Flipper {
+ flag: bool;
+ constructor(flag: bool = false) {
+ this.flag = flag;
+ }
+}
+
+@contract
+export class Contract {
+ _data: Pack<Flipper>;
+
+ constructor() {
+ this._data = instantiate<Pack<Flipper>>(new Flipper(false));
+ }
+
+ get data(): Flipper {
+ return this._data.unwrap();
+ }
+
+ set data(data: Flipper) {
+ this._data = new Pack(data);
+ }
+
+ @constructor()
+ default(flag: bool): void {
+ this.data.flag = flag;
+ }
+
+ @message({ mutates: true })
+ flip(): void {
+ this.data.flag = !this.data.flag;
+ let event = new FlipEvent(this.data.flag);
+ // @ts-ignore
+ env().emitEvent(event);
+ }
+
+ @message()
+ get(): bool {
+ return this.data.flag;
+ }
+}
+```
+
+### Contract Structure
+
+```ts
+/*
+ * @event() is used to define an Event that can be emitted using env().emitEvent().
+ */
+@event({ id: 1 })
+export class FlipEvent {}
+
+/*
+ * This is the smart contract storage
+ */
+@spreadLayout
+@packedLayout
+export class Flipper {}
+
+/*
+ * @contract is used to declare a smart contract; it contains the functional logic.
+ */
+@contract
+export class Contract {}
+```
+
+#### Storage
+
+```ts
+@spreadLayout
+@packedLayout
+export class Flipper {
+ flag: bool;
+ constructor(flag: bool = false) {
+ this.flag = flag;
+ }
+}
+```
+
+The `Flipper` class is our contract's storage. The `@spreadLayout` and `@packedLayout` decorators describe how it will
+be stored internally; for more details, see [here](https://use.ink/3.x/datastructures/spread-storage-layout).
+
+#### Callable functions
+
+```ts
+@contract
+export class Contract {
+ @constructor()
+ default(flag: bool): void {}
+
+ @message({ mutates: true })
+ flip(): void {}
+
+ @message()
+ get(): bool {}
+}
+```
+
+- `@constructor` - This is used for bootstrapping the initial contract state into storage when the contract is deployed for the first time.
+
+- `@message()` - This marks a function as publicly dispatchable, meaning that it is exposed in the contract interface to the outside world.
+
+#### Events
+
+Events in Ask! are simple classes annotated with the `@event()` decorator, with an id specified.
+
+Note: no two events can have the same id.
+
+```ts
+@event({ id: 1 })
+export class FlipEvent {
+ flag: bool;
+
+ constructor(flag: bool) {
+ this.flag = flag;
+ }
+}
+```
+
+`env().emitEvent()` is used to emit events in Ask!, as in the `flip()` method of the contract.
+
+```ts
+let event = new FlipEvent(this.data.flag);
+// @ts-ignore
+env().emitEvent(event);
+```
+
+## Build
+
+Run the command below, which will build the template contract.
+```bash
+# Install dependencies and Build the template contract
+yarn && yarn build flipper.ts
+```
+
+The above command will generate the Wasm code and metadata file for the contract in `flipper.optimized.wasm`, and `metadata.json`, respectively.
+
+```
+ask-template
+├── asconfig.json (assemblyscript config)
+├── askconfig.json (ask-lang config)
+├── build
+│   ├── metadata.json (generated by Ask!, configurable via `askconfig.json`)
+│   ├── flipper.optimized.wasm (generated by AssemblyScript, configurable via `asconfig.json`)
+│   └── flipper.wat (generated by AssemblyScript, configurable via `asconfig.json`)
+```
+
+## Deploy
+
+Now we will deploy this smart contract on our testnet.
+
+We will access [polkadot.js](https://polkadot.js.org/apps/) and deploy the smart contract. Select the Shibuya testnet, pick `metadata.json` for the “json for either ABI or .contract bundle” section, and pick `flipper.optimized.wasm` for the “compiled contract WASM” section.
+
+![09](img/09a.png)
+
+![10](img/10.png)
+
+![11](img/11.png)
+
+![12](img/12.png)
+
+After following the steps above, we can confirm that the contract is deployed on the Shibuya testnet.
+
+
+That’s a wrap!
+If you have any questions, please feel free to ask us in our [official discord channel](https://discord.gg/GhTvWxsF6S).
+
+---
+
+## Reference
+
+- [Official documentation for ask!](https://github.com/ask-lang/ask)
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/basic-contract.md b/docs/build/build-on-layer-1/smart-contracts/wasm/basic-contract.md
new file mode 100644
index 0000000..735888d
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/basic-contract.md
@@ -0,0 +1,78 @@
+---
+sidebar_position: 8
+---
+
+# Basic Contract
+
+Each contract should be in its **own crate**. In a folder, create two files:
+
+- Cargo.toml: The manifest.
+- lib.rs: The default library file.
+
+Inside the Cargo.toml you will need to specify parameters in the `[package]`, `[dependencies]`, `[lib]` type, and `[features]` sections:
+
+```toml
+[package]
+name = "my_contract"
+version = "0.1.0"
+authors = ["Your Name "]
+edition = "2021"
+
+[dependencies]
+ink = { version = "4.0.0", default-features = false}
+ink_metadata = { version = "4.0.0", features = ["derive"], optional = true }
+
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
+scale-info = { version = "2.3", default-features = false, features = ["derive"], optional = true }
+
+[lib]
+path = "lib.rs"
+
+[features]
+default = ["std"]
+std = [
+ "ink/std",
+ "scale/std",
+ "scale-info/std"
+]
+ink-as-dependency = []
+```
+
+In the library file - ink! has a few minimum requirements:
+
+- `#![cfg_attr(not(feature = "std"), no_std)]` at the beginning of each contract file.
+- a module with `#[ink::contract]`.
+- a (storage) struct - that can be empty - with `#[ink(storage)]`.
+- at least one constructor with `#[ink(constructor)]`.
+- at least one fn with `#[ink(message)]`.
+
+In the lib.rs the minimum implementation is:
+
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+
+#[ink::contract]
+mod my_contract {
+
+ #[ink(storage)]
+ pub struct MyContract {}
+
+ impl MyContract {
+ #[ink(constructor)]
+ pub fn new() -> Self {
+ Self {}
+ }
+
+ #[ink(message)]
+ pub fn do_something(&self) {
+ ()
+ }
+ }
+}
+```
+
+The [flipper](https://github.com/paritytech/ink/blob/master/examples/flipper/lib.rs) smart contract is the most basic example provided by the ink! team.
+
+# Using Swanky
+
+You can also use the Swanky Suite to fast-track your development efforts when setting up a project. See this section to learn how to [bootstrap a smart contract using Swanky](/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/cli.md#bootstrap-a-new-project).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/community.md b/docs/build/build-on-layer-1/smart-contracts/wasm/community.md
new file mode 100644
index 0000000..997fbf4
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/community.md
@@ -0,0 +1,15 @@
+---
+sidebar_position: 14
+---
+
+# Community
+
+## Didn't find what you're looking for?
+
+### Polkadot and Substrate Stack Exchange
+
+Many answers can be found in the [Polkadot and Substrate Stack Exchange](https://substrate.stackexchange.com/). Feel free to ask a question if you have one, or participate in ongoing discussions.
+
+### Astar Discord
+
+We also have a Discord server where you can ask our developers questions directly [here](https://discord.gg/AstarNetwork) and you can also ask [Astari](https://medium.com/astar-network/your-personal-guide-to-astar-network-is-here-af4344ee8d73) in the [Astari AI channel.](https://discord.com/channels/644182966574252073/1097773162248278077)
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/_category_.json
new file mode 100644
index 0000000..04752f1
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Contract Environment",
+ "position": 10
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/chain-extension/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/chain-extension/_category_.json
new file mode 100644
index 0000000..87af07b
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/chain-extension/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Chain Extensions",
+ "position": 1
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/chain-extension/assets-ce.md b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/chain-extension/assets-ce.md
new file mode 100644
index 0000000..c3d9541
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/chain-extension/assets-ce.md
@@ -0,0 +1,93 @@
+---
+sidebar_position: 2
+---
+
+# Pallet-Assets Chain Extension
+
+### API
+This chain extension allows contracts to call pallet-assets functions.
+It includes extrinsics:
+```rust
+fn create()
+fn mint()
+fn burn()
+fn transfer()
+fn approve_transfer()
+fn cancel_approval()
+fn transfer_approved()
+fn set_metadata()
+fn transfer_ownership()
+```
+
+And these queries:
+```rust
+fn balance_of()
+fn total_supply()
+fn allowance()
+fn metadata_name()
+fn metadata_symbol()
+fn metadata_decimals()
+```
+
+Some extrinsics are NOT part of the chain extension because they have no or limited usage for smart contracts.
+```rust
+fn clear_metadata()
+fn start_destroy()
+fn destroy_accounts()
+fn destroy_approvals()
+fn finish_destroy()
+fn freeze()
+fn freeze_asset()
+fn refund()
+fn set_team()
+fn thaw()
+fn touch()
+fn transfer_keep_alive()
+```
+
+#### Storage deposit
+
+Creating an asset within a smart contract with `fn create()` will reserve an `approvalDeposit` amount of the contract's balance. Either transfer funds to the contract beforehand, or make the calling function payable.
+The same applies to `fn set_metadata()`, although there the deposit depends on the number of bytes stored on-chain.
+
+#### Destroy an asset
+
+As the destroy functions are not part of the chain extension, implement `fn transfer_ownership()` in your contract and call it to set an external account as the asset owner. That account can then call the _destroy_ extrinsics directly.
+
+#### Only calls on behalf of the contract are implemented
+
+The chain extension allows the origin of the call to be set to either the contract or the caller. The caller in this case is the address that calls the contract (a user, or another contract in a cross-contract call). Allowing calls on behalf of the caller would mean users should only interact with verified contracts, as calling an unverified contract would put user funds at risk.
+Because ink! contract verification is not yet mature, only calls on behalf of the contract are implemented. The API still uses an `Origin` enum with `Address` (the address of the contract) and `Caller` (the address of the caller) variants, but for now only `Origin::Address` is supported, and `Origin::Caller` returns an `OriginCannotBeCaller` error. This way, calls on behalf of the `Caller` can be enabled in the future without any change to the API.
+
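The origin restriction above can be pictured with a small plain-Rust model. This is a hypothetical sketch, not the actual chain-extension code: the enum and error names mirror the ones described in the text, but the dispatch function is invented for illustration.

```rust
// Toy model of the chain extension's origin handling (not the real code).
#[derive(Debug, PartialEq)]
enum Origin {
    Address, // origin = the contract itself (supported today)
    Caller,  // origin = whoever called the contract (reserved for the future)
}

#[derive(Debug, PartialEq)]
enum Error {
    OriginCannotBeCaller,
}

// A call made with `Origin::Caller` is rejected for now; once contract
// verification matures, this branch can be enabled with no API change.
fn dispatch(origin: Origin) -> Result<(), Error> {
    match origin {
        Origin::Address => Ok(()),
        Origin::Caller => Err(Error::OriginCannotBeCaller),
    }
}
```

Keeping the enum in the API even though only one variant works today is what makes the future activation backwards-compatible.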
+#### Usage in your contract
+
+:::note
+Your contract must use ink! 4.0.0 or above.
+:::
+
+
+1. Add `assets_extension` to your `Cargo.toml` dependencies and to the `std` feature list:
+```toml
+assets_extension = { git = "https://github.com/swanky-dapps/chain-extension-contracts", default-features = false }
+
+[features]
+default = ["std"]
+std = [
+ "ink_metadata/std",
+ "ink/std",
+ "scale/std",
+ "scale-info/std",
+ "assets_extension/std",
+]
+```
+
+2. Add a `use` statement in your contract module:
+```rust
+use assets_extension::*;
+
+```
+
+3. Use struct functions directly in your contract
+```rust
+AssetsExtension::create(Origin::Address, asset_id, contract, min_balance)
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/chain-extension/chain_extensions.md b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/chain-extension/chain_extensions.md
new file mode 100644
index 0000000..a7287a8
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/chain-extension/chain_extensions.md
@@ -0,0 +1,57 @@
+---
+sidebar_position: 1
+---
+
+# Chain Extensions
+
+A chain extension is a way of extending the contracts API so that contracts can interact with runtime pallets. By default, contracts can only make cross-contract calls within their own environment (pallet-contracts). A chain extension exposes custom pallet functions that contracts can call.
+
+![ink-ce](../../img/ink-ce.png)
+
+### What chain extensions are available?
+
+#### XVM
+
+This chain extension enables usage of XVM in your contracts. More info in the [ink! XVM SDK repo](https://github.com/AstarNetwork/ink-xvm-sdk).
+
+#### DApp Staking
+
+This chain extension adds calls to `pallet_dapps_staking` so that you can use dApp Staking in your contracts. More info in the [chain-extensions contracts repo](https://github.com/swanky-dapps/chain-extension-contracts).
+
+#### Assets
+
+This chain extension adds calls to `pallet_assets` so that you can use Assets in your contracts. More info in the [chain-extensions contracts repo](https://github.com/swanky-dapps/chain-extension-contracts).
+
+### Availability in networks
+
+
+| Chain extension | Swanky | Shibuya | Shiden | Astar |
+|---|---|---|---|---|
+| XVM | :white_large_square: | :white_check_mark: | :white_large_square: | :white_large_square: |
+| Dapp Staking | :white_check_mark:| :white_check_mark: | :white_large_square: | :white_large_square: |
+| Assets | :white_check_mark: | :white_check_mark: | :white_large_square: | :white_large_square: |
+
+
+### Implementations
+
+There are two implementations: one in the runtime and one on the ink! side.
+
+#### Runtime
+
+The implementation of the chain extension on the runtime side is available in the [Astar repository](https://github.com/AstarNetwork/Astar/), under the `chain-extensions` folder.
+
+#### ink! implementation
+
+On the contract side, the implementation uses [ChainExtensionMethod](https://github.com/paritytech/ink/blob/db7a906522a7e97ed5057b193df1253b33e99ee4/crates/env/src/chain_extension.rs#L77) with a custom environment
+(so it can be used with other libraries that rely on a custom environment, such as OpenBrush). It is implemented as a crate that you can import into your contract, and can be found in the [chain-extension contracts repository](https://github.com/swanky-dapps/chain-extension-contracts).
+
+#### Contracts examples
+
+- [PSP22 pallet-assets wrapper](https://github.com/swanky-dapps/chain-extension-contracts/tree/main/contracts/psp22_pallet_wrapper)
+- [Asset Chain Extension](https://github.com/swanky-dapps/chain-extension-contracts/tree/main/examples/assets)
+- [dApp Staking](https://github.com/swanky-dapps/chain-extension-contracts/tree/main/examples/dapps-staking)
+
+#### Video tutorials
+
+- dApp Staking Chain Extension on ink! Smart Contracts by @AstarNetwork on [YouTube](https://www.youtube.com/watch?v=-T-HKy_vFCo)
+- Build a Scheduler Chain Extension by @Parity on [YouTube](https://www.youtube.com/watch?v=yykPQF0tkqk)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/explorers.md b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/explorers.md
new file mode 100644
index 0000000..58a6e40
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/explorers.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 1
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Block Explorers
+## Overview
+
+Block explorers are the search engines of a blockchain. They give developers and users the ability to look up information such as balances, contracts, tokens, and transactions, and to access API services.
+
+## Native Astar Explorers
+
+
+
+
+Subscan is the most widely used explorer in the Polkadot ecosystem. Subscan has indexed Astar Network in its entirety, and supports both Substrate and Ethereum APIs. BlockScout is the best explorer for developers who are building on Astar EVM, as it has all the features of EtherScan.
+
+Sirato is a contract explorer for ink! smart contracts. Sirato provides a contract verification service that enables users to decode information about contract code and instances deployed using the Contracts Pallet. Users upload source code and metadata, and the service verifies that they match what is stored on-chain.
+
+Under certain circumstances, the Polkadot.js apps portal may also be used to explore blocks.
+
+
+
+
+Visit the Subscan [tutorial page](/docs/build/build-on-layer-1/integrations/indexers/subscan.md) for more information.
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/index.md b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/index.md
new file mode 100644
index 0000000..579bf5d
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/index.md
@@ -0,0 +1,8 @@
+# Contract Environment
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/verification.md b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/verification.md
new file mode 100644
index 0000000..574b764
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/verification.md
@@ -0,0 +1,7 @@
+---
+sidebar_position: 2
+---
+
+# Contract Verification
+
+Contract verification will be supported after the introduction of ink! v4.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/vrf.md b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/vrf.md
new file mode 100644
index 0000000..0a22c75
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/contract_environment/vrf.md
@@ -0,0 +1,14 @@
+---
+sidebar_position: 2
+---
+
+# Verifiable Randomness Function
+
+Unfortunately, at the moment there is no way to generate randomness using ink!.
+The available options are:
+
+* Creating a VRF oracle contract that will generate randomness.
+ * DIA is working on one for Astar.
+* On the runtime level, adding a chain extension to the RandomnessCollectiveFlip pallet so it's accessible within ink! contracts.
+ * The Astar team is working on this.
+* Adding a function to `ink_env` that retrieves the current and previous block hashes, on which randomness can be based.
\ No newline at end of file
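The last option can be sketched in plain Rust. This is a hypothetical illustration only: block hashes are influenced by block producers, so values derived this way are NOT secure randomness, and the `block_hash` input is an assumed stand-in for a future `ink_env` accessor.

```rust
// Toy derivation of a pseudo-random value from a block hash (NOT secure).
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn weak_random(block_hash: &[u8; 32], subject: &[u8]) -> u64 {
    // Mix the block hash with a caller-chosen subject so different consumers
    // of the same block derive different values.
    let mut hasher = DefaultHasher::new();
    block_hash.hash(&mut hasher);
    subject.hash(&mut hasher);
    hasher.finish()
}
```

Anything requiring unpredictable randomness (lotteries, fair draws) should wait for a proper VRF rather than rely on this pattern.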
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/dsls.md b/docs/build/build-on-layer-1/smart-contracts/wasm/dsls.md
new file mode 100644
index 0000000..58aa36d
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/dsls.md
@@ -0,0 +1,27 @@
+---
+sidebar_position: 3
+---
+
+# DSLs
+
+Embedded Domain-Specific Languages (eDSLs) are tools that improve the blockchain and smart contract development experience by making code easier to write and understand. eDSLs are programming languages or libraries designed to be used within the context of another programming language, providing a more expressive and intuitive way to write smart contracts. In other words, an eDSL allows developers to write smart contracts at a higher level, which makes the code easier to read and interpret, and less prone to error.
+
+For example, instead of using pure Rust to write Wasm smart contracts or blockchain logic, a Rust eDSL such as Substrate can be used, as an eDSL specifically targeting development within those domains. Substrate allows developers to express the intent of their code in a more natural way, making it easier to understand and maintain.
+
+eDSLs can also provide features such as error checking, debugging, and testing, which further improve the development experience within their specific domains.
+
+## `Ink!`
+
+Ink! is an eDSL written in Rust and developed by Parity. It specifically targets Substrate’s `pallet-contracts` [API](https://docs.rs/pallet-contracts/latest/pallet_contracts/api_doc/trait.Current.html).
+
+Ink! offers Rust [procedural macros](https://doc.rust-lang.org/reference/procedural-macros.html#procedural-macro-hygiene) and a list of crates to help facilitate development, and save time by avoiding boilerplate code.
+
+Check out the official documentation [here](https://ink.substrate.io/why-rust-for-smart-contracts) and `Ink!` GitHub repo [here](https://github.com/paritytech/ink).
+
+## `Ask!`
+
+Ask! is a framework for AssemblyScript developers that allows them to write Wasm smart contracts for `pallet-contracts`. Its syntax is similar to TypeScript.
+
+This project is funded by the Polkadot treasury - link [here](https://polkadot.polkassembly.io/post/949), and is still under development.
+
+Check out the official GitHub [here](https://github.com/ask-lang/ask).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/_category_.json
new file mode 100644
index 0000000..c92991b
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Tutorials",
+ "position": 9
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/_category_.json
new file mode 100644
index 0000000..c682676
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Factory Contract",
+ "position": 4
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/create-pair.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/create-pair.md
new file mode 100644
index 0000000..cea005b
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/create-pair.md
@@ -0,0 +1,362 @@
+---
+sidebar_position: 2
+---
+
+# Create Pair
+
+If you are starting the tutorial from here, please check out this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/storage-end) and open it in your IDE.
+
+## 1. Add Create Pair to Factory Trait
+
+We will implement the [createPair](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Factory.sol#L23) function of the Factory contract.
+In the *./logics/traits/factory.rs* file, add the **create_pair** function to the Factory trait, as well as the internal child function **_instantiate_pair**, which will need to be implemented in the contract crate.
+The reason we need an internal **_instantiate_pair** function here is that the instantiate builder is not part of `#[openbrush::wrapper]`, so we will need to use the one from ink! by importing the Pair contract as an `ink-as-dependency`.
+The **create_pair** message function returns the address of the instantiated Pair contract.
+The function that emits the create_pair event will also have to be implemented in the contract:
+```rust
+pub trait Factory {
+ ...
+ #[ink(message)]
+ fn create_pair(
+ &mut self,
+ token_a: AccountId,
+ token_b: AccountId,
+ ) -> Result<AccountId, FactoryError>;
+
+ fn _instantiate_pair(&mut self, salt_bytes: &[u8]) -> Result<AccountId, FactoryError>;
+ ...
+ fn _emit_create_pair_event(
+ &self,
+ _token_0: AccountId,
+ _token_1: AccountId,
+ _pair: AccountId,
+ _pair_len: u64,
+ );
+}
+```
+
+## 2. Implement Create Pair
+
+In the *./logics/impls/factory/factory.rs* file, let's implement the **create_pair** function body:
+#### 1. Check That the Addresses Are Not Identical
+
+AccountId derives the `Eq` trait, so comparison operators can be used:
+```rust
+impl<T: Storage<data::Data>> Factory for T {
+ ...
+ fn create_pair(
+ &mut self,
+ token_a: AccountId,
+ token_b: AccountId,
+ ) -> Result<AccountId, FactoryError> {
+ if token_a == token_b {
+ return Err(FactoryError::IdenticalAddresses)
+ }
+ }
+}
+```
+
+#### 2. Order the Tuple
+```rust
+let token_pair = if token_a < token_b {
+ (token_a, token_b)
+} else {
+ (token_b, token_a)
+};
+```
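The ordering step can be checked with a standalone plain-Rust sketch. The `AccountId` alias below is a hypothetical stand-in for ink!'s 32-byte `AccountId`; the point is that canonical ordering makes `(token_a, token_b)` and `(token_b, token_a)` resolve to the same pair.

```rust
// Toy stand-in for ink!'s AccountId; byte arrays compare lexicographically.
type AccountId = [u8; 32];

// Canonically order a token pair so each unordered pair has one representation.
fn sort_tokens(token_a: AccountId, token_b: AccountId) -> (AccountId, AccountId) {
    if token_a < token_b {
        (token_a, token_b)
    } else {
        (token_b, token_a)
    }
}
```

This is why the Factory never ends up with two distinct Pair contracts for the same two tokens.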
+
+#### 3. Check That the First Tuple Address Is Not the ZERO_ADDRESS
+```rust
+if token_pair.0 == ZERO_ADDRESS.into() {
+ return Err(FactoryError::ZeroAddress)
+}
+```
+
+#### 4. Instantiate the Pair Contract
+The [generate_address](https://github.com/paritytech/substrate/blob/982f5998c59bd2bd455808345ae1bd2b1767f353/frame/contracts/src/lib.rs#L187) function in `pallet_contracts` is akin to the formula of ETH's CREATE2 opcode. There is no CREATE equivalent because CREATE2 is strictly more powerful. Formula: `hash(deploying_address ++ code_hash ++ salt)`
+A contract instantiated this way derives its address from the hash of the concatenation of:
+- the address of the deployer
+- the code_hash
+- the salt (in bytes)
+
+As the `code_hash` and `deployer` (the Factory contract address) values are unchanged between calls, the `salt_bytes` value must be unique for each call. Since the Factory instantiates a unique Pair contract for each pair, we hash over the token pair to produce a unique salt:
+```rust
+let salt = Self::env().hash_encoded::<Blake2x256, _>(&token_pair);
+let pair_contract = self._instantiate_pair(salt.as_ref())?;
+```
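To see why hashing the token pair yields a distinct salt per pair, here is a self-contained sketch using std's `DefaultHasher`. This is an illustration only: the real code uses ink!'s `Blake2x256` via `hash_encoded`, and the 32-byte arrays stand in for `AccountId`s.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a salt from the canonically ordered token pair: same pair -> same
// salt, different pair -> (practically) different salt, hence a different
// address for each instantiated Pair contract.
fn pair_salt(token_pair: &([u8; 32], [u8; 32])) -> u64 {
    let mut hasher = DefaultHasher::new();
    token_pair.hash(&mut hasher);
    hasher.finish()
}
```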
+
+#### 5. Initialize the Pair
+```rust
+PairRef::initialize(&pair_contract, token_pair.0, token_pair.1)?;
+```
+
+#### 6. Create Storage Mappings in Both Directions and Push the Pair Address to `all_pairs`
+```rust
+self.data::<data::Data>()
+ .get_pair
+ .insert(&(token_pair.0, token_pair.1), &pair_contract);
+self.data::<data::Data>()
+ .get_pair
+ .insert(&(token_pair.1, token_pair.0), &pair_contract);
+self.data::<data::Data>().all_pairs.push(pair_contract);
+```
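The two-directional mapping can be modeled in plain Rust with a `HashMap` standing in for the OpenBrush `Mapping` (a toy sketch with `u8` token ids, not the real storage types):

```rust
use std::collections::HashMap;

// Toy stand-in for AccountId; the real keys are 32-byte account ids.
type AccountId = u8;

// Insert both orderings so lookups succeed regardless of argument order.
fn register_pair(
    get_pair: &mut HashMap<(AccountId, AccountId), AccountId>,
    token_a: AccountId,
    token_b: AccountId,
    pair: AccountId,
) {
    get_pair.insert((token_a, token_b), pair);
    get_pair.insert((token_b, token_a), pair);
}
```

Storing both directions costs one extra entry per pair but keeps `get_pair(a, b)` a single O(1) lookup with no ordering logic on the read path.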
+
+#### 7. Emit a `create_pair` Event
+```rust
+self._emit_create_pair_event(
+ token_pair.0,
+ token_pair.1,
+ pair_contract,
+ self.all_pair_length(),
+);
+```
+
+#### 8. Return the Address of the Instantiated Contract
+```rust
+Ok(pair_contract)
+```
+
+The entire function should look like this:
+```rust
+ fn create_pair(
+ &mut self,
+ token_a: AccountId,
+ token_b: AccountId,
+) -> Result<AccountId, FactoryError> {
+ if token_a == token_b {
+ return Err(FactoryError::IdenticalAddresses)
+ }
+ let token_pair = if token_a < token_b {
+ (token_a, token_b)
+ } else {
+ (token_b, token_a)
+ };
+ if token_pair.0 == ZERO_ADDRESS.into() {
+ return Err(FactoryError::ZeroAddress)
+ }
+
+ let salt = Self::env().hash_encoded::<Blake2x256, _>(&token_pair);
+ let pair_contract = self._instantiate_pair(salt.as_ref())?;
+
+ PairRef::initialize(&pair_contract, token_pair.0, token_pair.1)?;
+
+ self.data::<data::Data>()
+ .get_pair
+ .insert(&(token_pair.0, token_pair.1), &pair_contract);
+ self.data::<data::Data>()
+ .get_pair
+ .insert(&(token_pair.1, token_pair.0), &pair_contract);
+ self.data::<data::Data>().all_pairs.push(pair_contract);
+
+ self._emit_create_pair_event(
+ token_pair.0,
+ token_pair.1,
+ pair_contract,
+ self.all_pair_length(),
+ );
+
+ Ok(pair_contract)
+}
+```
+
+Implement an **_instantiate_pair** function with the `unimplemented!()` macro in the body, to ensure it will be overridden (the `default` keyword should be added):
+```rust
+default fn _instantiate_pair(&mut self, _salt_bytes: &[u8]) -> Result<AccountId, FactoryError> {
+ // needs to be overridden in contract
+ unimplemented!()
+}
+```
+
+Add import statements:
+```rust
+use crate::traits::pair::PairRef;
+pub use crate::{
+ impls::factory::*,
+ traits::factory::*,
+};
+use ink::env::hash::Blake2x256;
+use openbrush::traits::{
+ AccountId,
+ Storage,
+ ZERO_ADDRESS,
+};
+...
+```
+
+## 3. Implement Event
+
+In the *./logics/impls/factory/factory.rs* file, add an empty implementation of **_emit_create_pair_event**:
+```rust
+default fn _emit_create_pair_event(
+ &self,
+ _token_0: AccountId,
+ _token_1: AccountId,
+ _pair: AccountId,
+ _pair_len: u64,
+) {
+}
+```
+
+Within the contracts folder, in the *./contracts/factory/lib.rs* file, add a `PairCreated` event struct and override the implementation of emit event:
+```rust
+...
+use ink::{
+ codegen::{
+ EmitEvent,
+ Env,
+ },
+ ToAccountId,
+};
+...
+#[ink(event)]
+pub struct PairCreated {
+ #[ink(topic)]
+ pub token_0: AccountId,
+ #[ink(topic)]
+ pub token_1: AccountId,
+ pub pair: AccountId,
+ pub pair_len: u64,
+}
+...
+impl Factory for FactoryContract {
+ fn _emit_create_pair_event(
+ &self,
+ token_0: AccountId,
+ token_1: AccountId,
+ pair: AccountId,
+ pair_len: u64,
+ ) {
+ EmitEvent::<FactoryContract>::emit_event(
+ self.env(),
+ PairCreated {
+ token_0,
+ token_1,
+ pair,
+ pair_len,
+ },
+ )
+ }
+}
+```
+
+## 4. Override `_instantiate_pair`
+
+As it's not possible to call a contract constructor using `#[openbrush::wrapper]`, we will need to use a contract Ref from ink!.
+To import a contract as an `ink-as-dependency`, it should be built as a library crate (`rlib`). Add this to the `Cargo.toml` of the Pair contract in the *./contracts/pair/Cargo.toml* file:
+```toml
+...
+[lib]
+name = "pair_contract"
+path = "lib.rs"
+crate-type = [
+ "cdylib",
+ "rlib"
+]
+...
+```
+
+Then import the Pair contract as an `ink-as-dependency` in the Factory contract. Add the dependency to the `Cargo.toml` of the Factory contract in the *./contracts/factory/Cargo.toml* file:
+```toml
+...
+pair_contract = { path = "../pair", default-features = false, features = ["ink-as-dependency"] }
+...
+[features]
+default = ["std"]
+std = [
+"ink/std",
+"scale/std",
+"scale-info/std",
+"openbrush/std",
+"uniswap_v2/std",
+"pair_contract/std",
+]
+```
+
+In the contract crate *./contracts/factory/lib.rs* add import statements:
+```rust
+...
+use openbrush::traits::{
+ Storage,
+ ZERO_ADDRESS,
+};
+use pair_contract::pair::PairContractRef;
+```
+
+In **_instantiate_pair** function body:
+#### 1. Get pair code_hash from storage
+```rust
+...
+impl Factory for FactoryContract {
+ fn _instantiate_pair(&mut self, salt_bytes: &[u8]) -> Result<AccountId, FactoryError> {
+ let pair_hash = self.factory.pair_contract_code_hash;
+ }
+ ...
+}
+```
+
+#### 2. Instantiate Pair
+Using the [create builder](https://github.com/paritytech/ink/blob/ad4f5e579e39926704e182736af4fa945982ac2b/crates/env/src/call/create_builder.rs#L269) from ink!, we call the **new** constructor of Pair and pass no endowment (storage rent has been removed, so it is not needed). This returns a reference to the instantiated contract:
+```rust
+...
+let pair = match PairContractRef::new()
+ .endowment(0)
+ .code_hash(pair_hash)
+ .salt_bytes(&salt_bytes[..4])
+ .try_instantiate()
+{
+ Ok(Ok(res)) => Ok(res),
+ _ => Err(FactoryError::PairInstantiationFailed),
+}?;
+```
+
+#### 3. Return Pair Address
+```rust
+...
+Ok(pair.to_account_id())
+```
+
+Full function:
+```rust
+fn _instantiate_pair(&mut self, salt_bytes: &[u8]) -> Result<AccountId, FactoryError> {
+ let pair_hash = self.factory.pair_contract_code_hash;
+ let pair = match PairContractRef::new()
+ .endowment(0)
+ .code_hash(pair_hash)
+ .salt_bytes(&salt_bytes[..4])
+ .try_instantiate()
+ {
+ Ok(Ok(res)) => Ok(res),
+ _ => Err(FactoryError::PairInstantiationFailed),
+ }?;
+ Ok(pair.to_account_id())
+}
+```
+
+## 5. Implement Error Handling
+
+In the *./logics/traits/factory.rs* file, implement the `From` trait so a `PairError` can be converted into a `FactoryError`. Also add the error variants used in the create pair implementation:
+```rust
+use crate::traits::pair::PairError;
+...
+#[derive(Debug, PartialEq, Eq, scale::Encode, scale::Decode)]
+#[cfg_attr(feature = "std", derive(scale_info::TypeInfo))]
+pub enum FactoryError {
+ PairError(PairError),
+ ZeroAddress,
+ IdenticalAddresses,
+ PairInstantiationFailed,
+}
+
+impl From<PairError> for FactoryError {
+ fn from(error: PairError) -> Self {
+ FactoryError::PairError(error)
+ }
+}
+```
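The effect of this `From` implementation can be demonstrated with a self-contained plain-Rust sketch (toy enums, not the real contract types): the `?` operator converts a `PairError` into a `FactoryError` automatically, which is what lets `PairRef::initialize(...)?` compile inside `create_pair`.

```rust
#[derive(Debug, PartialEq)]
enum PairError { SomeFailure }

#[derive(Debug, PartialEq)]
enum FactoryError { PairError(PairError) }

// Wrapping conversion: `?` calls this automatically on error.
impl From<PairError> for FactoryError {
    fn from(error: PairError) -> Self {
        FactoryError::PairError(error)
    }
}

// Stand-in for PairRef::initialize, failing for demonstration.
fn initialize() -> Result<(), PairError> {
    Err(PairError::SomeFailure)
}

fn create_pair() -> Result<(), FactoryError> {
    initialize()?; // PairError is converted into FactoryError here
    Ok(())
}
```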
+
+
+And that's it! Check your Factory contract by running the following in the contract folder:
+```console
+cargo contract build
+```
+It should now look like this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/factory_create_pair_end).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/getters.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/getters.md
new file mode 100644
index 0000000..93331b7
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/getters.md
@@ -0,0 +1,312 @@
+---
+sidebar_position: 1
+---
+
+# Factory Storage and Getters
+
+If you are starting the tutorial from here, please check out this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/modifiers_end) and open it in your IDE.
+
+## 1. Factory Storage
+
+The Factory contract has [storage fields](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Factory.sol#L7) implemented in Solidity that we will need to implement in our contract(s):
+
+```solidity
+ address public feeTo;
+ address public feeToSetter;
+
+ mapping(address => mapping(address => address)) public getPair;
+ address[] public allPairs;
+```
+
+ink! reuses most Substrate primitive types. Here is a conversion table between Solidity and ink! types:
+
+| Solidity | ink! |
+|-----------------------------------------|-------------------------------------------------------------------------------------------|
+| uint256 | [U256](https://docs.rs/primitive-types/latest/primitive_types/struct.U256.html) |
+| any other uint | u128 (or lower) |
+| address | AccountId |
+| mapping(key => value) | [Mapping(key, value)](https://docs.rs/ink_storage/latest/ink_storage/struct.Mapping.html) |
+| mapping(key1 => mapping(key2 => value)) | [Mapping((key1 ,key2), value)](https://substrate.stackexchange.com/a/3993/567) |
+
+Let's create a storage struct in the *./logics/impls/factory/data.rs* file. Name the struct `Data` and add the required fields:
+```rust
+pub struct Data {
+ pub fee_to: AccountId,
+ pub fee_to_setter: AccountId,
+ pub get_pair: Mapping<(AccountId, AccountId), AccountId>,
+ pub all_pairs: Vec<AccountId>,
+}
+```
+
+The Factory contract will deploy instances of the Pair contract. In Substrate, the contract deployment process is split into [two steps](https://use.ink/getting-started/deploy-your-contract):
+1. Deploying your contract code to the blockchain (the Wasm blob will be uploaded and has a unique `code_hash`).
+2. Creating an instance of your contract (by calling a constructor).
+
+That's why the Factory Storage should save the Pair contract `code_hash` in order to instantiate it. Add a Pair `code_hash` field to the Storage:
+```rust
+ pub pair_contract_code_hash: Hash,
+```
+
+OpenBrush uses a specified storage key instead of the default one in the [openbrush::upgradeable_storage](https://github.com/727-Ventures/openbrush-contracts/blob/35aae841cd13ca4e4bc6d63be96dc27040c34064/lang/macro/src/lib.rs#L466) attribute, which implements all the [required traits](https://docs.openbrush.io/smart-contracts/upgradeable#suggestions-on-how-follow-the-rules) with the specified storage key (the storage key is a required input argument of the macro).
+To generate a unique key, OpenBrush provides the [openbrush::storage_unique_key!](https://docs.openbrush.io/smart-contracts/upgradeable#unique-storage-key) declarative macro, which derives one from the name of the struct and its file path. Let's add this to our struct and import the required items:
+```rust
+use ink::{
+ prelude::vec::Vec,
+ primitives::Hash,
+};
+use openbrush::{
+ storage::Mapping,
+ traits::{
+ AccountId,
+ ZERO_ADDRESS,
+ },
+};
+
+pub const STORAGE_KEY: u32 = openbrush::storage_unique_key!(Data);
+
+#[derive(Debug)]
+#[openbrush::upgradeable_storage(STORAGE_KEY)]
+pub struct Data {
+ pub fee_to: AccountId,
+ pub fee_to_setter: AccountId,
+ pub get_pair: Mapping<(AccountId, AccountId), AccountId>,
+ pub all_pairs: Vec<AccountId>,
+ pub pair_contract_code_hash: Hash,
+}
+
+impl Default for Data {
+ fn default() -> Self {
+ Self {
+ fee_to: ZERO_ADDRESS.into(),
+ fee_to_setter: ZERO_ADDRESS.into(),
+ get_pair: Default::default(),
+ all_pairs: Default::default(),
+ pair_contract_code_hash: Default::default(),
+ }
+ }
+}
+```
+*./logics/impls/factory/data.rs*
+
+## 2. Trait for Getters
+
+Unlike Solidity, which automatically creates getters for public storage items, in ink! you will need to add them yourself. There is already a `Factory` trait with a `fee_to` function in the file *./logics/traits/factory.rs*.
+Add all getters:
+```rust
+use openbrush::traits::AccountId;
+
+#[openbrush::wrapper]
+pub type FactoryRef = dyn Factory;
+
+#[openbrush::trait_definition]
+pub trait Factory {
+ #[ink(message)]
+ fn all_pair_length(&self) -> u64;
+
+ #[ink(message)]
+ fn set_fee_to(&mut self, fee_to: AccountId) -> Result<(), FactoryError>;
+
+ #[ink(message)]
+ fn set_fee_to_setter(&mut self, fee_to_setter: AccountId) -> Result<(), FactoryError>;
+
+ #[ink(message)]
+ fn fee_to(&self) -> AccountId;
+
+ #[ink(message)]
+ fn fee_to_setter(&self) -> AccountId;
+
+ #[ink(message)]
+ fn get_pair(&self, token_a: AccountId, token_b: AccountId) -> Option<AccountId>;
+
+ fn _emit_create_pair_event(
+ &self,
+ _token_0: AccountId,
+ _token_1: AccountId,
+ _pair: AccountId,
+ _pair_len: u64,
+ );
+}
+```
+
+The last thing to do is to add the Error enum; each contract should use its own. As errors are used in return values, we need to implement SCALE encoding & decoding for them.
+For the moment we don't need a properly defined error, so simply add an `Error` variant:
+```rust
+...
+#[derive(Debug, PartialEq, Eq, scale::Encode, scale::Decode)]
+#[cfg_attr(feature = "std", derive(scale_info::TypeInfo))]
+pub enum FactoryError {
+ Error
+}
+```
+*./logics/traits/factory.rs*
+
+## 3. Implement Getters
+
+In *./logics/impls/factory/factory.rs*, add an impl block for a generic type `T`. We bound `T` with the `Storage<data::Data>` trait so the implementation can access the Data struct:
+```rust
+pub use crate::{
+ impls::factory::*,
+ traits::factory::*,
+};
+use openbrush::{
+ traits::{
+ AccountId,
+ Storage
+ },
+};
+
+impl<T: Storage<data::Data>> Factory for T {}
+```
+
+**all_pair_length**
+
+This getter returns the total number of pairs:
+```rust
+ fn all_pair_length(&self) -> u64 {
+ self.data::<data::Data>().all_pairs.len() as u64
+}
+```
+
+**set_fee_to**
+
+This setter sets the address collecting the fee:
+```rust
+ fn set_fee_to(&mut self, fee_to: AccountId) -> Result<(), FactoryError> {
+ self.data::<data::Data>().fee_to = fee_to;
+ Ok(())
+}
+```
+
+**set_fee_to_setter**
+
+This setter sets the address of the fee setter:
+```rust
+ fn set_fee_to_setter(&mut self, fee_to_setter: AccountId) -> Result<(), FactoryError> {
+ self.data::<data::Data>().fee_to_setter = fee_to_setter;
+ Ok(())
+}
+```
+
+**fee_to**
+
+This getter returns the address collecting the fee:
+```rust
+ fn fee_to(&self) -> AccountId {
+ self.data::<data::Data>().fee_to
+}
+```
+
+**fee_to_setter**
+
+This getter returns the address of the fee setter:
+```rust
+ fn fee_to_setter(&self) -> AccountId {
+ self.data::<data::Data>().fee_to_setter
+}
+```
+
+**get_pair**
+
+This getter takes two addresses as arguments and returns the Pair contract address (or None if not found):
+```rust
+ fn get_pair(&self, token_a: AccountId, token_b: AccountId) -> Option<AccountId> {
+ self.data::<data::Data>().get_pair.get(&(token_a, token_b))
+}
+```
+
+## 4. Implement Getters in Contract
+
+In the *./uniswap-v2/contracts* folder, create a `factory` folder containing `Cargo.toml` and `lib.rs` files.
+The `Cargo.toml` should look like this:
+```toml
+[package]
+name = "factory_contract"
+version = "0.1.0"
+authors = ["Stake Technologies "]
+edition = "2021"
+
+[dependencies]
+ink = { version = "4.0.0", default-features = false}
+
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
+scale-info = { version = "2.3", default-features = false, features = ["derive"], optional = true }
+
+openbrush = { git = "https://github.com/727-Ventures/openbrush-contracts", version = "3.0.0", default-features = false }
+uniswap_v2 = { path = "../../logics", default-features = false }
+
+[lib]
+path = "lib.rs"
+crate-type = ["cdylib"]
+
+[features]
+default = ["std"]
+std = [
+ "ink/std",
+ "scale/std",
+ "scale-info/std",
+ "openbrush/std",
+ "uniswap_v2/std"
+]
+ink-as-dependency = []
+
+[profile.dev]
+overflow-checks = false
+
+[profile.release]
+overflow-checks = false
+```
+
+In the `lib.rs` file, create a factory module with the OpenBrush contract macro, and import the `Storage` trait (as well as `ZERO_ADDRESS`) from OpenBrush.
+As a reminder, the `#![cfg_attr(not(feature = "std"), no_std)]` attribute is for [conditional compilation](https://use.ink/faq#what-does-the-cfg_attrnotfeature--std-no_std-at-the-beginning-of-each-contract-mean), and `#![feature(min_specialization)]` enables [specialization](../Structure/file-structure.md).
+Also import everything (with `*`) from `impls::factory` and `traits::factory`:
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+#![feature(min_specialization)]
+
+#[openbrush::contract]
+pub mod factory {
+ use openbrush::traits::{
+ Storage,
+ ZERO_ADDRESS,
+ };
+ use uniswap_v2::{
+ impls::factory::*,
+ traits::factory::*,
+ };
+```
+
+Add the [storage struct](https://use.ink/macros-attributes/storage) and Factory field (that we defined in traits):
+
+```rust
+#[ink(storage)]
+#[derive(Default, Storage)]
+pub struct FactoryContract {
+    #[storage_field]
+    factory: data::Data,
+}
+```
+
+Implement the Factory trait in your contract struct:
+```rust
+ impl Factory for FactoryContract {}
+```
+
+Add an `impl` block for the contract, and add the constructor. The constructor takes two arguments, `fee_to_setter` and `pair_code_hash`, and saves them in storage:
+```rust
+impl FactoryContract {
+    #[ink(constructor)]
+    pub fn new(fee_to_setter: AccountId, pair_code_hash: Hash) -> Self {
+        let mut instance = Self::default();
+        instance.factory.pair_contract_code_hash = pair_code_hash;
+        instance.factory.fee_to_setter = fee_to_setter;
+        instance.factory.fee_to = ZERO_ADDRESS.into();
+        instance
+    }
+}
+```
+
+And that's it! Check your Factory contract with (run in the contract folder):
+```console
+cargo contract build
+```
+It should now look like this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/factory_storage).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/modifiers.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/modifiers.md
new file mode 100644
index 0000000..503fcb1
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Factory/modifiers.md
@@ -0,0 +1,77 @@
+---
+sidebar_position: 3
+---
+
+# Custom Modifier
+
+In the Factory contract, the **setFeeTo** and **setFeeToSetter** functions first perform a [check](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Factory.sol#L41) that the caller is `feeToSetter`.
+Let's create a custom modifier for it.
+
+## `only_fee_setter`
+
+In the *./logics/impls/factory/factory.rs* file, import `modifier_definition` and `modifiers`:
+```rust
+use openbrush::{
+ modifier_definition,
+ modifiers,
+ traits::{
+ AccountId,
+ Storage,
+ ZERO_ADDRESS,
+ },
+};
+```
+
+Let's define the generic modifier below the `impl` block. Some rules for the generic type parameters:
+- If the modifier needs access to storage structs, it should take a type parameter bound by `Storage`.
+- It should have the same return type as the wrapped function: `Result<R, E>`, where `E` implements `From<FactoryError>`.
+
+In the body of the modifier we will ensure that the caller address is equal to `fee_to_setter`, otherwise return an Error:
+```rust
+#[modifier_definition]
+pub fn only_fee_setter<T, F, R, E>(instance: &mut T, body: F) -> Result<R, E>
+where
+    T: Storage<data::Data>,
+    F: FnOnce(&mut T) -> Result<R, E>,
+    E: From<FactoryError>,
+{
+ if instance.data().fee_to_setter != T::env().caller() {
+ return Err(From::from(FactoryError::CallerIsNotFeeSetter))
+ }
+ body(instance)
+}
+```
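Conceptually, `#[modifier_definition]` turns the wrapped function's body into a closure that is passed to the modifier as `body`: the guard runs first, then the closure. The same mechanic in a standalone sketch, with a simplified storage struct, an explicit `caller` argument, and a `String` error (all names here are hypothetical):

```rust
// Simplified stand-ins for the contract storage and error type.
struct FactoryData {
    fee_to_setter: u8,
    fee_to: u8,
}

// The modifier: run the guard, then call the original body.
fn only_fee_setter<F, R>(
    instance: &mut FactoryData,
    caller: u8,
    body: F,
) -> Result<R, String>
where
    F: FnOnce(&mut FactoryData) -> Result<R, String>,
{
    if instance.fee_to_setter != caller {
        return Err("CallerIsNotFeeSetter".to_string());
    }
    body(instance)
}

// What `#[modifiers(only_fee_setter)]` expands to, conceptually:
fn set_fee_to(data: &mut FactoryData, caller: u8, fee_to: u8) -> Result<(), String> {
    only_fee_setter(data, caller, |data| {
        data.fee_to = fee_to;
        Ok(())
    })
}
```

In the real macro expansion the caller comes from `T::env().caller()` rather than an explicit argument.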
+
+Prepend the modifier to the top of the **set_fee_to** and **set_fee_to_setter** functions:
+```rust
+#[modifiers(only_fee_setter)]
+fn set_fee_to(&mut self, fee_to: AccountId) -> Result<(), FactoryError> {
+    self.data::<data::Data>().fee_to = fee_to;
+ Ok(())
+}
+
+#[modifiers(only_fee_setter)]
+fn set_fee_to_setter(&mut self, fee_to_setter: AccountId) -> Result<(), FactoryError> {
+    self.data::<data::Data>().fee_to_setter = fee_to_setter;
+ Ok(())
+}
+```
+
+Add `CallerIsNotFeeSetter` variant to `FactoryError`:
+```rust
+#[derive(Debug, PartialEq, Eq, scale::Encode, scale::Decode)]
+#[cfg_attr(feature = "std", derive(scale_info::TypeInfo))]
+pub enum FactoryError {
+ PairError(PairError),
+ CallerIsNotFeeSetter,
+ ZeroAddress,
+ IdenticalAddresses,
+ PairInstantiationFailed,
+}
+```
+
+And that's it! Check your Factory contract with (run in the contract folder):
+```console
+cargo contract build
+```
+It should now look like this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/factory_modifiers).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/_category_.json
new file mode 100644
index 0000000..5cccc54
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Pair Contract",
+ "position": 3
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/burn.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/burn.md
new file mode 100644
index 0000000..8107405
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/burn.md
@@ -0,0 +1,218 @@
+---
+sidebar_position: 4
+---
+
+# Burn
+
+If you are starting the tutorial from here, please check out this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/storage-end) and open it in your IDE.
+
+## 1. Add Burn Functions to Pair Trait
+At this stage, we will implement a [burn](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L134) function in the Pair contract.
+In *./logics/traits/pair.rs* add the **burn** function to the Pair trait, as well as the internal child function **_safe_transfer**.
+Also, we will add a function to emit a burn event in the contract:
+
+```rust
+pub trait Pair {
+ ...
+ #[ink(message)]
+ fn burn(&mut self, to: AccountId) -> Result<(Balance, Balance), PairError>;
+
+ fn _safe_transfer(
+ &mut self,
+ token: AccountId,
+ to: AccountId,
+ value: Balance,
+ ) -> Result<(), PairError>;
+
+ fn _emit_burn_event(
+ &self,
+ _sender: AccountId,
+ _amount_0: Balance,
+ _amount_1: Balance,
+ _to: AccountId,
+ );
+}
+```
+
+## 2. Safe Transfer
+
+In the Pair.sol contract, within the burn function, there is a [_safeTransfer](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L148) function. In PSP22, a transfer is [safe by default](https://github.com/w3f/PSPs/blob/master/PSPs/psp-22.md#psp22receiver) if it's implemented with `PSP22Receiver`, which is the case for the Openbrush PSP22 implementation (in [_do_safe_transfer_check](https://github.com/Supercolony-net/openbrush-contracts/blob/e366f6ff1e5892c6a624833dd337a6da16a06baa/contracts/src/token/psp22/psp22.rs#L172)).
+We will use a basic call to **transfer** the PSP22:
+```rust
+impl<T: Storage<data::Data> + Storage<psp22::Data>> Pair for T {
+ ...
+ fn _safe_transfer(
+ &mut self,
+ token: AccountId,
+ to: AccountId,
+ value: Balance,
+ ) -> Result<(), PairError> {
+ PSP22Ref::transfer(&token, to, value, Vec::new())?;
+ Ok(())
+ }
+ ...
+}
+```
+*./logics/impls/pair/pair.rs*
+
+and add the import statement for Vec:
+```rust
+use ink::prelude::vec::Vec;
+```
+
+### 3. Burn
+
+The first lines of this function are the same as in mint (we obtain the same values).
+On [line #147](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L147), `_burn(address(this), liquidity);` calls the burn function of the internal ERC20 (as Pair is an extended ERC20).
+The flow of the function body:
+1. First obtain the values for reserves, balances and liquidity.
+2. `mint_fee`
+3. Burn liquidity and transfer token from contract to `to`
+4. Update reserves.
+5. Emit an event.
+
+```rust
+impl<T: Storage<data::Data> + Storage<psp22::Data>> Pair for T {
+ ...
+ fn burn(&mut self, to: AccountId) -> Result<(Balance, Balance), PairError> {
+ let reserves = self.get_reserves();
+ let contract = Self::env().account_id();
+        let token_0 = self.data::<data::Data>().token_0;
+        let token_1 = self.data::<data::Data>().token_1;
+ let mut balance_0 = PSP22Ref::balance_of(&token_0, contract);
+ let mut balance_1 = PSP22Ref::balance_of(&token_1, contract);
+ let liquidity = self._balance_of(&contract);
+
+ let fee_on = self._mint_fee(reserves.0, reserves.1)?;
+        let total_supply = self.data::<psp22::Data>().supply;
+ let amount_0 = liquidity
+ .checked_mul(balance_0)
+ .ok_or(PairError::MulOverFlow6)?
+ .checked_div(total_supply)
+ .ok_or(PairError::DivByZero3)?;
+ let amount_1 = liquidity
+ .checked_mul(balance_1)
+ .ok_or(PairError::MulOverFlow7)?
+ .checked_div(total_supply)
+ .ok_or(PairError::DivByZero4)?;
+
+ if amount_0 == 0 || amount_1 == 0 {
+ return Err(PairError::InsufficientLiquidityBurned)
+ }
+
+ self._burn_from(contract, liquidity)?;
+
+ self._safe_transfer(token_0, to, amount_0)?;
+ self._safe_transfer(token_1, to, amount_1)?;
+
+ balance_0 = PSP22Ref::balance_of(&token_0, contract);
+ balance_1 = PSP22Ref::balance_of(&token_1, contract);
+
+ self._update(balance_0, balance_1, reserves.0, reserves.1)?;
+
+ if fee_on {
+ let k = reserves
+ .0
+ .checked_mul(reserves.1)
+ .ok_or(PairError::MulOverFlow5)?;
+            self.data::<data::Data>().k_last = k;
+ }
+
+ self._emit_burn_event(Self::env().caller(), amount_0, amount_1, to);
+
+ Ok((amount_0, amount_1))
+ }
+ ...
+}
+```
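The pro-rata math at the heart of **burn** is plain integer arithmetic: each token amount is `liquidity * balance / total_supply`. A standalone sketch of just that arithmetic, with hypothetical numbers (not part of the tutorial code):

```rust
// Pro-rata share of each token balance for the liquidity being burned.
// Mirrors the checked_mul / checked_div chains in `burn` above.
fn burn_amounts(
    liquidity: u128,
    balance_0: u128,
    balance_1: u128,
    total_supply: u128,
) -> Option<(u128, u128)> {
    let amount_0 = liquidity.checked_mul(balance_0)?.checked_div(total_supply)?;
    let amount_1 = liquidity.checked_mul(balance_1)?.checked_div(total_supply)?;
    Some((amount_0, amount_1))
}
```

Burning 10% of the supply pays out 10% of each reserve: `burn_amounts(100, 5_000, 200, 1_000)` yields `Some((500, 20))`, and a zero `total_supply` yields `None` instead of panicking.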
+
+Add the empty implementation of **_emit_burn_event**. It should have the `default` keyword as we will override this function in the Pair contract:
+```rust
+impl<T: Storage<data::Data> + Storage<psp22::Data>> Pair for T {
+ ...
+ default fn _emit_burn_event(
+ &self,
+ _sender: AccountId,
+ _amount_0: Balance,
+ _amount_1: Balance,
+ _to: AccountId,
+ ) {
+ }
+ ...
+}
+```
+
+Add the error variants to `PairError` in *./logics/traits/pair.rs*:
+```rust
+#[derive(Debug, PartialEq, Eq, scale::Encode, scale::Decode)]
+#[cfg_attr(feature = "std", derive(scale_info::TypeInfo))]
+pub enum PairError {
+ PSP22Error(PSP22Error),
+ TransferError,
+ InsufficientLiquidityMinted,
+ InsufficientLiquidityBurned,
+ Overflow,
+ SubUnderFlow1,
+ SubUnderFlow2,
+ SubUnderFlow3,
+ SubUnderFlow14,
+ MulOverFlow1,
+ MulOverFlow2,
+ MulOverFlow3,
+ MulOverFlow4,
+ MulOverFlow5,
+ MulOverFlow6,
+ MulOverFlow7,
+ MulOverFlow14,
+ MulOverFlow15,
+ DivByZero1,
+ DivByZero2,
+ DivByZero3,
+ DivByZero4,
+ DivByZero5,
+ AddOverflow1,
+}
+```
+
+## 4. Implement Event
+
+In the *./contracts/pair/lib.rs* file, add the Event struct and override the event-emitting implementation:
+```rust
+...
+#[ink(event)]
+pub struct Burn {
+ #[ink(topic)]
+ pub sender: AccountId,
+ pub amount_0: Balance,
+ pub amount_1: Balance,
+ #[ink(topic)]
+ pub to: AccountId,
+}
+...
+impl Pair for PairContract {
+ ...
+ fn _emit_burn_event(
+ &self,
+ sender: AccountId,
+ amount_0: Balance,
+ amount_1: Balance,
+ to: AccountId,
+ ) {
+ self.env().emit_event(Burn {
+ sender,
+ amount_0,
+ amount_1,
+ to,
+ })
+ }
+}
+...
+```
+
+And that's it! In these examples we have demonstrated how to build an advanced Rust & ink! implementation.
+Check your Pair contract with (run in the contract folder):
+```console
+cargo contract build
+```
+It should now look like this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/burn_end).
+
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/mint.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/mint.md
new file mode 100644
index 0000000..cffc750
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/mint.md
@@ -0,0 +1,417 @@
+---
+sidebar_position: 3
+---
+
+# Mint
+
+If you are starting the tutorial from here, please check out this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/storage-end) and open it in your IDE.
+
+### 1. Add Mint Functions to Pair Trait
+
+At this stage, we will implement the [mint](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L110) function of the Pair contract.
+In the *./logics/traits/pair.rs* file add the **mint** function to the Pair trait. You should also add two internal functions, **_mint_fee** and **_update**.
+As these functions modify state, they take `&mut self` as their first argument. When a message is sent as a transaction (tx), it returns nothing (a tx cannot return a value, not even a variant of the Error enum), so state-changing functions usually return `Result<(), PairError>`.
+However, if you call the function as a dry-run (a query, which does not modify state), it can return a value (and the Error enum as well). That is why the **mint** message returns a `Balance` (and not `()`): before calling **mint** as a tx, you can call it as a dry-run to get the liquidity that will be minted.
+Also add the function to emit mint event that will have to be implemented in the contract:
+```rust
+pub trait Pair {
+ ...
+ #[ink(message)]
+    fn mint(&mut self, to: AccountId) -> Result<Balance, PairError>;
+
+    fn _mint_fee(&mut self, reserve_0: Balance, reserve_1: Balance) -> Result<bool, PairError>;
+
+ fn _update(
+ &mut self,
+ balance_0: Balance,
+ balance_1: Balance,
+ reserve_0: Balance,
+ reserve_1: Balance,
+ ) -> Result<(), PairError>;
+
+ fn _emit_mint_event(&self, _sender: AccountId, _amount_0: Balance, _amount_1: Balance);
+
+ fn _emit_sync_event(&self, reserve_0: Balance, reserve_1: Balance);
+}
+```
+
+### 2. Mint Fee and Factory Trait
+
+As **_update** and **_mint_fee** are child functions of **mint**, let's start by implementing those.
+Have a look at [_mintFee](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L89) in Solidity: it takes `uint112 _reserve0` and `uint112 _reserve1` as arguments (which translate to `Balance` in ink!), returns a bool, and can make state changes (it may save `k_last` to storage), so in ink! it should return `Result<bool, PairError>`.
+Let's add it to *./logics/impls/pair/pair.rs*:
+
+```rust
+impl<T: Storage<data::Data> + Storage<psp22::Data>> Pair for T {
+    ...
+    fn _mint_fee(
+        &mut self,
+        reserve_0: Balance,
+        reserve_1: Balance,
+    ) -> Result<bool, PairError> {}
+}
+```
+
+In the [first line](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L90) of **_mintFee** there is a cross-contract call to the Factory contract to obtain the address of the account collecting the fees. To do so we will use an Openbrush wrapper around a Factory trait (demonstrating that only the trait is needed - no implementation).
+Create a file *./logics/traits/factory.rs* and add the `Factory` trait with a **fee_to** getter function.
+Add `#[openbrush::trait_definition]` to the top of the file:
+
+```rust
+#[openbrush::trait_definition]
+pub trait Factory {
+ #[ink(message)]
+ fn fee_to(&self) -> AccountId;
+}
+```
+
+Then add a wrapper around this trait, and import what is needed:
+
+```rust
+use openbrush::traits::AccountId;
+
+#[openbrush::wrapper]
+pub type FactoryRef = dyn Factory;
+...
+```
+
+Add this file to *./logics/traits/mod.rs*:
+```rust
+pub mod pair;
+pub mod factory;
+```
+
+In *./logics/impls/pair/pair.rs* import this contract `FactoryRef`:
+```rust
+use crate::traits::factory::FactoryRef;
+```
+
+And in the body of **_mint_fee** we will obtain the `fee_to` with a cross-contract call to Factory. When using OpenBrush wrapper around a trait, the first argument of the function should be the contract address you call. So add this line as it is shown below:
+```rust
+    fn _mint_fee(
+        &mut self,
+        reserve_0: Balance,
+        reserve_1: Balance,
+    ) -> Result<bool, PairError> {
+        let fee_to = FactoryRef::fee_to(&self.data::<data::Data>().factory);
+    }
+```
+
+The rest of the function body may be somewhat difficult to interpret, so here are a few tips:
+
+- For `address(0)` in Solidity you can use `openbrush::traits::ZERO_ADDRESS` (which is a const `[0; 32]`).
+- For `sqrt` you can either implement the [same function](https://github.com/AstarNetwork/wasm-tutorial-dex/blob/4afd2d2a0503ad5dfcecd87e2b40d55cd3c854a0/uniswap-v2/logics/impls/pair/pair.rs#L437) or use the [integer-sqrt](https://crates.io/crates/integer-sqrt) crate.
+- When doing math operations you should handle overflow cases (and return an Error if there is an overflow). You can use the checked operations on `u128`.
+- Use each Error variant only once, so when testing or debugging you will know immediately which line the Error comes from.
+
+Then implement line-by-line the same logic as in Uniswap-V2:
+```rust
+ fn _mint_fee(
+ &mut self,
+ reserve_0: Balance,
+ reserve_1: Balance,
+) -> Result<bool, PairError> {
+    let fee_to = FactoryRef::fee_to(&self.data::<data::Data>().factory);
+ let fee_on = fee_to != ZERO_ADDRESS.into();
+    let k_last = self.data::<data::Data>().k_last;
+ if fee_on {
+ if k_last != 0 {
+ let root_k = sqrt(
+ reserve_0
+ .checked_mul(reserve_1)
+ .ok_or(PairError::MulOverFlow14)?,
+ );
+ let root_k_last = sqrt(k_last);
+ if root_k > root_k_last {
+            let total_supply = self.data::<psp22::Data>().supply;
+ let numerator = total_supply
+ .checked_mul(
+ root_k
+ .checked_sub(root_k_last)
+ .ok_or(PairError::SubUnderFlow14)?,
+ )
+ .ok_or(PairError::MulOverFlow15)?;
+ let denominator = root_k
+ .checked_mul(5)
+ .ok_or(PairError::MulOverFlow15)?
+ .checked_add(root_k_last)
+ .ok_or(PairError::AddOverflow1)?;
+ let liquidity = numerator
+ .checked_div(denominator)
+ .ok_or(PairError::DivByZero5)?;
+ if liquidity > 0 {
+ self._mint_to(fee_to, liquidity)?;
+ }
+ }
+ }
+ } else if k_last != 0 {
+        self.data::<data::Data>().k_last = 0;
+ }
+ Ok(fee_on)
+}
+```
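The protocol-fee branch above mints `total_supply * (√k - √k_last) / (5·√k + √k_last)` LP tokens to `fee_to` - per the Uniswap V2 whitepaper, this is equivalent to roughly 1/6 of the pool's growth in √k. A standalone sketch of just that arithmetic, with hypothetical numbers (not part of the tutorial code):

```rust
// Liquidity minted to `fee_to`, given the LP supply and the square roots of
// the current and last-recorded K. Same formula as the body above.
fn fee_liquidity(total_supply: u128, root_k: u128, root_k_last: u128) -> Option<u128> {
    if root_k <= root_k_last {
        // Pool has not grown: no fee to mint.
        return Some(0);
    }
    let numerator = total_supply.checked_mul(root_k.checked_sub(root_k_last)?)?;
    let denominator = root_k.checked_mul(5)?.checked_add(root_k_last)?;
    numerator.checked_div(denominator)
}
```

With a supply of 1_000 and √k growing from 10 to 12, `fee_liquidity(1_000, 12, 10)` yields `Some(28)` (2_000 / 70, truncated).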
+
+### 3. Update
+The update function will update the [oracle price](https://docs.uniswap.org/contracts/v2/concepts/core-concepts/oracles) of the tokens with time-weighted average prices (TWAPs). Please check the Uniswap V2 [implementation](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L73).
+To implement this in ink!:
+- ink! contracts should [never panic!](https://substrate.stackexchange.com/questions/2391/panic-in-ink-smart-contracts): a panic gives the user no information about the error (it only returns `CalleeTrapped`). Every potential business/logic error should be returned in a predictable way using `Result`.
+- To handle time, use `Self::env().block_timestamp()`, which is the time in milliseconds since the Unix epoch.
+- Solidity does not support floating-point division, so Uniswap uses the UQ112x112 fixed-point format for precision. We will use plain integer division in our example (note that the DEX template uses [U256](https://github.com/swanky-dapps/dex/blob/4676a73f4ab986a3a3f3de42be1b0052562953f1/uniswap-v2/logics/impls/pair/pair.rs#L374) for more precision).
+- To store values (verify first, then save), assign to the storage fields directly (as the function takes `&mut self`, it can modify the Storage struct's fields).
+
+You can then implement **update**:
+
+```rust
+ fn _update(
+ &mut self,
+ balance_0: Balance,
+ balance_1: Balance,
+ reserve_0: Balance,
+ reserve_1: Balance,
+) -> Result<(), PairError> {
+ if balance_0 == u128::MAX || balance_1 == u128::MAX {
+ return Err(PairError::Overflow)
+ }
+ let now = Self::env().block_timestamp();
+    let time_elapsed = now - self.data::<data::Data>().block_timestamp_last;
+ if time_elapsed > 0 && reserve_0 != 0 && reserve_1 != 0 {
+ let price_cumulative_last_0 = (reserve_1 / reserve_0)
+ .checked_mul(time_elapsed as u128)
+ .ok_or(PairError::MulOverFlow4)?;
+ let price_cumulative_last_1 = (reserve_0 / reserve_1)
+ .checked_mul(time_elapsed as u128)
+ .ok_or(PairError::MulOverFlow4)?;
+        self.data::<data::Data>().price_0_cumulative_last += price_cumulative_last_0;
+        self.data::<data::Data>().price_1_cumulative_last += price_cumulative_last_1;
+ }
+    self.data::<data::Data>().reserve_0 = balance_0;
+    self.data::<data::Data>().reserve_1 = balance_1;
+    self.data::<data::Data>().block_timestamp_last = now;
+
+ self._emit_sync_event(reserve_0, reserve_1);
+ Ok(())
+}
+```
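Because `reserve_1 / reserve_0` is integer division, the accumulated price loses all fractional precision - which is exactly why production code scales by a fixed-point factor (UQ112x112 or U256) first. A standalone sketch showing the loss, with illustrative numbers (not part of the tutorial code):

```rust
// Per-update increments of the two cumulative prices, computed as above
// with plain integer division (no fixed-point scaling).
fn price_increments(reserve_0: u128, reserve_1: u128, time_elapsed: u128) -> (u128, u128) {
    (
        (reserve_1 / reserve_0) * time_elapsed,
        (reserve_0 / reserve_1) * time_elapsed,
    )
}
```

With reserves of 1_000 and 3_000 and 10 ms elapsed, the increments are `(30, 0)`: token 1's price of 1/3 truncates to zero, so the second cumulative price never moves.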
+
+### 4. Mint
+
+Now that all child functions have been added, we can add **mint**.
+First, add the function definition in the impl block of *./logics/impls/pair/pair.rs* :
+
+```rust
+fn mint(&mut self, to: AccountId) -> Result<Balance, PairError> {}
+```
+
+On line [112](https://github.com/Uniswap/v2-core/blob/master/contracts/UniswapV2Pair.sol#L112) of *Pair.sol* there is a cross-contract call to the ERC20 to obtain the balance of the contract `uint balance0 = IERC20(token0).balanceOf(address(this));`.
+To implement this cross-contract call we will use `PSP22Ref` from Openbrush. To obtain the address of the contract, use `Self::env().account_id()`.
+Read more about the available `ink_env` getters in the [docs](https://docs.rs/ink_env/latest/ink_env/).
+
+First, add the `psp22::Data` Trait bound to the generic impl block:
+```rust
+impl<T: Storage<data::Data> + Storage<psp22::Data>> Pair for T {
+ ...
+}
+```
+
+In the body of **mint**:
+```rust
+use openbrush::contracts::traits::psp22::PSP22Ref;
+...
+fn mint(&mut self, to: AccountId) -> Result<Balance, PairError> {
+ let reserves = self.get_reserves();
+ let contract = Self::env().account_id();
+    let balance_0 = PSP22Ref::balance_of(&self.data::<data::Data>().token_0, contract);
+    let balance_1 = PSP22Ref::balance_of(&self.data::<data::Data>().token_1, contract);
+ ...
+}
+```
+
+Now, as PSP22 calls return `Result<(), PSP22Error>`, we should implement the `From` trait for our `PairError` (so we don't have to `map_err` every call).
+We will do so in the *./logics/traits/pair.rs* file where we defined `PairError`. Add a variant that wraps a `PSP22Error`, and implement the `From` trait for it (also add all the error variants used in the implementation):
+```rust
+use openbrush::contracts::psp22::PSP22Error;
+...
+#[derive(Debug, PartialEq, Eq, scale::Encode, scale::Decode)]
+#[cfg_attr(feature = "std", derive(scale_info::TypeInfo))]
+pub enum PairError {
+ PSP22Error(PSP22Error),
+ InsufficientLiquidityMinted,
+ Overflow,
+ SubUnderFlow1,
+ SubUnderFlow2,
+ SubUnderFlow3,
+ SubUnderFlow14,
+ MulOverFlow1,
+ MulOverFlow2,
+ MulOverFlow3,
+ MulOverFlow4,
+ MulOverFlow5,
+ MulOverFlow14,
+ MulOverFlow15,
+ DivByZero1,
+ DivByZero2,
+ DivByZero5,
+ AddOverflow1,
+}
+
+impl From<PSP22Error> for PairError {
+ fn from(error: PSP22Error) -> Self {
+ PairError::PSP22Error(error)
+ }
+}
+```
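This `From` implementation is what lets the `?` operator convert the inner error into a `PairError` automatically. The same mechanic in a standalone sketch (the types here are simplified stand-ins, not the OpenBrush ones):

```rust
#[derive(Debug, PartialEq)]
enum Psp22Error {
    InsufficientBalance,
}

#[derive(Debug, PartialEq)]
enum PairError {
    Psp22Error(Psp22Error),
}

impl From<Psp22Error> for PairError {
    fn from(error: Psp22Error) -> Self {
        PairError::Psp22Error(error)
    }
}

// A call that fails with the inner error type.
fn transfer() -> Result<(), Psp22Error> {
    Err(Psp22Error::InsufficientBalance)
}

// `?` invokes `From::from` on the error, so no `map_err` is needed.
fn mint() -> Result<(), PairError> {
    transfer()?;
    Ok(())
}
```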
+
+For the **MINIMUM_LIQUIDITY** constant, please add:
+```rust
+pub const MINIMUM_LIQUIDITY: u128 = 1000;
+```
+
+For the **min** function, add the [same implementation](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/libraries/Math.sol#L6) below the `impl` block:
+```rust
+ default fn _emit_mint_event(&self, _sender: AccountId, _amount_0: Balance, _amount_1: Balance) {}
+
+ default fn _emit_sync_event(&self, _reserve_0: Balance, _reserve_1: Balance) {}
+}
+
+fn min(x: u128, y: u128) -> u128 {
+ if x < y {
+ return x
+ }
+ y
+}
+```
+
+For **sqrt** function add the [same implementation](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/libraries/Math.sol#L11) below the **min** function:
+```rust
+fn sqrt(y: u128) -> u128 {
+ let mut z = 1;
+ if y > 3 {
+ z = y;
+ let mut x = y / 2 + 1;
+ while x < z {
+ z = x;
+ x = (y / x + x) / 2;
+ }
+ }
+ z
+}
+```
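Since **min** and **sqrt** are plain `u128` functions, you can sanity-check them off-chain. The Babylonian method converges on the integer square root from above:

```rust
// Standalone copies of the helpers above, so they can be checked off-chain.
fn min(x: u128, y: u128) -> u128 {
    if x < y { x } else { y }
}

// Babylonian method: iterate x = (y/x + x) / 2 until it stops decreasing.
fn sqrt(y: u128) -> u128 {
    let mut z = 1;
    if y > 3 {
        z = y;
        let mut x = y / 2 + 1;
        while x < z {
            z = x;
            x = (y / x + x) / 2;
        }
    }
    z
}
```

`sqrt(4)` is 2 and `sqrt(10)` truncates to 3. Note that, unlike the Solidity original, this version returns 1 for an input of 0; in **mint** this is harmless, since the subsequent `checked_sub(MINIMUM_LIQUIDITY)` fails anyway.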
+
+The function body mostly consists of overflow handling. The flow:
+1. First obtain the values for reserves, balances and liquidity.
+2. `mint_fee`
+3. Mint liquidity to `to`
+4. Update reserves.
+5. Emit an event.
+```rust
+    fn mint(&mut self, to: AccountId) -> Result<Balance, PairError> {
+ let reserves = self.get_reserves();
+ let contract = Self::env().account_id();
+        let balance_0 = PSP22Ref::balance_of(&self.data::<data::Data>().token_0, contract);
+        let balance_1 = PSP22Ref::balance_of(&self.data::<data::Data>().token_1, contract);
+ let amount_0 = balance_0
+ .checked_sub(reserves.0)
+ .ok_or(PairError::SubUnderFlow1)?;
+ let amount_1 = balance_1
+ .checked_sub(reserves.1)
+ .ok_or(PairError::SubUnderFlow2)?;
+
+ let fee_on = self._mint_fee(reserves.0, reserves.1)?;
+        let total_supply = self.data::<psp22::Data>().supply;
+
+ let liquidity;
+ if total_supply == 0 {
+ let liq = amount_0
+ .checked_mul(amount_1)
+ .ok_or(PairError::MulOverFlow1)?;
+ liquidity = sqrt(liq)
+ .checked_sub(MINIMUM_LIQUIDITY)
+ .ok_or(PairError::SubUnderFlow3)?;
+ self._mint_to(ZERO_ADDRESS.into(), MINIMUM_LIQUIDITY)?;
+ } else {
+ let liquidity_1 = amount_0
+ .checked_mul(total_supply)
+ .ok_or(PairError::MulOverFlow2)?
+ .checked_div(reserves.0)
+ .ok_or(PairError::DivByZero1)?;
+ let liquidity_2 = amount_1
+ .checked_mul(total_supply)
+ .ok_or(PairError::MulOverFlow3)?
+ .checked_div(reserves.1)
+ .ok_or(PairError::DivByZero2)?;
+ liquidity = min(liquidity_1, liquidity_2);
+ }
+
+ if liquidity == 0 {
+ return Err(PairError::InsufficientLiquidityMinted)
+ }
+
+ self._mint_to(to, liquidity)?;
+
+ self._update(balance_0, balance_1, reserves.0, reserves.1)?;
+
+ if fee_on {
+ let k = reserves
+ .0
+ .checked_mul(reserves.1)
+ .ok_or(PairError::MulOverFlow5)?;
+            self.data::<data::Data>().k_last = k;
+ }
+
+ self._emit_mint_event(Self::env().caller(), amount_0, amount_1);
+
+ Ok(liquidity)
+ }
+```
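The liquidity math above reduces to two cases: `√(amount_0·amount_1) - MINIMUM_LIQUIDITY` for the first deposit, and `min(amount_i · total_supply / reserve_i)` for every deposit after that. A standalone sketch of the second case, with made-up numbers (not part of the tutorial code):

```rust
// LP tokens minted for a follow-up deposit: the depositor receives the
// smaller of the two proportional claims, which penalizes unbalanced deposits.
fn mint_liquidity(
    amount_0: u128,
    amount_1: u128,
    reserve_0: u128,
    reserve_1: u128,
    total_supply: u128,
) -> Option<u128> {
    let liquidity_0 = amount_0.checked_mul(total_supply)?.checked_div(reserve_0)?;
    let liquidity_1 = amount_1.checked_mul(total_supply)?.checked_div(reserve_1)?;
    Some(liquidity_0.min(liquidity_1))
}
```

Depositing 4_000 and 1_000 into reserves of 20_000 / 5_000 with a supply of 10_000 mints 2_000 LP tokens (both ratios agree); depositing 4_000 / 500 instead mints only 1_000, the smaller claim.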
+
+Add the empty implementation of **_emit_mint_event** and **_emit_sync_event** in the Pair impl. It should have the `default` keyword as we will override those functions in the Pair contract.
+```rust
+ default fn _emit_mint_event(&self, _sender: AccountId, _amount_0: Balance, _amount_1: Balance) {}
+
+ default fn _emit_sync_event(&self, _reserve_0: Balance, _reserve_1: Balance) {}
+```
+
+The `default` keyword requires the `min_specialization` feature attribute in **./logics/lib.rs**:
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+#![feature(min_specialization)]
+```
+
+### 5. Implement Event
+
+In the *./contracts/pair/lib.rs* file, add the Event struct and override the event-emitting implementation:
+```rust
+...
+#[ink(event)]
+pub struct Mint {
+ #[ink(topic)]
+ pub sender: AccountId,
+ pub amount_0: Balance,
+ pub amount_1: Balance,
+}
+...
+impl Pair for PairContract {
+ fn _emit_mint_event(&self, sender: AccountId, amount_0: Balance, amount_1: Balance) {
+ self.env().emit_event(Mint {
+ sender,
+ amount_0,
+ amount_1,
+ })
+ }
+}
+```
+
+Don't forget to add `overflow-checks = false` in your pair `Cargo.toml`:
+```toml
+[profile.dev]
+overflow-checks = false
+
+[profile.release]
+overflow-checks = false
+```
+
+And that's it! In these examples we have created a wrapper around a trait that performs cross-contract calls - an advanced Rust & ink! technique.
+Check your Pair contract with (run in the contract folder):
+```console
+cargo contract build
+```
+It should now look like this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/mint_end).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/modifiers.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/modifiers.md
new file mode 100644
index 0000000..153bfaa
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/modifiers.md
@@ -0,0 +1,245 @@
+---
+sidebar_position: 6
+---
+
+# Modifiers
+
+Modifiers ensure certain conditions are fulfilled prior to entering a function. By defining modifiers, you will reduce code redundancy (keep it DRY), and increase its readability as you will not have to add guards for each of your functions.
+The Pair contract defines and uses a [lock](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L31) modifier that prevents reentrancy attacks. During **initialization**, it also ensures that the [caller is the Factory](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L67), so this check can be expressed as a modifier too.
+
+## 1. Reentrancy Guard
+
+To protect callable functions from reentrancy attacks, we will use the [reentrancy guard](https://github.com/Supercolony-net/openbrush-contracts/blob/d6e29f05fd462e4e027de1f2f9177d594a5a0f05/contracts/src/security/reentrancy_guard/mod.rs#L54) modifier from Openbrush, which saves the lock status in storage (either `ENTERED` or `NOT_ENTERED`) to prevent reentrancy.
+In the *./contracts/pair/Cargo.toml* file, add the `"reentrancy_guard"` feature to the Openbrush dependencies:
+
+```toml
+openbrush = { git = "https://github.com/727-Ventures/openbrush-contracts", version = "3.0.0", default-features = false, features = ["psp22", "reentrancy_guard"] }
+```
+
+In the *./contracts/pair/lib.rs* file, add an import statement, and reentrancy_guard as a Storage field:
+```rust
+...
+use openbrush::{
+ contracts::{
+ ownable::*,
+ psp22::*,
+ reentrancy_guard,
+ },
+ traits::Storage,
+};
+...
+#[ink(storage)]
+#[derive(Default, Storage)]
+pub struct PairContract {
+ #[storage_field]
+ psp22: psp22::Data,
+ #[storage_field]
+ guard: reentrancy_guard::Data,
+ #[storage_field]
+ pair: data::Data,
+}
+...
+```
+
+In the *./logics/Cargo.toml* file, add the `"reentrancy_guard"` feature as an Openbrush dependency:
+```toml
+openbrush = { git = "https://github.com/727-Ventures/openbrush-contracts", version = "3.0.0", default-features = false, features = ["psp22", "reentrancy_guard"] }
+```
+
+Modifiers should be added in the impl block on top of the function, as an attribute macro.
+In the *./logics/impls/pair/pair.rs* file, add `"reentrancy_guard"`, import statements, and modifier on top of **mint**, **burn** and **swap** as well as the `Storage` trait bound:
+```rust
+...
+use openbrush::{
+ contracts::{
+ psp22::*,
+ reentrancy_guard::*,
+ traits::psp22::PSP22Ref,
+ },
+ modifiers,
+ traits::{
+ AccountId,
+ Balance,
+ Storage,
+ Timestamp,
+ ZERO_ADDRESS,
+ },
+};
+...
+impl<T: Storage<data::Data> + Storage<psp22::Data> + Storage<reentrancy_guard::Data>> Pair for T {
+...
+ #[modifiers(non_reentrant)]
+    fn mint(&mut self, to: AccountId) -> Result<Balance, PairError> {
+...
+ #[modifiers(non_reentrant)]
+ fn burn(&mut self, to: AccountId) -> Result<(Balance, Balance), PairError> {
+...
+ #[modifiers(non_reentrant)]
+ fn swap(
+...
+```
+
+Finally, the `non_reentrant` modifier returns a `ReentrancyGuardError`, so let's implement `From<ReentrancyGuardError>` for `PairError`:
+```rust
+use openbrush::{
+ contracts::{
+ reentrancy_guard::*,
+ traits::{
+ psp22::PSP22Error,
+ },
+ },
+ traits::{
+ AccountId,
+ Balance,
+ Timestamp,
+ },
+};
+...
+#[derive(Debug, PartialEq, Eq, scale::Encode, scale::Decode)]
+#[cfg_attr(feature = "std", derive(scale_info::TypeInfo))]
+pub enum PairError {
+ PSP22Error(PSP22Error),
+ ReentrancyGuardError(ReentrancyGuardError),
+...
+impl From<ReentrancyGuardError> for PairError {
+ fn from(error: ReentrancyGuardError) -> Self {
+ PairError::ReentrancyGuardError(error)
+ }
+}
+```
+
+## 2. Only Owner
+
+In **initialize** there is a guard that ensures the [caller is the Factory](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L67). We can use the [ownable modifier](https://github.com/Supercolony-net/openbrush-contracts/blob/main/contracts/src/access/ownable/mod.rs) to store the deployer address in storage, and restrict function access to this address only.
+In the *./contracts/pair/Cargo.toml* file, add the `"ownable"` feature to the Openbrush dependency:
+
+```toml
+openbrush = { git = "https://github.com/727-Ventures/openbrush-contracts", version = "3.0.0", default-features = false, features = ["psp22", "ownable", "reentrancy_guard"] }
+```
+
+In the *./contracts/pair/lib.rs* file, add the `ownable` import and storage field, implement the `Ownable` trait, and set the owner in the constructor:
+```rust
+...
+use openbrush::{
+ contracts::{
+ ownable::*,
+ psp22::*,
+ reentrancy_guard,
+ },
+ traits::Storage,
+};
+...
+#[ink(storage)]
+#[derive(Default, Storage)]
+pub struct PairContract {
+ #[storage_field]
+ psp22: psp22::Data,
+ #[storage_field]
+ ownable: ownable::Data,
+ #[storage_field]
+ guard: reentrancy_guard::Data,
+ #[storage_field]
+ pair: data::Data,
+}
+...
+impl Pair for PairContract {}
+
+impl Ownable for PairContract {}
+...
+impl PairContract {
+ #[ink(constructor)]
+ pub fn new() -> Self {
+ let mut instance = Self::default();
+ let caller = instance.env().caller();
+ instance._init_with_owner(caller);
+ instance.pair.factory = caller;
+ instance
+ }
+}
+```
+
+Update `Internal` Trait to `psp22::Internal`:
+```rust
+...
+ impl psp22::Internal for PairContract {
+...
+ }
+```
+
+In the *./logics/Cargo.toml* file, add the `"ownable"` feature to openbrush dependency:
+```toml
+openbrush = { git = "https://github.com/727-Ventures/openbrush-contracts", version = "3.0.0", default-features = false, features = ["psp22", "ownable", "reentrancy_guard"] }
+```
+
+Modifiers are added as attribute macros on top of the function, inside the impl block.
+In the *./logics/impls/pair/pair.rs* file, add the import statements and the `only_owner` modifier on top of **initialize**, as well as the `Storage` trait bounds:
+```rust
+...
+use openbrush::{
+ contracts::{
+ ownable::*,
+ psp22::*,
+ reentrancy_guard::*,
+ traits::psp22::PSP22Ref,
+ },
+ modifiers,
+ traits::{
+ AccountId,
+ Balance,
+ Storage,
+ Timestamp,
+ ZERO_ADDRESS,
+ },
+};
+...
+impl<
+ T: Storage<data::Data>
+ + Storage<psp22::Data>
+ + Storage<ownable::Data>
+ + Storage<reentrancy_guard::Data>,
+> Pair for T
+{
+...
+ #[modifiers(only_owner)]
+ fn initialize(&mut self, token_0: AccountId, token_1: AccountId) -> Result<(), PairError> {
+...
+```
+
+Finally, the `ownable` modifier returns an `OwnableError`, so let's implement `From<OwnableError>` for `PairError`:
+```rust
+use openbrush::{
+ contracts::{
+ reentrancy_guard::*,
+ traits::{
+ ownable::*,
+ psp22::PSP22Error,
+ },
+ },
+ traits::{
+ AccountId,
+ Balance,
+ Timestamp,
+ },
+};
+...
+#[derive(Debug, PartialEq, Eq, scale::Encode, scale::Decode)]
+#[cfg_attr(feature = "std", derive(scale_info::TypeInfo))]
+pub enum PairError {
+ PSP22Error(PSP22Error),
+ OwnableError(OwnableError),
+ ReentrancyGuardError(ReentrancyGuardError),
+...
+impl From<OwnableError> for PairError {
+ fn from(error: OwnableError) -> Self {
+ PairError::OwnableError(error)
+ }
+}
+```
+
+And that's it!
+
+By following along with these examples you will have implemented modifiers from Openbrush, and should also be able to implement your own by using information contained in this [tutorial](https://medium.com/supercolony/how-to-use-modifiers-for-ink-smart-contracts-using-openbrush-7a9e53ba1c76).
+Check your Pair contract with (run in contract folder):
+```console
+cargo contract build
+```
+It should now look like this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/modifiers_end).
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/psp22.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/psp22.md
new file mode 100644
index 0000000..ccf8d44
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/psp22.md
@@ -0,0 +1,315 @@
+---
+sidebar_position: 1
+---
+
+# Implement PSP22 for Pair
+
+Please check out this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/start) and open it in your IDE.
+
+The Pair contract implements a slightly modified ERC-20 (an allowance of uint256::MAX is not [decreased](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2ERC20.sol#L74) on transferFrom).
+In Astar the standard for fungible tokens is [PSP22](https://github.com/w3f/PSPs/blob/master/PSPs/psp-22.md). We will use the OpenBrush [PSP22 implementation](https://github.com/Supercolony-net/openbrush-contracts/tree/main/contracts/src/token/psp22).
+
+## 1. Implement Basic PSP22 in our Contract.
+
+In the `Cargo.toml` file, import crates from ink!, scale, and Openbrush (with feature `"psp22"`).
+
+```toml
+[package]
+name = "pair_contract"
+version = "0.1.0"
+authors = ["Stake Technologies "]
+edition = "2021"
+
+[dependencies]
+ink = { version = "4.0.0", default-features = false}
+
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
+scale-info = { version = "2.3", default-features = false, features = ["derive"], optional = true }
+
+openbrush = { git = "https://github.com/727-Ventures/openbrush-contracts", version = "3.0.0", default-features = false, features = ["psp22"] }
+
+[lib]
+name = "pair_contract"
+path = "lib.rs"
+crate-type = [
+ "cdylib"
+]
+
+[features]
+default = ["std"]
+std = [
+ "ink/std",
+ "scale/std",
+ "scale-info/std",
+ "openbrush/std",
+]
+ink-as-dependency = []
+
+[profile.dev]
+overflow-checks = false
+
+[profile.release]
+overflow-checks = false
+
+```
+*contracts/pair/Cargo.toml*
+
+In the contract crate's `lib.rs` file, import everything (with `*`) from `openbrush::contracts::psp22`, as well as the `Storage` trait.
+
+As a reminder, the `#![cfg_attr(not(feature = "std"), no_std)]` attribute is for [conditional compilation](https://use.ink/faq#what-does-the-cfg_attrnotfeature--std-no_std-at-the-beginning-of-each-contract-mean) and `#![feature(min_specialization)]` enables [specialization](../Structure/file-structure.md):
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+#![feature(min_specialization)]
+
+#[openbrush::contract]
+pub mod pair {
+ use openbrush::{
+ contracts::{
+ psp22::{
+ Internal,
+ *,
+ },
+ },
+ traits::Storage,
+ };
+
+}
+```
+
+Add the [storage struct](https://use.ink/macros-attributes/storage) and add the psp22 field:
+
+```rust
+#[ink(storage)]
+#[derive(Default, Storage)]
+pub struct PairContract {
+ #[storage_field]
+ psp22: psp22::Data,
+}
+```
+
+Implement the `PSP22` trait for your contract struct:
+
+```rust
+ impl PSP22 for PairContract {}
+```
+
+Add an `impl` block for the contract and add the constructor:
+
+```rust
+impl PairContract {
+ #[ink(constructor)]
+ pub fn new() -> Self {
+ Self { psp22: Default::default() }
+ }
+}
+```
+
+Your contract should look like the following, and should build when you run:
+```console
+cargo contract build
+```
+
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+#![feature(min_specialization)]
+
+#[openbrush::contract]
+pub mod pair {
+ use openbrush::{
+ contracts::{
+ psp22::{
+ Internal,
+ *,
+ },
+ },
+ traits::Storage,
+ };
+
+ #[ink(storage)]
+ #[derive(Default, Storage)]
+ pub struct PairContract {
+ #[storage_field]
+ psp22: psp22::Data,
+ }
+
+ impl PSP22 for PairContract {}
+
+ impl PairContract {
+ #[ink(constructor)]
+ pub fn new() -> Self {
+ Self { psp22: Default::default() }
+ }
+ }
+}
+```
+*contracts/pair/lib.rs*
+
+## 2. Add Events
+
+You should add an [events struct](https://use.ink/macros-attributes/event) to your contract and also override the event emission methods from the PSP22 implementation.
+Import what's needed for editing events:
+
+```rust
+use ink::{
+ codegen::{
+ EmitEvent,
+ Env,
+ }
+};
+```
+
+[PSP22](https://github.com/w3f/PSPs/blob/master/PSPs/psp-22.md#events) emits `Transfer` and `Approval` events. An event is a struct with `#[ink(event)]` [attribute](https://use.ink/macros-attributes/event). Some fields can be marked with `#[ink(topic)]` [attribute](https://use.ink/macros-attributes/topic) which acts as `indexed` in Solidity:
+
+```rust
+#[ink(event)]
+pub struct Transfer {
+ #[ink(topic)]
+ from: Option<AccountId>,
+ #[ink(topic)]
+ to: Option<AccountId>,
+ value: Balance,
+}
+
+#[ink(event)]
+pub struct Approval {
+ #[ink(topic)]
+ owner: AccountId,
+ #[ink(topic)]
+ spender: AccountId,
+ value: Balance,
+}
+```
+
+And finally, override the event emitting methods of the PSP22 Internal trait (Due to ink!'s [design](https://github.com/paritytech/ink/issues/809) it is not possible to share event definitions between multiple contracts since events can only be defined in the ink! module scope directly.):
+
+```rust
+impl Internal for PairContract {
+ fn _emit_transfer_event(
+ &self,
+ from: Option<AccountId>,
+ to: Option<AccountId>,
+ amount: Balance,
+ ) {
+ self.env().emit_event(Transfer {
+ from,
+ to,
+ value: amount,
+ });
+ }
+
+ fn _emit_approval_event(&self, owner: AccountId, spender: AccountId, amount: Balance) {
+ self.env().emit_event(Approval {
+ owner,
+ spender,
+ value: amount,
+ });
+ }
+}
+```
+
+## 3. Override Generic Function of PSP22
+
+The PSP22 OpenBrush implementation has a built-in check for a zero account in [mint](https://github.com/Supercolony-net/openbrush-contracts/blob/e366f6ff1e5892c6a624833dd337a6da16a06baa/contracts/src/token/psp22/psp22.rs#L270), [burn](https://github.com/Supercolony-net/openbrush-contracts/blob/e366f6ff1e5892c6a624833dd337a6da16a06baa/contracts/src/token/psp22/psp22.rs#L286), [transfer_from](https://github.com/Supercolony-net/openbrush-contracts/blob/e366f6ff1e5892c6a624833dd337a6da16a06baa/contracts/src/token/psp22/psp22.rs#L223) and [approve](https://github.com/Supercolony-net/openbrush-contracts/blob/e366f6ff1e5892c6a624833dd337a6da16a06baa/contracts/src/token/psp22/psp22.rs#L257) functions. But Uniswap V2 uses the zero address to [lock tokens](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L121).
+The upside in our case is that we can override any functions of the generic implementation, by using the same function body, but removing the check for the zero address:
+
+```rust
+impl Internal for PairContract {
+ // in uniswapv2 no check for zero account
+ fn _mint_to(&mut self, account: AccountId, amount: Balance) -> Result<(), PSP22Error> {
+ let mut new_balance = self._balance_of(&account);
+ new_balance += amount;
+ self.psp22.balances.insert(&account, &new_balance);
+ self.psp22.supply += amount;
+ self._emit_transfer_event(None, Some(account), amount);
+ Ok(())
+ }
+
+ fn _burn_from(&mut self, account: AccountId, amount: Balance) -> Result<(), PSP22Error> {
+ let mut from_balance = self._balance_of(&account);
+
+ if from_balance < amount {
+ return Err(PSP22Error::InsufficientBalance)
+ }
+
+ from_balance -= amount;
+ self.psp22.balances.insert(&account, &from_balance);
+ self.psp22.supply -= amount;
+ self._emit_transfer_event(Some(account), None, amount);
+ Ok(())
+ }
+
+ fn _approve_from_to(
+ &mut self,
+ owner: AccountId,
+ spender: AccountId,
+ amount: Balance,
+ ) -> Result<(), PSP22Error> {
+ self.psp22.allowances.insert(&(&owner, &spender), &amount);
+ self._emit_approval_event(owner, spender, amount);
+ Ok(())
+ }
+
+ fn _transfer_from_to(
+ &mut self,
+ from: AccountId,
+ to: AccountId,
+ amount: Balance,
+ _data: Vec<u8>,
+ ) -> Result<(), PSP22Error> {
+ let from_balance = self._balance_of(&from);
+
+ if from_balance < amount {
+ return Err(PSP22Error::InsufficientBalance)
+ }
+
+ self.psp22.balances.insert(&from, &(from_balance - amount));
+ let to_balance = self._balance_of(&to);
+ self.psp22.balances.insert(&to, &(to_balance + amount));
+
+ self._emit_transfer_event(Some(from), Some(to), amount);
+ Ok(())
+ }
+ ...
+```
+
+Also, in Uniswap V2 a max allowance is never [decreased](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2ERC20.sol#L74). To replicate this, we override `transfer_from` so that the allowance is not decreased when it is `u128::MAX`.
+Note that the `#[ink(message)]` attribute is required for the contract to compile correctly. Inside the existing `impl PSP22` block, add:
+
+```rust
+impl PSP22 for PairContract {
+ #[ink(message)]
+ fn transfer_from(
+ &mut self,
+ from: AccountId,
+ to: AccountId,
+ value: Balance,
+ data: Vec<u8>,
+ ) -> Result<(), PSP22Error> {
+ let caller = self.env().caller();
+ let allowance = self._allowance(&from, &caller);
+
+ // In uniswapv2 max allowance never decrease
+ if allowance != u128::MAX {
+ if allowance < value {
+ return Err(PSP22Error::InsufficientAllowance)
+ }
+
+ self._approve_from_to(from, caller, allowance - value)?;
+ }
+ self._transfer_from_to(from, to, value, data)?;
+ Ok(())
+ }
+}
+```
+
+Import Vec from `ink::prelude`:
+
+```rust
+ use ink::prelude::vec::Vec;
+```
+
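+The max-allowance rule above can be sketched in isolation as plain Rust (the `new_allowance` helper is hypothetical, not part of PSP22 or OpenBrush):
+
+```rust
+/// Hypothetical helper: returns the new allowance after a transfer_from
+/// of `value`, or None when the allowance is insufficient.
+fn new_allowance(allowance: u128, value: u128) -> Option<u128> {
+    if allowance == u128::MAX {
+        // In Uniswap V2, a max allowance is never decreased.
+        Some(allowance)
+    } else if allowance < value {
+        None // corresponds to PSP22Error::InsufficientAllowance
+    } else {
+        Some(allowance - value)
+    }
+}
+
+fn main() {
+    assert_eq!(new_allowance(u128::MAX, 100), Some(u128::MAX));
+    assert_eq!(new_allowance(50, 100), None);
+    assert_eq!(new_allowance(100, 40), Some(60));
+}
+```
+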
+And that's it! You implemented PSP22, its event and overrode its default implementation. Check your Pair contract (run in contract folder):
+```console
+cargo contract build
+```
+It should now look like this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/psp22).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/storage.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/storage.md
new file mode 100644
index 0000000..a63c91a
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/storage.md
@@ -0,0 +1,379 @@
+---
+sidebar_position: 2
+---
+
+# Pair Storage and Getters
+
+If you are starting the tutorial from here, please check out this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/psp22) and open it in your IDE.
+
+## 1. Logics Crate
+
+As described in the [File & Folder structure](../Structure/file-structure.md) section, the Pair business logic will be in the uniswap-v2 logics crate.
+Let's create (empty) files and folders so your project looks like this:
+
+```bash
+├── uniswap-v2
+│ ├── contracts
+│ ├── logics
+│ │ ├── impls
+│ │ │ ├── pair
+│ │ │ │ ├── mod.rs
+│ │ │ │ ├── data.rs
+│ │ │ │ └── pair.rs
+│ │ │ └── mod.rs
+│ │ └── traits
+│ │ ├── mod.rs
+│ │ ├── pair.rs
+│ ├── Cargo.toml
+│ └── lib.rs
+├── Cargo.lock
+├── Cargo.toml
+├── .rustfmt
+└── .gitignore
+```
+
+The *./uniswap-v2/logics/Cargo.toml* manifest defines an `rlib` crate named `"uniswap_v2"` and imports crates from ink!, scale, and Openbrush (with the `"psp22"` feature):
+
+```toml
+[package]
+name = "uniswap_v2"
+version = "0.1.0"
+authors = ["Stake Technologies "]
+edition = "2021"
+
+[dependencies]
+ink = { version = "4.0.0", default-features = false}
+
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
+scale-info = { version = "2.3", default-features = false, features = ["derive"], optional = true }
+
+openbrush = { git = "https://github.com/727-Ventures/openbrush-contracts", version = "3.0.0", default-features = false, features = ["psp22"] }
+
+[lib]
+name = "uniswap_v2"
+path = "lib.rs"
+crate-type = [
+ "rlib",
+]
+
+[features]
+default = ["std"]
+std = [
+ "ink/std",
+ "scale/std",
+ "scale-info/std",
+ "openbrush/std",
+]
+```
+*./uniswap-v2/logics/Cargo.toml*
+
+The `lib.rs` file should contain the conditional compilation attribute and export the `impls` and `traits` modules:
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+
+pub mod impls;
+pub mod traits;
+```
+*./uniswap-v2/logics/lib.rs*
+
+## 2. Pair Storage
+
+The Uniswap V2 Pair contract defines the following [storage fields](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L18) in Solidity, which we will reproduce in ink!:
+
+```solidity
+address public factory;
+address public token0;
+address public token1;
+
+uint112 private reserve0; // uses single storage slot, accessible via getReserves
+uint112 private reserve1; // uses single storage slot, accessible via getReserves
+uint32 private blockTimestampLast; // uses single storage slot, accessible via getReserves
+
+uint public price0CumulativeLast;
+uint public price1CumulativeLast;
+uint public kLast; // reserve0 * reserve1, as of immediately after the most recent liquidity event
+```
+
+ink! uses most Substrate primitive types. Here is a conversion table between Solidity and ink! types:
+
+| Solidity | ink! |
+|-----------------------------------------|-------------------------------------------------------------------------------------------|
+| uint256 | [U256](https://docs.rs/primitive-types/latest/primitive_types/struct.U256.html) |
+| any other uint | u128 (or lower) |
+| address | AccountId |
+| mapping(key => value) | [`Mapping<key, value>`](https://docs.rs/ink_storage/latest/ink_storage/struct.Mapping.html) |
+| mapping(key1 => mapping(key2 => value)) | [`Mapping<(key1, key2), value>`](https://substrate.stackexchange.com/a/3993/567) |
+
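+As a plain-Rust sketch of the nested-mapping row above (a standard `HashMap` in place of ink!'s `Mapping`, with stand-in `AccountId`/`Balance` types), a Solidity double mapping becomes a single map keyed by a tuple:
+
+```rust
+use std::collections::HashMap;
+
+// Stand-in types for illustration only.
+type AccountId = [u8; 32];
+type Balance = u128;
+
+fn main() {
+    // mapping(address => mapping(address => uint)) in Solidity...
+    // ...becomes one map keyed by an (owner, spender) tuple.
+    let mut allowances: HashMap<(AccountId, AccountId), Balance> = HashMap::new();
+    let owner = [1u8; 32];
+    let spender = [2u8; 32];
+    allowances.insert((owner, spender), 1_000);
+    assert_eq!(allowances.get(&(owner, spender)), Some(&1_000));
+}
+```
+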
+Let's create a storage struct in *./logics/impls/pair/data.rs*. Name the struct `Data` and add all the required fields.
+
+```rust
+pub struct Data {
+ pub factory: AccountId,
+ pub token_0: AccountId,
+ pub token_1: AccountId,
+ pub reserve_0: Balance,
+ pub reserve_1: Balance,
+ pub block_timestamp_last: Timestamp,
+ pub price_0_cumulative_last: Balance,
+ pub price_1_cumulative_last: Balance,
+ pub k_last: u128,
+}
+```
+
+Openbrush uses a specified storage key instead of the default one in the attribute [openbrush::upgradeable_storage](https://github.com/Supercolony-net/openbrush-contracts/blob/main/lang/macro/src/lib.rs#L447). It implements all [required traits](https://docs.openbrush.io/smart-contracts/upgradeable#suggestions-on-how-follow-the-rules) with the specified storage key (storage key is a required input argument of the macro).
+To generate a unique key, Openbrush provides the [openbrush::storage_unique_key!](https://docs.openbrush.io/smart-contracts/upgradeable#unique-storage-key) declarative macro, which derives a key from the name of the struct and its file path. Let's add this to our struct and import the required types:
+
+```rust
+use openbrush::traits::{
+ AccountId,
+ Balance,
+ Timestamp,
+};
+
+pub const STORAGE_KEY: u32 = openbrush::storage_unique_key!(Data);
+
+#[derive(Debug)]
+#[openbrush::upgradeable_storage(STORAGE_KEY)]
+pub struct Data {
+ pub factory: AccountId,
+ pub token_0: AccountId,
+ pub token_1: AccountId,
+ pub reserve_0: Balance,
+ pub reserve_1: Balance,
+ pub block_timestamp_last: Timestamp,
+ pub price_0_cumulative_last: Balance,
+ pub price_1_cumulative_last: Balance,
+ pub k_last: u128,
+}
+```
+*./logics/impls/pair/data.rs*
+
+And impl `Default` for the `Data` struct:
+```rust
+...
+impl Default for Data {
+ fn default() -> Self {
+ Self {
+ factory: ZERO_ADDRESS.into(),
+ token_0: ZERO_ADDRESS.into(),
+ token_1: ZERO_ADDRESS.into(),
+ reserve_0: 0,
+ reserve_1: 0,
+ block_timestamp_last: 0,
+ price_0_cumulative_last: Default::default(),
+ price_1_cumulative_last: Default::default(),
+ k_last: Default::default(),
+ }
+ }
+}
+```
+
+## 3. Trait for Getters
+
+Unlike Solidity, which automatically creates getters for public storage items, in ink! you need to write them yourself. For this we will create a trait and a generic implementation.
+In the *./logics/traits/pair.rs* file, let's create a trait with the getter functions and make them callable with `#[ink(message)]`:
+
+```rust
+pub trait Pair {
+ #[ink(message)]
+ fn get_reserves(&self) -> (Balance, Balance, Timestamp);
+
+ #[ink(message)]
+ fn initialize(&mut self, token_0: AccountId, token_1: AccountId) -> Result<(), PairError>;
+
+ #[ink(message)]
+ fn get_token_0(&self) -> AccountId;
+
+ #[ink(message)]
+ fn get_token_1(&self) -> AccountId;
+}
+```
+
+Openbrush provides `#[openbrush::trait_definition]`, which ensures that your trait (and its default implementation) is generated in the contract. You can also create a wrapper around the trait so it can be used for cross-contract calls (with no need to import the contract as ink-as-dependency). Import what is needed from Openbrush:
+
+```rust
+use openbrush::traits::{
+ AccountId,
+ Balance,
+ Timestamp,
+};
+
+#[openbrush::wrapper]
+pub type PairRef = dyn Pair;
+
+#[openbrush::trait_definition]
+pub trait Pair {
+ #[ink(message)]
+ fn get_reserves(&self) -> (Balance, Balance, Timestamp);
+
+ #[ink(message)]
+ fn initialize(&mut self, token_0: AccountId, token_1: AccountId) -> Result<(), PairError>;
+
+ #[ink(message)]
+ fn get_token_0(&self) -> AccountId;
+
+ #[ink(message)]
+ fn get_token_1(&self) -> AccountId;
+}
+```
+*./logics/traits/pair.rs*
+
+The last thing to add is the error enum; each contract should define its own. As it is used in function signatures, it should implement SCALE encode & decode.
+For the moment we don't need detailed errors, so just add a single `Error` variant:
+
+```rust
+...
+#[derive(Debug, PartialEq, Eq, scale::Encode, scale::Decode)]
+#[cfg_attr(feature = "std", derive(scale_info::TypeInfo))]
+pub enum PairError {
+ Error,
+}
+```
+*./logics/traits/pair.rs*
+
+## 4. Implement Getters
+
+In *./logics/impls/pair/pair.rs*, add an impl block for a generic type `T` bounded by `Storage<data::Data>`, so the implementation can access the `Data` storage struct:
+
+```rust
+impl<T: Storage<data::Data>> Pair for T {}
+```
+
+**get_reserves**
+
+This function returns the reserves and timestamp as a tuple of type `(Balance, Balance, Timestamp)`. It takes `&self` because it reads the `Data` storage struct without modifying it, so no mutable reference is needed.
+```rust
+fn get_reserves(&self) -> (Balance, Balance, Timestamp) {
+ (
+ self.data::<data::Data>().reserve_0,
+ self.data::<data::Data>().reserve_1,
+ self.data::<data::Data>().block_timestamp_last,
+ )
+}
+```
+
+**initialize**
+
+This method is more of a setter, as it writes the token addresses to storage; that's why it takes `&mut self` as its first argument.
+As a general rule, a function that takes only `&self` does not modify state, so it can be called as a query.
+A function that takes `&mut self` makes state changes, so it can be called as a transaction, and should return a `Result`.
+Only the factory should call this [function](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L67); we will add an `only_owner` modifier later in this tutorial.
+
+```rust
+fn initialize(
+ &mut self,
+ token_0: AccountId,
+ token_1: AccountId,
+) -> Result<(), PairError> {
+ self.data::<data::Data>().token_0 = token_0;
+ self.data::<data::Data>().token_1 = token_1;
+ Ok(())
+}
+```
+
+**get_token**
+
+These two functions return the `AccountId` of each token:
+
+```rust
+fn get_token_0(&self) -> AccountId {
+ self.data::<data::Data>().token_0
+}
+
+fn get_token_1(&self) -> AccountId {
+ self.data::<data::Data>().token_1
+}
+```
+
+Add imports, and your file should look like this:
+
+```rust
+pub use crate::{
+ impls::pair::*,
+ traits::pair::*,
+};
+use openbrush::traits::{
+ AccountId,
+ Balance,
+ Storage,
+ Timestamp,
+};
+
+impl<T: Storage<data::Data>> Pair for T {
+ fn get_reserves(&self) -> (Balance, Balance, Timestamp) {
+ (
+ self.data::<data::Data>().reserve_0,
+ self.data::<data::Data>().reserve_1,
+ self.data::<data::Data>().block_timestamp_last,
+ )
+ }
+
+ fn initialize(
+ &mut self,
+ token_0: AccountId,
+ token_1: AccountId,
+ ) -> Result<(), PairError> {
+ self.data::<data::Data>().token_0 = token_0;
+ self.data::<data::Data>().token_1 = token_1;
+ Ok(())
+ }
+
+ fn get_token_0(&self) -> AccountId {
+ self.data::<data::Data>().token_0
+ }
+
+ fn get_token_1(&self) -> AccountId {
+ self.data::<data::Data>().token_1
+ }
+}
+```
+
+## 5. Implement Getters to Pair contract
+
+In *./contracts/pair/Cargo.toml*, import the uniswap-v2 logics crate and add it to the std features:
+
+```toml
+...
+uniswap_v2 = { path = "../../logics", default-features = false }
+...
+std = [
+ "ink/std",
+ "scale/std",
+ "scale-info/std",
+ "openbrush/std",
+ "uniswap_v2/std",
+]
+```
+
+In the contract *lib.rs* import everything from pair traits (and impls):
+
+```rust
+use uniswap_v2::{
+ impls::pair::*,
+ traits::pair::*,
+};
+```
+
+Add the `Data` storage struct to the contract storage struct:
+```rust
+#[ink(storage)]
+#[derive(Default, Storage)]
+pub struct PairContract {
+ #[storage_field]
+ psp22: psp22::Data,
+ #[storage_field]
+ pair: data::Data,
+}
+```
+
+Just below the storage struct, implement the Pair trait for PairContract:
+```rust
+ impl Pair for PairContract {}
+```
+
+And that's it!
+In this section we've gone over how to create a trait and its generic implementation, and added them to the Pair contract. Check your Pair contract with (run from the contract folder):
+```console
+cargo contract build
+```
+It should now look like this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/storage-end).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/swap.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/swap.md
new file mode 100644
index 0000000..7fcc688
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Pair/swap.md
@@ -0,0 +1,290 @@
+---
+sidebar_position: 5
+---
+
+# Swap
+
+If you are starting the tutorial from here, please check out this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/burn_end) and open it in your IDE.
+
+## 1. Add Swap Functions to Pair Trait
+
+At this stage, we will implement a [swap](https://github.com/Uniswap/v2-core/blob/ee547b17853e71ed4e0101ccfd52e70d5acded58/contracts/UniswapV2Pair.sol#L159) function in the Pair contract.
+[Swap](https://docs.uniswap.org/contracts/v2/concepts/core-concepts/swaps) is a way for traders to exchange one PSP22 token for another one in a simple way.
+In the *./logics/traits/pair.rs* file add the **swap** function to the Pair trait. As this function modifies the state, `&mut self` should be used as the first argument.
+Also, we will add a function to emit a swap event in the contract:
+
+```rust
+pub trait Pair {
+ ...
+ #[ink(message)]
+ fn swap(
+ &mut self,
+ amount_0_out: Balance,
+ amount_1_out: Balance,
+ to: AccountId,
+ ) -> Result<(), PairError>;
+ ...
+ fn _emit_swap_event(
+ &self,
+ _sender: AccountId,
+ _amount_0_in: Balance,
+ _amount_1_in: Balance,
+ _amount_0_out: Balance,
+ _amount_1_out: Balance,
+ _to: AccountId,
+ );
+}
+```
+
+## 2. Swap
+
+First, check user inputs, then *get_reserves* and check liquidity:
+```rust
+impl<T: Storage<data::Data> + Storage<psp22::Data>> Pair for T {
+ ...
+ fn swap(
+ &mut self,
+ amount_0_out: Balance,
+ amount_1_out: Balance,
+ to: AccountId,
+ ) -> Result<(), PairError> {
+ if amount_0_out == 0 && amount_1_out == 0 {
+ return Err(PairError::InsufficientOutputAmount)
+ }
+ let reserves = self.get_reserves();
+ if amount_0_out >= reserves.0 || amount_1_out >= reserves.1 {
+ return Err(PairError::InsufficientLiquidity)
+ }
+ ...
+ }
+}
+```
+
+Then, obtain the `token_0` and `token_1` addresses and process the swap. Ensure the amount transferred out is not `0`:
+```rust
+...
+ let token_0 = self.data::<data::Data>().token_0;
+ let token_1 = self.data::<data::Data>().token_1;
+
+ if to == token_0 || to == token_1 {
+ return Err(PairError::InvalidTo)
+ }
+ if amount_0_out > 0 {
+ self._safe_transfer(token_0, to, amount_0_out)?;
+ }
+ if amount_1_out > 0 {
+ self._safe_transfer(token_1, to, amount_1_out)?;
+ }
+...
+```
+
+Then, obtain the balance of both token contracts, which will be used to update the price:
+```rust
+...
+ let contract = Self::env().account_id();
+ let balance_0 = PSP22Ref::balance_of(&token_0, contract);
+ let balance_1 = PSP22Ref::balance_of(&token_1, contract);
+...
+```
+
+Ensure that no swap attempted will leave the trading pair with less than the minimum amount of reserves.
+`balance_0` and `balance_1` are the balances/reserves after the swap is finished, and `reserve_0` and `reserve_1` are the values previous to that (swap is done first and then possibly reverted, if the requirements are not met).
+We will need to check that the swap did not reduce the product of the reserves (otherwise liquidity from the pool can be stolen).
+Hence the reason why we check `balance_0 * balance_1 >= reserve_0 * reserve_1`.
+`balance_0_adjusted` and `balance_1_adjusted` are adjusted with 0.3% swap [liquidity provider fees](https://docs.uniswap.org/contracts/v2/concepts/advanced-topics/fees#liquidity-provider-fees).
+```rust
+...
+ let amount_0_in = if balance_0
+ > reserves
+ .0
+ .checked_sub(amount_0_out)
+ .ok_or(PairError::SubUnderFlow4)?
+ {
+ balance_0
+ .checked_sub(
+ reserves
+ .0
+ .checked_sub(amount_0_out)
+ .ok_or(PairError::SubUnderFlow5)?,
+ )
+ .ok_or(PairError::SubUnderFlow6)?
+ } else {
+ 0
+ };
+ let amount_1_in = if balance_1
+ > reserves
+ .1
+ .checked_sub(amount_1_out)
+ .ok_or(PairError::SubUnderFlow7)?
+ {
+ balance_1
+ .checked_sub(
+ reserves
+ .1
+ .checked_sub(amount_1_out)
+ .ok_or(PairError::SubUnderFlow8)?,
+ )
+ .ok_or(PairError::SubUnderFlow9)?
+ } else {
+ 0
+ };
+ if amount_0_in == 0 && amount_1_in == 0 {
+ return Err(PairError::InsufficientInputAmount)
+ }
+
+ let balance_0_adjusted = balance_0
+ .checked_mul(1000)
+ .ok_or(PairError::MulOverFlow8)?
+ .checked_sub(amount_0_in.checked_mul(3).ok_or(PairError::MulOverFlow9)?)
+ .ok_or(PairError::SubUnderFlow10)?;
+ let balance_1_adjusted = balance_1
+ .checked_mul(1000)
+ .ok_or(PairError::MulOverFlow10)?
+ .checked_sub(amount_1_in.checked_mul(3).ok_or(PairError::MulOverFlow11)?)
+ .ok_or(PairError::SubUnderFlow11)?;
+
+ if balance_0_adjusted
+ .checked_mul(balance_1_adjusted)
+ .ok_or(PairError::MulOverFlow16)?
+ < reserves
+ .0
+ .checked_mul(reserves.1)
+ .ok_or(PairError::MulOverFlow17)?
+ .checked_mul(1000u128.pow(2))
+ .ok_or(PairError::MulOverFlow18)?
+ {
+ return Err(PairError::K)
+ }
+```
+
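+To see the invariant check above in isolation, here is a plain-integer sketch (`k_invariant_holds` is a hypothetical free function; the contract uses checked arithmetic rather than the bare operators used here):
+
+```rust
+/// Hypothetical sketch of the K-invariant check: balances are scaled by
+/// 1000 and reduced by 3 per unit of input amount, i.e. the 0.3% swap fee.
+fn k_invariant_holds(
+    balance_0: u128,
+    balance_1: u128,
+    amount_0_in: u128,
+    amount_1_in: u128,
+    reserve_0: u128,
+    reserve_1: u128,
+) -> bool {
+    let adj_0 = balance_0 * 1000 - amount_0_in * 3;
+    let adj_1 = balance_1 * 1000 - amount_1_in * 3;
+    adj_0 * adj_1 >= reserve_0 * reserve_1 * 1000u128.pow(2)
+}
+
+fn main() {
+    // 10 of token_0 in for 9 of token_1 out against 1000/1000 reserves passes...
+    assert!(k_invariant_holds(1010, 991, 10, 0, 1000, 1000));
+    // ...but taking 15 out for the same input reduces K and is rejected.
+    assert!(!k_invariant_holds(1010, 985, 10, 0, 1000, 1000));
+}
+```
+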
+Then update the pool reserves, and emit a swap event:
+```rust
+ self._update(balance_0, balance_1, reserves.0, reserves.1)?;
+
+ self._emit_swap_event(
+ Self::env().caller(),
+ amount_0_in,
+ amount_1_in,
+ amount_0_out,
+ amount_1_out,
+ to,
+ );
+ Ok(())
+```
+
+Add the empty implementation of **_emit_swap_event**. It should have the `default` keyword, as we will override this function in the Pair contract:
+```rust
+impl<T: Storage<data::Data> + Storage<psp22::Data>> Pair for T {
+ ...
+ default fn _emit_swap_event(
+ &self,
+ _sender: AccountId,
+ _amount_0_in: Balance,
+ _amount_1_in: Balance,
+ _amount_0_out: Balance,
+ _amount_1_out: Balance,
+ _to: AccountId,
+ ) {
+ }
+ ...
+}
+```
+
+Add the Error fields to `PairError` in the *./logics/traits/pair.rs* file:
+```rust
+#[derive(Debug, PartialEq, Eq, scale::Encode, scale::Decode)]
+#[cfg_attr(feature = "std", derive(scale_info::TypeInfo))]
+pub enum PairError {
+ PSP22Error(PSP22Error),
+ TransferError,
+ K,
+ InsufficientLiquidityMinted,
+ InsufficientLiquidityBurned,
+ InsufficientOutputAmount,
+ InsufficientLiquidity,
+ Overflow,
+ InvalidTo,
+ InsufficientInputAmount,
+ SubUnderFlow1,
+ SubUnderFlow2,
+ SubUnderFlow3,
+ SubUnderFlow4,
+ SubUnderFlow5,
+ SubUnderFlow6,
+ SubUnderFlow7,
+ SubUnderFlow8,
+ SubUnderFlow9,
+ SubUnderFlow10,
+ SubUnderFlow11,
+ SubUnderFlow14,
+ MulOverFlow1,
+ MulOverFlow2,
+ MulOverFlow3,
+ MulOverFlow4,
+ MulOverFlow5,
+ MulOverFlow6,
+ MulOverFlow7,
+ MulOverFlow8,
+ MulOverFlow9,
+ MulOverFlow10,
+ MulOverFlow11,
+ MulOverFlow14,
+ MulOverFlow15,
+ MulOverFlow16,
+ MulOverFlow17,
+ MulOverFlow18,
+ DivByZero1,
+ DivByZero2,
+ DivByZero3,
+ DivByZero4,
+ DivByZero5,
+ AddOverflow1,
+}
+```
+
+## 3. Implement Event
+
+In the contract's *./contracts/pair/lib.rs* file, add the event struct and override the event-emitting implementation:
+```rust
+...
+#[ink(event)]
+pub struct Swap {
+ #[ink(topic)]
+ pub sender: AccountId,
+ pub amount_0_in: Balance,
+ pub amount_1_in: Balance,
+ pub amount_0_out: Balance,
+ pub amount_1_out: Balance,
+ #[ink(topic)]
+ pub to: AccountId,
+}
+...
+impl Pair for PairContract {
+ ...
+ fn _emit_swap_event(
+ &self,
+ sender: AccountId,
+ amount_0_in: Balance,
+ amount_1_in: Balance,
+ amount_0_out: Balance,
+ amount_1_out: Balance,
+ to: AccountId,
+ ) {
+ self.env().emit_event(Swap {
+ sender,
+ amount_0_in,
+ amount_1_in,
+ amount_0_out,
+ amount_1_out,
+ to,
+ })
+ }
+}
+...
+```
+
+And that's it! Check your Pair contract with (run in contract folder):
+```console
+cargo contract build
+```
+It should now look like this [branch](https://github.com/AstarNetwork/wasm-tutorial-dex/tree/tutorial/swap_end).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Structure/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Structure/_category_.json
new file mode 100644
index 0000000..65c54b2
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Structure/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Structure",
+ "position": 2
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Structure/file-structure.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Structure/file-structure.md
new file mode 100644
index 0000000..7abaf12
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/Structure/file-structure.md
@@ -0,0 +1,141 @@
+# File & Folder Structure
+
+## Each Contract Should be in its Own Crate
+
+ink! uses [macros](https://use.ink/macros-attributes) to define your contract. A contract is composed of a struct that defines your contract storage, and an implementation block for its associated methods and functions.
+
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+
+#[ink::contract]
+pub mod contract {
+ #[ink(storage)]
+ pub struct ContractStorage {
+ any_value: bool,
+ }
+
+ impl ContractStorage {
+ #[ink(constructor)]
+ pub fn constructor(init_value: bool) -> Self {
+ Self { any_value: init_value }
+ }
+
+ #[ink(message)]
+ pub fn callable_method(&mut self) {
+ self.any_value = !self.any_value;
+ }
+ }
+}
+```
+
+As defining an inherent `impl` for a type that is external to the crate where the type is defined is not [supported](https://doc.rust-lang.org/error_codes/E0116.html), we will need to define a Trait in an external crate and implement it, instead. This functionality is supported using the ink! macro `#[ink::trait_definition]` (see [ink! trait-definitions doc](https://use.ink/basics/trait-definitions/) for more information), but has some limitations, and it is not possible to have a default implementation.
+
+Therefore, the only native solution in ink! is to implement one omnibus contract with all the code in the same file, which is neither easily readable nor maintainable.
+
+## Trait and Generic Implementation in Separate Files
+
+In order to organise the business logic into different files, OpenBrush uses [specialization](https://github.com/rust-lang/rfcs/pull/1210) that permits multiple `impl` blocks to be applied to the same type.
+With OpenBrush, you can define as many Trait and generic implementations as are needed, which allows you to split up your code to more easily implement it into your contract. Of course, specialization also allows you to override a default implementation (if the method or impl is specialized with the [`default`](https://github.com/rust-lang/rfcs/blob/master/text/1210-impl-specialization.md#the-default-keyword) keyword).
+So you define a Trait and a generic implementation in a crate, and within the contract you implement this Trait. If the impl block is empty `{}`, specialization will apply the most specific implementation, which is the one you defined in the file. Every generic implementation in OpenBrush (PSP22, PSP34, ...) uses the `default` keyword, which makes these functions *overridable*.
+
+Define your Trait in a file:
+
+```rust
+#[openbrush::trait_definition]
+pub trait MyTrait {
+ #[ink(message)]
+ fn method(&self) -> u32;
+}
+```
+*trait.rs*
+
+And a generic implementation in another file:
+```rust
+impl<T> MyTrait for T {
+ #[ink(message)]
+ default fn method(&self) -> u32 {
+ unimplemented!()
+ }
+}
+```
+*impl.rs*
+
+And implement it in your contract file:
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+
+#[ink::contract]
+pub mod contract {
+ // Import everything from the crate
+ use external_crate::traits::*;
+ #[ink(storage)]
+ pub struct ContractStorage {
+ any_value: bool,
+ }
+
+ // Implement the Trait
+ // Even if this impl block is empty
+ // Specialization will implement the one defined in the impl.rs file
+ impl MyTrait for ContractStorage {}
+
+ impl ContractStorage {
+ #[ink(constructor)]
+ pub fn constructor(init_value: bool) -> Self {
+ Self { any_value: init_value }
+ }
+
+ #[ink(message)]
+ pub fn callable_method(&mut self) {
+ self.any_value = !self.any_value;
+ }
+ }
+}
+```
+*lib.rs*
+
+## File Structure of the DEX Project
+
+We will put the Trait and generic implementations in separate files, as described below, when building the DEX.
+The contracts will be in the `contracts` folder and the Traits & generic implementation will be in the `logics` folder. All of these will be within the same project workspace.
+
+```bash
+├── uniswap-v2
+│ ├── contracts
+│ └── logics
+├── Cargo.lock
+├── Cargo.toml
+├── .rustfmt
+└── .gitignore
+```
+
+In the `contracts` folder there should be one folder for each contract, each packaged as crates with their own `Cargo.toml` and `lib.rs` files.
+The `logics` folder is a crate which contains a folder for `traits` and another for `impls`.
+Inside the `traits` folder there should be one file per contract. Inside the `impls` there should be one folder per contract and inside, one file for the implementation of the trait, and another 'data' file used for storage.
+
+```bash
+├── logics
+│ ├── impls
+│ │ ├── factory
+│ │ │ ├── mod.rs
+│ │ │ ├── data.rs
+│ │ │ └── factory.rs
+│ │ ├── pair
+│ │ │ ├── mod.rs
+│ │ │ ├── data.rs
+│ │ │ └── pair.rs
+│ │ └── mod.rs
+│ └── traits
+│ ├── mod.rs
+│ ├── factory.rs
+│ ├── pair.rs
+│ ├── math.rs
+├── Cargo.toml
+└── lib.rs
+```
+
+
+## Resources
+OpenBrush - [Setup a project](https://docs.openbrush.io/smart-contracts/example/setup_project).
+
+
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/_category_.json
new file mode 100644
index 0000000..1cabd0f
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Build Uniswap V2 core DEX",
+ "position": 4
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/dex.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/dex.md
new file mode 100644
index 0000000..90fc8b2
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/dex/dex.md
@@ -0,0 +1,43 @@
+---
+sidebar_position: 1
+---
+# Overview
+
+This tutorial is suitable for developers with **advanced** knowledge of ink! and an **intermediate** understanding of Rust. Using the examples provided, you will build and deploy a full implementation of a Uniswap V2 DEX.
+
+# Prerequisites
+
+Experience gained from following the previous guides will be beneficial for this tutorial.
+
+| Tutorial | Difficulty |
+|----------------------------------------------------------------------------|--------------------------------|
+| [Your First Flipper Contract](../flipper-contract/flipper-contract.md) | Basic ink! - Basic Rust |
+| [NFT contract with PSP34](../nft/nft.md) | Intermediate ink! - Basic Rust |
+
+In addition to:
+- An already provisioned [ink! environment](/docs/build/build-on-layer-1/environment/ink_environment.md).
+- Intermediate knowledge of Rust. Visit [here for more information about Rust](https://www.rust-lang.org/learn).
+- General knowledge of AMMs & [Uniswap V2](https://docs.uniswap.org/contracts/v2/overview) (as this tutorial will focus on implementation).
+
+### What Will We Be Doing?
+In this tutorial we will use ink! to build and deploy ports of the following Uniswap V2 Core contracts, originally written in Solidity:
+- [Pair](https://github.com/Uniswap/v2-core/blob/master/contracts/UniswapV2Pair.sol)
+- [Factory](https://github.com/Uniswap/v2-core/blob/master/contracts/UniswapV2Factory.sol)
+
+### What Version of Ink! Will I Need?
+[ink! 4.0.0](https://github.com/paritytech/ink/tree/v4.0.0)
+[OpenBrush 3.0.0](https://github.com/727-Ventures/openbrush-contracts/tree/3.0.0)
+
+### What Topics Will Be Covered in This Guide?
+- The full implementation of Uniswap V2 DEX.
+- File structure for a project composed of several contracts.
+- Trait and generic implementations in separate files.
+- Using safe math in Rust/ink!
+- Porting Solidity to ink!
+- Using modifiers and creating custom modifiers.
+- Using cross-contract calls.
+
+## Summary
+[I. File & Folder structure of the project](./Structure/file-structure.md)
+[II. Pair contract](./Pair/psp22.md)
+[III. Factory contract](./Factory/getters.md)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/_category_.json
new file mode 100644
index 0000000..b2eade6
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Your First Flipper Contract",
+ "position": 1
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/flipper-contract.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/flipper-contract.md
new file mode 100644
index 0000000..4484c83
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/flipper-contract.md
@@ -0,0 +1,28 @@
+---
+sidebar_position: 1
+---
+
+# Prerequisites
+
+This tutorial targets developers with no experience in ink! and a **basic** level of Rust.
+
+| Tutorial | Difficulty |
+|----------------------------------------------------------------------------|--------------------------------|
+| [NFT contract with PSP34](../nft/nft.md) | Intermediate ink! - Basic Rust |
+| [Implement Uniswap V2 core DEX](../dex/dex.md) | Advanced ink! - Basic Rust |
+
+### To follow this tutorial you will need:
+- To [set up your ink! environment](/docs/build/build-on-layer-1/environment/ink_environment.md).
+- Basic Rust knowledge. [Learn Rust](https://www.rust-lang.org/learn).
+
+### What will we do?
+In this tutorial we will implement the most basic contract: [Flipper](https://github.com/paritytech/ink/blob/v4.0.0/examples/flipper/lib.rs) in ink!.
+
+### What will we use?
+- [ink! 4.0.0](https://github.com/paritytech/ink/tree/v4.0.0)
+
+### What will you learn?
+- The anatomy of an ink! contract
+- How to define contract storage
+- How to write callable functions
+- How to unit test your contract
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/flipper.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/flipper.md
new file mode 100644
index 0000000..caaae73
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/flipper-contract/flipper.md
@@ -0,0 +1,301 @@
+---
+sidebar_position: 2
+---
+
+# Flipper Contract
+This is a step-by-step explanation of the process behind building an ink! smart contract, using a simple app called Flipper. The examples provided within this guide will help you develop an understanding of the basic elements and structure of ink! smart contracts.
+
+## What is Flipper?
+Flipper is a basic smart contract that allows the user to toggle a boolean value located in storage to either `true` or `false`. When the flip function is called, the value will change from one to the other.
+
+## Prerequisites
+Please refer to the [previous section](./flipper-contract.md) for the list of prerequisites.
+
+## Flipper Smart Contract
+In a new project folder, execute the following:
+
+```bash
+$ cargo contract new flipper # create a new contract project named flipper
+```
+
+```bash
+$ cd flipper/
+$ cargo contract build # build the flipper contract
+```
+
+💡 If you receive an error such as:
+```bash
+ERROR: cargo-contract cannot build using the "stable" channel. Switch to nightly.
+```
+Execute:
+```bash
+$ rustup default nightly
+```
+to reconfigure the default Rust toolchain to use the nightly build, or
+```
+$ cargo +nightly contract build
+```
+to use the nightly build explicitly, which may be appropriate for developers working exclusively with ink!
+
+Once the operation has finished and the Flipper project environment has been initialized, let's dive a bit deeper into the project structure:
+
+### The File Structure of Flipper
+
+- `target`: Contains build / binary information.
+- `Cargo.lock`: The lock file for dependencies.
+- `Cargo.toml`: The package configuration.
+- `lib.rs`: The contract logic.
+
+### Flipper Contract `lib.rs`
+
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+
+#[ink::contract]
+mod flipper {
+
+ /// Defines the storage of your contract.
+ /// Add new fields to the below struct in order
+ /// to add new static storage fields to your contract.
+ #[ink(storage)]
+ pub struct Flipper {
+ /// Stores a single `bool` value on the storage.
+ value: bool,
+ }
+
+ impl Flipper {
+ /// Constructor that initializes the `bool` value to the given `init_value`.
+ #[ink(constructor)]
+ pub fn new(init_value: bool) -> Self {
+ Self { value: init_value }
+ }
+
+ /// Constructor that initializes the `bool` value to `false`.
+ ///
+ /// Constructors can delegate to other constructors.
+ #[ink(constructor)]
+ pub fn default() -> Self {
+ Self::new(Default::default())
+ }
+
+ /// A message that can be called on instantiated contracts.
+ /// This one flips the value of the stored `bool` from `true`
+ /// to `false` and vice versa.
+ #[ink(message)]
+ pub fn flip(&mut self) {
+ self.value = !self.value;
+ }
+
+ /// Simply returns the current value of our `bool`.
+ #[ink(message)]
+ pub fn get(&self) -> bool {
+ self.value
+ }
+ }
+
+ /// Unit tests in Rust are normally defined within such a `#[cfg(test)]`
+ /// module and test functions are marked with a `#[test]` attribute.
+ /// The below code is technically just normal Rust code.
+ #[cfg(test)]
+ mod tests {
+ /// Imports all the definitions from the outer scope so we can use them here.
+ use super::*;
+
+ /// We test if the default constructor does its job.
+ #[ink::test]
+ fn default_works() {
+ let flipper = Flipper::default();
+ assert_eq!(flipper.get(), false);
+ }
+
+ /// We test a simple use case of our contract.
+ #[ink::test]
+ fn it_works() {
+ let mut flipper = Flipper::new(false);
+ assert_eq!(flipper.get(), false);
+ flipper.flip();
+ assert_eq!(flipper.get(), true);
+ }
+ }
+}
+```
+
+### Contract Structure `lib.rs`
+
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+
+
+#[ink::contract]
+mod flipper {
+
+ // This section defines storage for the contract.
+ #[ink(storage)]
+ pub struct Flipper {
+ }
+
+ // This section defines the functional logic of the contract.
+ impl Flipper {
+ }
+
+ // This section is used for testing, in order to verify contract validity.
+ #[cfg(test)]
+ mod tests {
+ }
+}
+```
+
+### Storage
+
+```rust
+ #[ink(storage)]
+ pub struct Flipper {
+
+ }
+```
+
+This annotates a struct that represents the **contract's internal state.** ([details](https://use.ink/macros-attributes/storage)):
+
+```rust
+#[ink(storage)]
+```
+
+Storage types:
+
+- Rust primitives types
+ - `bool`
+ - `u{8,16,32,64,128}`
+ - `i{8,16,32,64,128}`
+ - `String`
+- Substrate specific types
+ - `AccountId`
+ - `Balance`
+ - `Hash`
+- ink! storage type
+ - `Mapping`
+- Custom data structure [details](https://use.ink/datastructures/custom-datastructure)
+
+This means the contract (Flipper) stores a single `bool` value in storage.
+
+```rust
+#[ink(storage)]
+pub struct Flipper {
+ value: bool,
+}
+```
+
+### Callable Functions
+At the time the contract is deployed, a constructor is responsible for **bootstrapping the initial state** into storage. [For more information](https://use.ink/macros-attributes/constructor).
+
+```rust
+#[ink(constructor)]
+```
+
+The addition of the following function will initialize `bool` to the specified `init_value`.
+
+```rust
+#[ink(constructor)]
+pub fn new(init_value: bool) -> Self {
+ Self { value: init_value }
+}
+```
+
+Contracts can also contain multiple constructors. Here is a constructor that assigns the default value to the `bool`; as in many other languages, the default value of `bool` in Rust is `false`.
+
+```rust
+#[ink(constructor)]
+pub fn default() -> Self {
+ Self::new(Default::default())
+}
+```
+
+The following attribute permits a function to be **publicly dispatchable**, meaning that the function can be called through a message, which is how contracts and external accounts interact with the contract. Find more information [here](https://use.ink/macros-attributes/message). Note that all public functions **must** use the `#[ink(message)]` attribute.
+
+```rust
+#[ink(message)]
+```
+
+The `flip` function modifies a storage item, and the `get` function retrieves one.
+
+```rust
+#[ink(message)]
+pub fn flip(&mut self) {
+ self.value = !self.value;
+}
+
+#[ink(message)]
+pub fn get(&self) -> bool {
+ self.value
+}
+```
+
+💡 If you are simply *reading* from contract storage, you will only need to pass `&self`, but if you wish to *modify* storage items, you will need to explicitly mark it as mutable `&mut self`.
+
+```rust
+impl Flipper {
+
+ #[ink(constructor)]
+ pub fn new(init_value: bool) -> Self {
+ Self { value: init_value }
+ }
+
+ /// Constructor that initializes the `bool` value to `false`.
+ ///
+ /// Constructors can delegate to other constructors.
+ #[ink(constructor)]
+ pub fn default() -> Self {
+ Self::new(Default::default())
+ }
+
+ /// A message that can be called on instantiated contracts.
+ /// This one flips the value of the stored `bool` from `true`
+ /// to `false` and vice versa.
+ #[ink(message)]
+ pub fn flip(&mut self) {
+ self.value = !self.value;
+ }
+
+ /// Simply returns the current value of our `bool`.
+ #[ink(message)]
+ pub fn get(&self) -> bool {
+ self.value
+ }
+ }
+```
+
+### Test
+
+```rust
+#[cfg(test)]
+ mod tests {
+ /// Imports all the definitions from the outer scope so we can use them here.
+ use super::*;
+
+ /// We test if the default constructor does its job.
+ #[ink::test]
+ fn default_works() {
+ let flipper = Flipper::default();
+ assert_eq!(flipper.get(), false);
+ }
+
+ /// We test a simple use case of our contract.
+ #[ink::test]
+ fn it_works() {
+ let mut flipper = Flipper::new(false);
+ assert_eq!(flipper.get(), false);
+ flipper.flip();
+ assert_eq!(flipper.get(), true);
+ }
+ }
+```
+
+### Compile, Deploy and Interact with Contracts
+
+
+Follow this guide to deploy your contract using the [Polkadot.js UI](https://docs.astar.network/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/polkadotjs/). Once deployed, you will be able to interact with it.
+
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/index.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/index.md
new file mode 100644
index 0000000..c764eeb
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/index.md
@@ -0,0 +1,8 @@
+# From Zero to ink! Hero
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/_category_.json
new file mode 100644
index 0000000..4e39af2
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Manic Minter",
+ "position": 3
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-contract.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-contract.md
new file mode 100644
index 0000000..b2388e6
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-contract.md
@@ -0,0 +1,156 @@
+---
+sidebar_position: 3
+---
+
+# ManicMinter Contract
+To keep this tutorial short and focused on cross-contract calls and e2e tests, it will not include the complete contract code. The full code for this example is available [here](https://github.com/swanky-dapps/manic-minter).
+
+## Storage
+To start, delete the boilerplate code from the `lib.rs` file.
+The storage will be defined as follows:
+```rust
+pub struct ManicMinter {
+ /// Contract owner
+ owner: AccountId,
+ /// Oxygen contract address
+ token_contract: AccountId,
+ /// Minting price. Caller must pay this price to mint one new token from Oxygen contract
+ price: Balance,
+}
+```
+
+## Error and Types
+We will define the error and types as follows:
+```rust
+/// The ManicMinter error types.
+pub enum Error {
+ /// Returned if not enough balance to fulfill a request is available.
+ BadMintValue,
+ /// Returned if the token contract account is not set during the contract creation.
+ ContractNotSet,
+ /// The call is not allowed if the caller is not the owner of the contract
+ NotOwner,
+ /// Returned if multiplication of price and amount overflows
+ OverFlow,
+ /// Returned if the cross contract transaction failed
+ TransactionFailed,
+}
+
+pub type Result<T> = core::result::Result<T, Error>;
+```
+## Contract Trait
+The following trait will be used to define the contract interface:
+```rust
+pub trait Minting {
+ /// Mint new tokens from Oxygen contract
+ #[ink(message, payable)]
+ fn manic_mint(&mut self, amount: Balance) -> Result<()>;
+
+ /// Set minting price for one Oxygen token
+ #[ink(message)]
+ fn set_price(&mut self, price: Balance) -> Result<()>;
+
+ /// Get minting price for one Oxygen token
+ #[ink(message)]
+ fn get_price(&self) -> Balance;
+}
+```
+
+## Constructor
+The constructor will be defined as follows:
+```rust
+impl ManicMinter {
+ #[ink(constructor)]
+ pub fn new(contract_acc: AccountId) -> Self {
+ Self {
+ owner: Self::env().caller(),
+ token_contract: contract_acc,
+ price: 0,
+ }
+ }
+}
+```
+## Cross Contract Call
+The `manic_mint` method executes a cross-contract call to the Oxygen contract using the call builder. The method will be defined as follows:
+```rust
+impl Minting for ManicMinter {
+ #[ink(message, payable)]
+ fn manic_mint(&mut self, amount: Balance) -> Result<()> {
+ //---- snip ----
+
+ let mint_result = build_call::<DefaultEnvironment>()
+ .call(self.token_contract)
+ .gas_limit(5000000000)
+ .exec_input(
+ ExecutionInput::new(Selector::new(ink::selector_bytes!("PSP22Mintable::mint")))
+ .push_arg(caller)
+ .push_arg(amount),
+ )
+ .returns::<()>()
+ .try_invoke();
+
+ //---- snip ----
+ }
+}
+```
+Let's go over the code line by line:
+* The `build_call().call()` method is called passing Oxygen contract address as an argument.
+* The `gas_limit` method is invoked with a value of 5000000000 to set the gas limit for the contract execution.
+* The `exec_input` method is used to specify the execution input for the contract call.
+* An `ExecutionInput` instance is created with the selector `PSP22Mintable::mint` with the arguments of `caller` address and `amount`.
+* The `returns` method is called to specify the expected return type of the contract call, which in this case is `()`.
+* The `try_invoke` method is called to execute the contract call and capture any potential errors.
+
+:::note
+
+To learn more about the `Call Builder` and other methods for cross contract call please refer to the [ink! documentation](https://use.ink/basics/cross-contract-calling).
+
+:::
+
+
+## Cargo Update
+Update `Cargo.toml` dependency with the following content:
+```toml
+[dependencies]
+ink = { version = "4.1.0", default-features = false }
+```
+
+Since we will use OpenBrush's PSP22 in our e2e tests, we need to add it under `dev-dependencies`:
+```toml
+[dev-dependencies]
+ink_e2e = "4.1.0"
+openbrush = { tag = "3.1.0", git = "https://github.com/727-Ventures/openbrush-contracts", default-features = false, features = ["psp22", "ownable"] }
+oxygen = { path = "../oxygen", default-features = false, features = ["ink-as-dependency"] }
+```
+
+## Oxygen Contract Update
+Since the ManicMinter tests use the Oxygen contract, it needs to be updated so that it can be consumed as a dependency. The code was already provided in the previous chapter, but please note that:
+1. `Cargo.toml` needs to be updated to become a library:
+```toml
+crate-type = [
+ "rlib",
+]
+```
+2. At the top of the `lib.rs` file of the Oxygen contract, re-export the generated contract reference:
+
+```rust
+pub use self::oxygen::OxygenRef;
+```
+3. Under the `features` section of the `Cargo.toml` file add the following:
+```toml
+ink-as-dependency = []
+```
+:::note
+* This is a prerequisite for the ManicMinter contract to import the Oxygen library in its `Cargo.toml` file with the `ink-as-dependency` feature:
+```toml
+oxygen = { path = "../oxygen", default-features = false, features = ["ink-as-dependency"] }
+```
+:::
+
+## Summary of the ManicMinter Contract Chapter
+* The ManicMinter contract mints new fungible tokens.
+* The ManicMinter contract mints Oxygen tokens by invoking cross contract call to Oxygen contract.
+* The Oxygen contract needs to be set as library with `ink-as-dependency` feature to be used as a dependency in the ManicMinter contract.
+
+
+The full code for this example is available [here](https://github.com/swanky-dapps/manic-minter).
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-e2e.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-e2e.md
new file mode 100644
index 0000000..c5b9a3f
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-e2e.md
@@ -0,0 +1,147 @@
+---
+sidebar_position: 4
+---
+
+# ManicMinter e2e Test
+In this chapter we will write e2e tests for the ManicMinter contract. The tests are written in Rust using the ink! e2e framework and executed against a local Substrate node.
+Just like in the previous chapter, we will not include the complete contract code, to keep things short and focused on the e2e tests.
+## Import Crates
+Let's create a new module `e2e_tests` within the body of the `mod manicminter` and import the following crates:
+```rust
+#[cfg(all(test, feature = "e2e-tests"))]
+mod e2e_tests {
+ use super::*;
+ use crate::manicminter::ManicMinterRef;
+ use ink::primitives::AccountId;
+ use ink_e2e::build_message;
+ use openbrush::contracts::ownable::ownable_external::Ownable;
+ use openbrush::contracts::psp22::psp22_external::PSP22;
+ use oxygen::oxygen::OxygenRef;
+
+ type E2EResult<T> = std::result::Result<T, Box<dyn std::error::Error>>;
+
+ const AMOUNT: Balance = 100;
+ const PRICE: Balance = 10;
+}
+```
+You will notice that we import Openbrush traits to invoke methods from the Oxygen contract, which is implemented using Openbrush's version of PSP22.
+
+## Instantiate Contracts
+We will use the `ink_e2e::Client` to instantiate the contracts. The `ink_e2e::Client` is a wrapper around the `ink_env::test` environment. The `ink_e2e::Client` provides a convenient way to instantiate contracts and invoke contract methods.
+
+In the declarative macro add our contracts as `additional contracts`:
+```rust
+#[ink_e2e::test(additional_contracts = "manicminter/Cargo.toml oxygen/Cargo.toml")]
+async fn e2e_minting_works(mut client: ink_e2e::Client<C, E>) -> E2EResult<()> {
+ let initial_balance: Balance = 1_000_000;
+
+ // Instantiate Oxygen contract
+ let token_constructor = OxygenRef::new(initial_balance);
+ let oxygen_account_id = client
+ .instantiate("oxygen", &ink_e2e::alice(), token_constructor, 0, None)
+ .await
+ .expect("token instantiate failed")
+ .account_id;
+
+ // Instantiate ManicMinter contract
+ let manic_minter_constructor = ManicMinterRef::new(oxygen_account_id);
+ let manic_minter_account_id = client
+ .instantiate(
+ "manic-minter",
+ &ink_e2e::alice(),
+ manic_minter_constructor,
+ 0,
+ None,
+ )
+ .await
+ .expect("ManicMinter instantiate failed")
+ .account_id;
+}
+```
+
+## Set ManicMinter as Owner of Oxygen
+We will use the `build_message` macro to compose the `transfer_ownership` method of the Oxygen contract. The `client.call()` executes the contract call. The `call_dry_run` method with `owner()` message verifies the result of the contract call.
+
+```rust
+// Set ManicMinter contract to be the owner of Oxygen contract
+let change_owner = build_message::<OxygenRef>(oxygen_account_id.clone())
+ .call(|p| p.transfer_ownership(manic_minter_account_id));
+client
+ .call(&ink_e2e::alice(), change_owner, 0, None)
+ .await
+ .expect("calling `transfer_ownership` failed");
+
+// Verify that ManicMinter is the Oxygen contract owner
+let owner = build_message::<OxygenRef>(oxygen_account_id.clone()).call(|p| p.owner());
+let owner_result = client
+ .call_dry_run(&ink_e2e::alice(), &owner, 0, None)
+ .await
+ .return_value();
+assert_eq!(owner_result, manic_minter_account_id);
+```
+
+## Set Price for Oxygen Tokens
+
+We use the `build_message` macro to compose the `set_price` method of the ManicMinter contract. The `client.call()` executes the contract call.
+
+```rust
+// Contract owner sets price
+let price_message = build_message::<ManicMinterRef>(manic_minter_account_id.clone())
+ .call(|manicminter| manicminter.set_price(PRICE));
+client
+ .call(&ink_e2e::alice(), price_message, 0, None)
+ .await
+ .expect("calling `set_price` failed");
+
+```
+## Mint Oxygen Tokens
+We are now ready to execute `manic_mint` method of the ManicMinter contract. We use the `build_message` macro to compose the `manic_mint` method of the ManicMinter contract. The `client.call()` executes the contract call. The `call_dry_run` method with `balance_of()` message verifies the result of the contract call on the Oxygen contract.
+
+```rust
+// Bob mints AMOUNT of Oxygen tokens by calling ManicMinter contract
+let mint_message = build_message::<ManicMinterRef>(manic_minter_account_id.clone())
+ .call(|manicminter| manicminter.manic_mint(AMOUNT));
+client
+ .call(&ink_e2e::bob(), mint_message, PRICE * AMOUNT, None)
+ .await
+ .expect("calling `manic_mint` failed");
+
+// Verify that tokens were minted on Oxygen contract
+let bob_account_id = get_bob_account_id();
+let balance_message = build_message::<OxygenRef>(oxygen_account_id.clone())
+ .call(|p| p.balance_of(bob_account_id));
+let token_balance = client
+ .call_dry_run(&ink_e2e::bob(), &balance_message, 0, None)
+ .await
+ .return_value();
+assert_eq!(token_balance, AMOUNT);
+```
+
+## Execute e2e Test
+The e2e tests invoke a node running on the local machine.
+Before running the tests, set the environment variable `CONTRACTS_NODE` to the path of a local node executable. The node can be Swanky-node or any other node that implements pallet-contracts.
+```bash
+export CONTRACTS_NODE="YOUR_CONTRACTS_NODE_PATH"
+```
+As an example it can be set to the following value:
+```bash
+export CONTRACTS_NODE="/home/p/Documents/astar/target/release/astar-collator"
+```
+After setting your node path, run the following command to execute the e2e tests:
+```bash
+cargo test --features e2e-tests
+```
+## Debugging e2e Test
+If you want to print some variables and messages during the e2e test execution you can use the `println!` macro. The output will be printed in the terminal where the test is executed. To be able to see the printed output you need to run the test with `--nocapture` flag:
+```bash
+cargo test --features e2e-tests -- --nocapture
+```
+
+## Summary of the e2e Test Chapter
+* We imported the required crates for e2e tests.
+* We instantiated the ManicMinter and Oxygen contracts.
+* We set the ManicMinter contract to be the owner of the Oxygen contract.
+* We set the price for Oxygen tokens.
+* We minted Oxygen tokens using the ManicMinter contract.
+
+The full code for this example is available [here](https://github.com/swanky-dapps/manic-minter).
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-minter.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-minter.md
new file mode 100644
index 0000000..7cc9ab6
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-minter.md
@@ -0,0 +1,35 @@
+---
+sidebar_position: 1
+---
+
+# Prerequisites
+This tutorial is suitable for developers with **intermediate** knowledge of ink! and basic understanding of Rust. Previous experience compiling and deploying an ink! smart contract will be beneficial, such as from following the previous Flipper and NFT contract tutorials:
+
+| Tutorial | Difficulty |
+|----------------------------------------------------------------------------|--------------------------------|
+| [Your First Flipper Contract](../flipper-contract/flipper-contract.md) | Basic ink! - Basic Rust |
+| [NFT contract with PSP34](../nft/nft.md) | Intermediate ink! - Basic Rust |
+
+
+## How to Start
+To follow this tutorial you will need:
+- To [set up your ink! environment](/docs/build/build-on-layer-1/environment/ink_environment.md).
+- Basic Rust knowledge. [Learn Rust](https://www.rust-lang.org/learn)
+- Prior knowledge about ERC20 is helpful but not mandatory.
+
+## What will be used?
+- [ink! v4.1.0](https://github.com/paritytech/ink/tree/v4.1.0)
+- [Openbrush 3.1.0](https://github.com/727-Ventures/openbrush-contracts/tree/3.1.0)
+- cargo-contract 2.1.0
+
+## What will you learn?
+- Creating a fungible token that implements the PSP22 standard.
+- Using the Openbrush wizard to create a PSP22 smart contract.
+- Defining a Rust trait and implementing it in the same file.
+- Making a cross-contract call with the Builder pattern.
+- Writing an ink! e2e test for cross-contract calls.
+
+## Summary
+[I. Contract Setup](./manic-setup.md)
+[II. ManicMinter Contract](./manic-contract.md)
+[III. ManicMinter e2e Test](./manic-e2e.md)
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-setup.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-setup.md
new file mode 100644
index 0000000..f209ada
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/manic-minter/manic-setup.md
@@ -0,0 +1,159 @@
+---
+sidebar_position: 2
+---
+
+# ManicMinter Setup
+This is a tutorial on creating a minter contract and a token contract using the ink! smart contract framework. You will learn how to develop two contracts, make a cross-contract call, and test it with the ink! e2e framework.
+
+The minter contract will handle the minting of new fungible tokens, while the token contract will adhere to the PSP22 standard. Our chosen name for the fungible token smart contract is "Oxygen," and the minter contract will be called "ManicMinter."
+
+Once the contracts are created, the ManicMinter contract will become the owner of the Oxygen contract. Only the ManicMinter contract will have the ability to mint Oxygen tokens, and users will acquire these tokens by paying native tokens to the ManicMinter contract at a price determined by its owner.
+
+Let's help Willy to mint some Oxygen tokens through the ManicMinter contract!
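+The purchase mechanics described above can be sketched in plain Rust. Note this is an illustration only: the proportional formula and the `tokens_for_payment` helper are assumptions for this sketch, and the real `ManicMinter` interface is built up over the following sections.
+
+```rust
+// Assumed pricing rule: tokens minted are proportional to the native value paid.
+// (Illustrative only; the real contract's logic is developed in later sections.)
+fn tokens_for_payment(transferred_value: u128, price: u128) -> Option<u128> {
+    if price == 0 {
+        return None; // an unset price must not allow free minting
+    }
+    Some(transferred_value / price)
+}
+
+fn main() {
+    assert_eq!(tokens_for_payment(1_000, 100), Some(10));
+    assert_eq!(tokens_for_payment(50, 100), Some(0)); // underpayment mints nothing
+    assert_eq!(tokens_for_payment(100, 0), None);
+}
+```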
+
+## Prerequisites
+Please refer to the [previous section](./manic-minter.md) for the list of prerequisites.
+
+## ManicMinter and Oxygen Smart Contracts
+### Initial Setup
+In a new project folder, execute the following:
+
+```bash
+$ cargo contract new manicminter
+$ cargo contract new oxygen
+```
+Create the root `Cargo.toml` file with the workspace content:
+```toml
+[workspace]
+members = [
+ "oxygen",
+ "manicminter",
+]
+```
+
+### Oxygen Contract Setup
+Let's create a new ink! smart contract for fungible tokens using the Brushfam Openbrush library for PSP22. In the `oxygen/` folder, add the following to the `Cargo.toml` file:
+```toml
+[package]
+name = "oxygen"
+version = "1.0.0"
+edition = "2021"
+authors = ["The best developer ever"]
+
+[dependencies]
+
+ink = { version = "4.1.0", default-features = false }
+
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
+scale-info = { version = "2.3", default-features = false, features = ["derive"], optional = true }
+
+# Include brush as a dependency and enable default implementation for PSP22 via brush feature
+openbrush = { tag = "3.1.0", git = "https://github.com/727-Ventures/openbrush-contracts", default-features = false, features = ["psp22", "ownable"] }
+
+[lib]
+path = "lib.rs"
+crate-type = [
+ "rlib",
+]
+
+[features]
+default = ["std"]
+std = [
+ "ink/std",
+ "scale/std",
+ "scale-info/std",
+
+ "openbrush/std",
+]
+ink-as-dependency = []
+```
+
+In the same `oxygen/` folder, add the following to the `lib.rs` file:
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+#![feature(min_specialization)]
+
+pub use self::oxygen::OxygenRef;
+
+#[openbrush::contract]
+pub mod oxygen {
+
+ use openbrush::contracts::ownable::*;
+ use openbrush::contracts::psp22::extensions::mintable::*;
+ use openbrush::traits::Storage;
+
+ #[ink(storage)]
+ #[derive(Default, Storage)]
+ pub struct Oxygen {
+ #[storage_field]
+ psp22: psp22::Data,
+ #[storage_field]
+ ownable: ownable::Data,
+ }
+
+ impl PSP22 for Oxygen {}
+ impl Ownable for Oxygen {}
+ impl PSP22Mintable for Oxygen {}
+
+ impl Oxygen {
+ #[ink(constructor)]
+ pub fn new(initial_supply: Balance) -> Self {
+ let mut instance = Self::default();
+ instance
+ ._mint_to(instance.env().caller(), initial_supply)
+ .expect("Should mint");
+ instance._init_with_owner(instance.env().caller());
+ instance
+ }
+ }
+}
+```
+
+This tutorial uses ink! version 4.1.0. If you are using a different version, please update the `Cargo.toml` file accordingly.
+```toml
+ink = { version = "4.1.0", default-features = false }
+```
+
+Use Openbrush version `3.1.0` with ink! version 4.1.0 and add features "psp22" and "ownable".
+
+```toml
+openbrush = { tag = "3.1.0", git = "https://github.com/727-Ventures/openbrush-contracts", default-features = false, features = ["psp22", "ownable"] }
+```
+Since Openbrush 3.1.0 uses the `min_specialization` feature, which is not supported by stable Rust, we need to use the nightly Rust compiler. Create a file `rust-toolchain.toml` in the root of the project with the following content:
+```toml
+[toolchain]
+channel = "nightly-2023-01-10"
+components = [ "rustfmt", "clippy" ]
+targets = [ "wasm32-unknown-unknown"]
+profile = "minimal"
+```
+
+### ManicMinter Contract Setup
+In the `manicminter/` folder, make sure the `Cargo.toml` file uses the same ink! version, and add `ink_e2e` as a dev-dependency:
+```toml
+[dependencies]
+ink = { version = "4.1.0", default-features = false }
+[dev-dependencies]
+ink_e2e = "4.1.0"
+```
+
+
+### Verify the Contracts
+The ManicMinter contract currently contains only boilerplate code; we will add its implementation in the next step. For now, let's verify that the setup is correct by building the contracts. At the root of the project, execute the following:
+```bash
+cargo check
+cargo test
+```
+
+The folder structure for your contract should now look like this:
+```bash
+Cargo.lock
+Cargo.toml
+manicminter/
+oxygen/
+rust-toolchain.toml
+target/
+```
+
+The full code for this example is available [here](https://github.com/swanky-dapps/manic-minter).
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/CustomTrait/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/CustomTrait/_category_.json
new file mode 100644
index 0000000..e9fd52d
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/CustomTrait/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Custom Trait",
+ "position": 3
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/CustomTrait/customtrait.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/CustomTrait/customtrait.md
new file mode 100644
index 0000000..8bb2b59
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/CustomTrait/customtrait.md
@@ -0,0 +1,208 @@
+# Custom Trait
+
+Next, we will expand the contract with more utility methods to gain more control over NFT creation, minting, and payments, covering what most NFT projects will need.
+To start with, we will move `mint()` from the contract's `lib.rs` to a custom trait `PayableMint`.
+
+## Folder Structure for Custom Trait
+Before adding any code, we need to set the scene for the external trait. Create a new `logics` folder with the following empty files:
+```bash
+.
+├── Cargo.toml
+├── contracts
+│ └── shiden34
+│ ├── Cargo.toml
+│ └── lib.rs
+└── logics
+ ├── Cargo.toml
+ ├── impls
+ │ ├── mod.rs
+ │ └── payable_mint
+ │ ├── mod.rs
+ │ └── payable_mint.rs
+ ├── lib.rs
+ └── traits
+ ├── mod.rs
+ └── payable_mint.rs
+```
+
+## Module Linking
+With the extended structure, we need to link all the new modules. Let's start with the `logics` folder.
+The crate's `lib.rs` needs to point to the `impls` and `traits` folders, and since it is the top module of this crate, it also needs the crate-level attributes:
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+
+pub mod impls;
+pub mod traits;
+```
+
+The crate's `Cargo.toml` imports all the ink! and Openbrush crates, and will be referenced by the contract's `Cargo.toml` to import all the methods. We will name this package `payable_mint_pkg`.
+```toml
+[package]
+name = "payable_mint_pkg"
+version = "3.1.0"
+authors = ["Astar builder"]
+edition = "2021"
+
+[dependencies]
+ink = { version = "4.2.1", default-features = false }
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
+scale-info = { version = "2.6", default-features = false, features = ["derive"], optional = true }
+openbrush = { tag = "v4.0.0-beta", git = "https://github.com/Brushfam/openbrush-contracts", default-features = false, features = ["psp34", "ownable"] }
+
+[lib]
+path = "lib.rs"
+crate-type = ["rlib"]
+
+[features]
+default = ["std"]
+std = [
+ "ink/std",
+ "scale/std",
+ "scale-info",
+ "openbrush/std",
+]
+```
+Add the same `mod.rs` file to each of the `traits`, `impls`, and `impls/payable_mint` folders:
+```rust
+pub mod payable_mint;
+```
+As a last step, add a dependency on `payable_mint_pkg` in the contract's `Cargo.toml`:
+```toml
+payable_mint_pkg = { path = "../../logics", default-features = false }
+
+[features]
+default = ["std"]
+std = [
+ // ...
+ "payable_mint_pkg/std",
+]
+```
+
+## Define Custom Trait
+In the `logics/traits/payable_mint.rs` file, add a trait definition for `PayableMint`:
+```rust
+use openbrush::{
+ contracts::{
+ psp34::PSP34Error,
+ psp34::extensions::enumerable::*
+ },
+ traits::{
+ AccountId,
+ },
+};
+
+#[openbrush::wrapper]
+pub type PayableMintRef = dyn PayableMint;
+
+#[openbrush::trait_definition]
+pub trait PayableMint {
+ #[ink(message, payable)]
+ fn mint(&mut self, account: AccountId, id: Id) -> Result<(), PSP34Error>;
+}
+```
+
+You may have noticed some unusual macro commands in these examples. They will be explained in greater detail in the next section as we go over the process of building a DEX.
+
+## Move `mint()` Function to Custom Trait Implementation
+Let's move the `mint()` function from the contract's `lib.rs` to the newly created `logics/impls/payable_mint/payable_mint.rs` file, as we do not want any duplicated calls in the contract.
+
+```rust
+use openbrush::traits::DefaultEnv;
+use openbrush::{
+ contracts::psp34::*,
+ traits::{AccountId, String},
+};
+
+#[openbrush::trait_definition]
+pub trait PayableMintImpl: psp34::InternalImpl {
+ #[ink(message, payable)]
+ fn mint(&mut self, account: AccountId, id: Id) -> Result<(), PSP34Error> {
+ if Self::env().transferred_value() != 1_000_000_000_000_000_000 {
+ return Err(PSP34Error::Custom(String::from("BadMintValue")));
+ }
+
+ psp34::InternalImpl::_mint_to(self, account, id)
+ }
+}
+
+```
+
+The last remaining step is to import and implement `PayableMint` in our contract:
+
+```rust
+use payable_mint_pkg::impls::payable_mint::*;
+
+...
+
+impl payable_mint::PayableMintImpl for Shiden34 {}
+```
+
+The contract with all its changes should now appear as something like this:
+
+```rust
+#![cfg_attr(not(feature = "std"), no_std, no_main)]
+
+#[openbrush::implementation(PSP34, PSP34Enumerable, PSP34Metadata, PSP34Mintable, Ownable)]
+#[openbrush::contract]
+pub mod shiden34 {
+ use openbrush::traits::Storage;
+ use payable_mint_pkg::impls::payable_mint::*;
+
+ #[ink(storage)]
+ #[derive(Default, Storage)]
+ pub struct Shiden34 {
+ #[storage_field]
+ psp34: psp34::Data,
+ #[storage_field]
+ ownable: ownable::Data,
+ #[storage_field]
+ metadata: metadata::Data,
+ #[storage_field]
+ enumerable: enumerable::Data,
+ }
+
+ #[overrider(PSP34Mintable)]
+ #[openbrush::modifiers(only_owner)]
+ fn mint(&mut self, account: AccountId, id: Id) -> Result<(), PSP34Error> {
+ psp34::InternalImpl::_mint_to(self, account, id)
+ }
+
+ impl payable_mint::PayableMintImpl for Shiden34 {}
+
+ impl Shiden34 {
+ #[ink(constructor)]
+ pub fn new() -> Self {
+ let mut _instance = Self::default();
+ ownable::Internal::_init_with_owner(&mut _instance, Self::env().caller());
+ psp34::Internal::_mint_to(&mut _instance, Self::env().caller(), Id::U8(1))
+ .expect("Can mint");
+ let collection_id = psp34::PSP34Impl::collection_id(&_instance);
+ metadata::Internal::_set_attribute(
+ &mut _instance,
+ collection_id.clone(),
+ String::from("name"),
+ String::from("Shiden34"),
+ );
+ metadata::Internal::_set_attribute(
+ &mut _instance,
+ collection_id,
+ String::from("symbol"),
+ String::from("SH34"),
+ );
+ _instance
+ }
+ }
+}
+
+```
+Format your code with:
+```bash
+cargo fmt --all
+```
+
+Check if code compiles:
+```bash
+cargo check
+```
+
+At this stage, your code should look something like [this](https://github.com/swanky-dapps/nft/tree/tutorial/trait-step3).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Events/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Events/_category_.json
new file mode 100644
index 0000000..ceba74c
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Events/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Events",
+ "position": 6
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Events/events.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Events/events.md
new file mode 100644
index 0000000..1c59802
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Events/events.md
@@ -0,0 +1,85 @@
+# Events
+The last thing our contract will need at this point is an event handler.
+
+## What are Events for Smart Contracts?
+Events are important for smart contracts because they facilitate communication between the contract itself and the user interface. In traditional Web2 development, a server response is provided in a callback to the frontend. In Web3, when a transaction is executed, smart contracts emit events to the blockchain that the frontend is able to process.
+
+## Minting Event
+In our contract, an event should be emitted when a token is minted.
+One could expect that by calling the Openbrush `_mint_to()` function an event will be emitted, but upon closer examination we can see that `_emit_transfer_event()` has an empty default [implementation](https://github1s.com/Supercolony-net/openbrush-contracts/blob/main/contracts/src/token/psp34/psp34.rs#L151-L152). This grants developers flexibility to create events that are suitable for their own needs.
+
+```rust
+default fn _emit_transfer_event(&self, _from: Option<AccountId>, _to: Option<AccountId>, _id: Id) {}
+```
+
+Let's define the two events that are required for token handling, *Transfer* and *Approval*, in the contract's `lib.rs` file. Please note that there is no separate `Mint` event, as minting is covered by the *Transfer* event, in which case `from` will be `None`.
+```rust
+use ink::codegen::{EmitEvent, Env};
+
+/// Event emitted when a token transfer occurs.
+#[ink(event)]
+pub struct Transfer {
+    #[ink(topic)]
+    from: Option<AccountId>,
+    #[ink(topic)]
+    to: Option<AccountId>,
+    #[ink(topic)]
+    id: Id,
+}
+
+/// Event emitted when a token approval occurs.
+#[ink(event)]
+pub struct Approval {
+    #[ink(topic)]
+    from: AccountId,
+    #[ink(topic)]
+    to: AccountId,
+    #[ink(topic)]
+    id: Option<Id>,
+    approved: bool,
+}
+```
+
+Override the default event emission function:
+```rust
+#[overrider(psp34::Internal)]
+fn _emit_transfer_event(&self, from: Option<AccountId>, to: Option<AccountId>, id: Id) {
+    self.env().emit_event(Transfer { from, to, id });
+}
+
+#[overrider(psp34::Internal)]
+fn _emit_approval_event(&self, from: AccountId, to: AccountId, id: Option<Id>, approved: bool) {
+    self.env().emit_event(Approval {
+        from,
+        to,
+        id,
+        approved,
+    });
+}
+```
+
+## Update Unit Test
+As a final check, let's add an event check at the end of our unit test. Since the test minted 5 tokens, we should expect 5 events to be emitted.
+```rust
+assert_eq!(5, ink::env::test::recorded_events().count());
+```
+Format your code with:
+```bash
+cargo fmt --all
+```
+
+Run unit test:
+```bash
+cargo test
+```
+
+At this stage, your code should look something like [this](https://github.com/swanky-dapps/nft/tree/tutorial/events).
+
+## Next Step
+Congratulations! You've made it through all the steps required to build your NFT Contract!
+
+As a next step, review the code in the [main](https://github.com/swanky-dapps/nft/) branch for the repository used for this tutorial. There you can enhance your knowledge about:
+- Improving the unit test coverage.
+- Adding new useful functions.
+- End-to-end testing.
+- Improving error handling.
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Override/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Override/_category_.json
new file mode 100644
index 0000000..d0fb005
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Override/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Override mint()",
+ "position": 2
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Override/override.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Override/override.md
new file mode 100644
index 0000000..756fc44
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Override/override.md
@@ -0,0 +1,72 @@
+# Override `mint()` Method
+
+## Mint allowed only for Owner
+
+You may have noticed while using the Openbrush wizard that, prior to adding the Security -> Ownable trait, the contract does not override the `mint()` function, so by default anyone is able to mint new tokens.
+
+
+However, after including the *Ownable* trait, the default `mint()` function is overridden and restricted so that only the contract owner can call it.
+
+```rust
+#[overrider(PSP34Mintable)]
+#[openbrush::modifiers(only_owner)]
+fn mint(&mut self, account: AccountId, id: Id) -> Result<(), PSP34Error>{
+ psp34::InternalImpl::_mint_to(self, account, id)
+}
+```
+
+The wizard also creates a line in the `new()` constructor that sets the initial owner of the contract to the account address used to deploy it:
+
+```rust
+_instance._init_with_owner(_instance.env().caller());
+```
+
+At this stage we will make a few changes:
+* We do not want tokens to be mintable only by the contract owner; anyone who pays a fee should be able to mint tokens as well.
+* We would like to charge a fee of 1 SDN token per token minted (or any other native token, depending on the network).
+* The constructor should not call the mint function.
+
+
+## Make the mint() Function Payable
+Making a function payable in an ink! contract is relatively straightforward. Simply add `payable` to the ink! macro as follows:
+
+```rust
+#[ink(message, payable)]
+```
+However, since `PSP34Mintable` is an imported trait, the `payable` attribute can't be overridden in the current state of Openbrush. Therefore, we will need to introduce a new trait and implement it in our contract.
+The `PSP34Mintable` trait and its `mint()` function will no longer be needed, so they can be removed from the contract.
+
+Let's introduce a new `mint()` function to the contract, and add it after the constructor.
+
+```rust
+#[ink(message, payable)]
+pub fn mint(&mut self, account: AccountId, id: Id) -> Result<(), PSP34Error> {
+ psp34::InternalImpl::_mint_to(self, account, id)
+}
+```
+
+This makes the function payable, meaning `mint()` can receive native tokens. However, we still need to check the amount of funds transferred with the call.
+If the transferred value is not exactly 1 native token, the `mint()` method returns an error message, which can be customized to suit your needs.
+
+```rust
+#[ink(message, payable)]
+pub fn mint(&mut self, account: AccountId, id: Id) -> Result<(), PSP34Error> {
+ if self.env().transferred_value() != 1_000_000_000_000_000_000 {
+ return Err(PSP34Error::Custom(String::from("BadMintValue")));
+ }
+
+ psp34::InternalImpl::_mint_to(self, account, id)
+}
+```
+
+Format your code with:
+```bash
+cargo fmt --all
+```
+
+Check if code compiles:
+```bash
+cargo check
+```
+
+At this stage, your code should look something like [this](https://github.com/swanky-dapps/nft/tree/tutorial/mint-step2).
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintImpl/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintImpl/_category_.json
new file mode 100644
index 0000000..9e45234
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintImpl/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "PayableMint Trait Implementation",
+ "position": 5
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintImpl/payablemintimpl.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintImpl/payablemintimpl.md
new file mode 100644
index 0000000..b521a28
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintImpl/payablemintimpl.md
@@ -0,0 +1,268 @@
+# PayableMint Trait Implementation
+In this section we will:
+* Define a new data type.
+* Implement the functions defined in the `PayableMint` trait from the previous section in the file `logics/impls/payable_mint/payable_mint.rs`.
+* Update the contract's constructor to accept new parameters.
+* Write a unit test for `mint()`.
+
+## New Type Definition
+Since the contract will accept new constructor parameters, we need storage to keep them. Let's create a new file called `logics/impls/payable_mint/types.rs` and add a new type `Data`:
+
+```rust
+use openbrush::traits::Balance;
+
+#[derive(Default, Debug)]
+#[openbrush::storage_item]
+pub struct Data {
+    pub last_token_id: u64,
+    pub max_supply: u64,
+    pub price_per_mint: Balance,
+}
+```
+
+Don't forget to update the `logics/impls/payable_mint/mod.rs` file with:
+
+```rust
+pub mod types;
+```
+
+Since we introduced data storage, we need to add `Storage` trait bounds in `logics/impls/payable_mint/payable_mint.rs`:
+
+```rust
+use crate::impls::payable_mint::types::Data;
+
+#[openbrush::trait_definition]
+pub trait PayableMintImpl:
+    Storage<Data>
+    + Storage<psp34::Data>
+    + Storage<ownable::Data>
+    + Storage<metadata::Data>
+{...}
+```
+
+## `mint()` Implementation
+Several checks need to be performed before the token mint can proceed. To keep our `mint()` function easy to read, let's introduce an `Internal` trait with two helper functions, `check_value()` and `check_amount()`, defined and implemented in the same implementation file `logics/impls/payable_mint/payable_mint.rs`:
+
+```rust
+pub trait Internal: Storage<Data> + psp34::Internal {
+    /// Check if the transferred mint value is as expected
+    fn check_value(&self, transferred_value: u128, mint_amount: u64) -> Result<(), PSP34Error> {
+        if let Some(value) = (mint_amount as u128).checked_mul(self.data::<Data>().price_per_mint) {
+            if transferred_value == value {
+                return Ok(());
+            }
+        }
+        Err(PSP34Error::Custom(String::from("BadMintValue")))
+    }
+
+    /// Check the amount of tokens to be minted
+    fn check_amount(&self, mint_amount: u64) -> Result<(), PSP34Error> {
+        if mint_amount == 0 {
+            return Err(PSP34Error::Custom(String::from("CannotMintZeroTokens")));
+        }
+        if let Some(amount) = self.data::<Data>().last_token_id.checked_add(mint_amount) {
+            if amount <= self.data::<Data>().max_supply {
+                return Ok(());
+            }
+        }
+        Err(PSP34Error::Custom(String::from("CollectionIsFull")))
+    }
+}
+```
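+The two guards can be exercised in isolation. The following is a dependency-free sketch of the same logic, in which plain integers and error strings stand in for contract storage and `PSP34Error`:
+
+```rust
+// Standalone sketch of check_value()/check_amount(); string errors stand in
+// for PSP34Error::Custom, function arguments stand in for contract storage.
+fn check_value(transferred: u128, price_per_mint: u128, mint_amount: u64) -> Result<(), &'static str> {
+    // checked_mul rejects overflowing totals instead of wrapping around
+    match (mint_amount as u128).checked_mul(price_per_mint) {
+        Some(value) if transferred == value => Ok(()),
+        _ => Err("BadMintValue"),
+    }
+}
+
+fn check_amount(last_token_id: u64, max_supply: u64, mint_amount: u64) -> Result<(), &'static str> {
+    if mint_amount == 0 {
+        return Err("CannotMintZeroTokens");
+    }
+    match last_token_id.checked_add(mint_amount) {
+        Some(n) if n <= max_supply => Ok(()),
+        _ => Err("CollectionIsFull"),
+    }
+}
+
+fn main() {
+    assert!(check_value(500, 100, 5).is_ok());
+    assert_eq!(check_value(499, 100, 5), Err("BadMintValue"));
+    assert!(check_amount(8, 10, 2).is_ok());
+    assert_eq!(check_amount(8, 10, 3), Err("CollectionIsFull"));
+    assert_eq!(check_amount(0, 10, 0), Err("CannotMintZeroTokens"));
+}
+```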
+Using these helper functions our `mint()` implementation will look like this:
+```rust
+#[ink(message, payable)]
+fn mint(&mut self, to: AccountId, mint_amount: u64) -> Result<(), PSP34Error> {
+    self.check_value(Self::env().transferred_value(), mint_amount)?;
+    self.check_amount(mint_amount)?;
+
+    let next_to_mint = self.data::<Data>().last_token_id + 1; // first mint id is 1
+    let mint_offset = next_to_mint + mint_amount;
+
+    for mint_id in next_to_mint..mint_offset {
+        psp34::InternalImpl::_mint_to(self, to, Id::U64(mint_id))?;
+        self.data::<Data>().last_token_id += 1;
+    }
+
+    Ok(())
+}
+```
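+The id bookkeeping in the loop above can be checked with a small standalone sketch (plain functions, no contract storage):
+
+```rust
+// Mirrors the loop in mint(): ids start at last_token_id + 1 and are sequential.
+fn minted_ids(last_token_id: u64, mint_amount: u64) -> Vec<u64> {
+    let next_to_mint = last_token_id + 1; // first mint id is 1
+    (next_to_mint..next_to_mint + mint_amount).collect()
+}
+
+fn main() {
+    assert_eq!(minted_ids(0, 3), vec![1, 2, 3]); // fresh collection
+    assert_eq!(minted_ids(3, 2), vec![4, 5]);    // continues from the last id
+}
+```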
+## `withdraw()` Implementation
+This function allows the contract owner to withdraw funds from the contract:
+
+```rust
+/// Withdraws funds to contract owner
+#[ink(message)]
+#[openbrush::modifiers(only_owner)]
+fn withdraw(&mut self) -> Result<(), PSP34Error> {
+    let balance = Self::env().balance();
+    let current_balance = balance
+        .checked_sub(Self::env().minimum_balance())
+        .unwrap_or_default();
+    let owner = self.data::<ownable::Data>().owner.get().unwrap().unwrap();
+    Self::env()
+        .transfer(owner, current_balance)
+        .map_err(|_| PSP34Error::Custom(String::from("WithdrawalFailed")))?;
+    Ok(())
+}
+```
+## `set_base_uri()` and `token_uri()` Implementation
+
+To make the code cleaner, let's create an additional helper function, `token_exists()`, and add it to the `Internal` trait:
+
+```rust
+pub trait Internal: Storage<Data> + psp34::Internal {
+    ...
+
+    /// Check if token is minted
+    fn token_exists(&self, id: Id) -> Result<(), PSP34Error> {
+        self._owner_of(&id).ok_or(PSP34Error::TokenNotExists)?;
+        Ok(())
+    }
+}
+```
+
+Now the implementation of `set_base_uri()` and `token_uri()` will look like this:
+```rust
+...
+/// Set new value for the baseUri
+#[ink(message)]
+#[openbrush::modifiers(only_owner)]
+fn set_base_uri(&mut self, uri: String) -> Result<(), PSP34Error> {
+    let id = PSP34Impl::collection_id(self);
+    metadata::Internal::_set_attribute(self, id, String::from("baseUri"), uri);
+
+    Ok(())
+}
+
+/// Get URI from token ID
+#[ink(message)]
+fn token_uri(&self, token_id: u64) -> Result<PreludeString, PSP34Error> {
+    self.token_exists(Id::U64(token_id))?;
+    let base_uri = PSP34MetadataImpl::get_attribute(
+        self,
+        PSP34Impl::collection_id(self),
+        String::from("baseUri"),
+    );
+    let token_uri = base_uri.unwrap() + &token_id.to_string() + &String::from(".json");
+    Ok(token_uri)
+}
+```
+
+## Update Shiden34 Contract
+Since we have added a new type `Data`, let's import it into our `Shiden34` contract:
+```rust
+use payable_mint_pkg::impls::payable_mint::*;
+```
+
+Add a new element in the `struct Shiden34`:
+```rust
+...
+#[storage_field]
+payable_mint: types::Data,
+```
+
+Update the constructor to accept new parameters:
+```rust
+...
+#[ink(constructor)]
+pub fn new(
+ name: String,
+ symbol: String,
+ base_uri: String,
+ max_supply: u64,
+ price_per_mint: Balance,
+) -> Self {
+ let mut instance = Self::default();
+ let caller = instance.env().caller();
+ ownable::InternalImpl::_init_with_owner(&mut instance, caller);
+ let collection_id = psp34::PSP34Impl::collection_id(&instance);
+ metadata::InternalImpl::_set_attribute(
+ &mut instance,
+ collection_id.clone(),
+ String::from("name"),
+ name,
+ );
+ metadata::InternalImpl::_set_attribute(
+ &mut instance,
+ collection_id.clone(),
+ String::from("symbol"),
+ symbol,
+ );
+ metadata::InternalImpl::_set_attribute(
+ &mut instance,
+ collection_id,
+ String::from("baseUri"),
+ base_uri,
+ );
+ instance.payable_mint.max_supply = max_supply;
+ instance.payable_mint.price_per_mint = price_per_mint;
+ instance.payable_mint.last_token_id = 0;
+ instance
+}
+```
+
+## Compose Unit Test
+Let's write a simple unit test to check the `mint()` function. In ink! contracts, unit tests need to be placed inside the contract module, and by default Alice is the account that instantiates the contract.
+After all imports, let's write a helper method to initiate the contract:
+```rust
+#[cfg(test)]
+mod tests {
+ use super::*;
+ use crate::shiden34::PSP34Error::*;
+ use ink::env::test;
+
+ const PRICE: Balance = 100_000_000_000_000_000;
+
+ fn init() -> Shiden34 {
+ const BASE_URI: &str = "ipfs://myIpfsUri/";
+ const MAX_SUPPLY: u64 = 10;
+ Shiden34::new(
+ String::from("Shiden34"),
+ String::from("SH34"),
+ String::from(BASE_URI),
+ MAX_SUPPLY,
+ PRICE,
+ )
+ }
+}
+```
+
+Test minting 5 tokens to Bob's account. The call to `mint()` will be made from Bob's account:
+```rust
+#[ink::test]
+fn mint_multiple_works() {
+    let mut sh34 = init();
+    let accounts = test::default_accounts::<ink::env::DefaultEnvironment>();
+    set_sender(accounts.bob);
+    let num_of_mints: u64 = 5;
+
+    assert_eq!(PSP34Impl::total_supply(&sh34), 0);
+    test::set_value_transferred::<ink::env::DefaultEnvironment>(
+        PRICE * num_of_mints as u128,
+    );
+    assert!(payable_mint::PayableMintImpl::mint(&mut sh34, accounts.bob, num_of_mints).is_ok());
+    assert_eq!(PSP34Impl::total_supply(&sh34), num_of_mints as u128);
+    assert_eq!(PSP34Impl::balance_of(&sh34, accounts.bob), 5);
+    assert_eq!(PSP34EnumerableImpl::owners_token_by_index(&sh34, accounts.bob, 0), Ok(Id::U64(1)));
+    assert_eq!(PSP34EnumerableImpl::owners_token_by_index(&sh34, accounts.bob, 1), Ok(Id::U64(2)));
+    assert_eq!(PSP34EnumerableImpl::owners_token_by_index(&sh34, accounts.bob, 2), Ok(Id::U64(3)));
+    assert_eq!(PSP34EnumerableImpl::owners_token_by_index(&sh34, accounts.bob, 3), Ok(Id::U64(4)));
+    assert_eq!(PSP34EnumerableImpl::owners_token_by_index(&sh34, accounts.bob, 4), Ok(Id::U64(5)));
+    assert_eq!(
+        PSP34EnumerableImpl::owners_token_by_index(&sh34, accounts.bob, 5),
+        Err(TokenNotExists)
+    );
+}
+
+fn set_sender(sender: AccountId) {
+    ink::env::test::set_caller::<ink::env::DefaultEnvironment>(sender);
+}
+```
+
+Run unit test:
+```bash
+cargo test
+```
+
+At this stage, your code should look something like [this](https://github.com/swanky-dapps/nft/tree/tutorial/payablemint-step5).
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintTrait/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintTrait/_category_.json
new file mode 100644
index 0000000..0a31cc6
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintTrait/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "PayableMint Trait",
+ "position": 4
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintTrait/payableminttrait.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintTrait/payableminttrait.md
new file mode 100644
index 0000000..f9bf886
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/PayableMintTrait/payableminttrait.md
@@ -0,0 +1,60 @@
+# PayableMint Trait
+So far, our `mint()` function is quite generic, giving freedom to a caller to mint any token, but at the same time not allowing insight into which tokens have already been minted. In this section we will more clearly define `mint()`, and add several utility functions commonly found in popular NFT projects, that will make this example contract more suitable for production release.
+
+## Extending the Trait with Utility Functions
+Changes are applied in the `logics/traits/payable_mint.rs` file.
+
+### `mint(to: AccountId, mint_amount: u64)`
+The `mint()` function will now accept an NFT receiver account and the number of tokens to be minted.
+This allows the contract to control which token id is minted next, and to mint more than one token at a time.
+
+### `withdraw()`
+Since our contract accepts native-token fees for minting, the owner needs to be able to withdraw the funds; otherwise they will be locked in the contract forever. This function is set with an `only_owner` modifier, which restricts withdrawals to the contract owner; the funds are transferred to the owner's address.
+
+### `set_base_uri(uri: PreludeString)`
+First we need to import `String` from the ink! prelude and rename it to avoid confusion with the Openbrush `String` implementation. The difference is that the Openbrush `String` is in fact a vector of `u8` elements, and since we expect users to work with UTF-8 strings, we will use the `String` from the prelude.
+```rust
+use ink::prelude::string::String as PreludeString;
+```
+This function can change the `base_uri` of our collection. It is not used frequently, but comes in handy if the collection metadata becomes corrupted and requires updating. The initial `base_uri` is set during contract creation, which is described in the next section.
+
+### `token_uri(token_id: u64) -> PreludeString`
+Given a `token_id`, this method returns the full `uri` for the token's metadata.
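+
+A common convention, and the one used later in this tutorial's implementation, is to append the token id and a `.json` suffix to the collection's `base_uri`; as a plain-Rust sketch:
+
+```rust
+// Sketch of the usual base_uri + token_id + ".json" convention.
+fn token_uri(base_uri: &str, token_id: u64) -> String {
+    format!("{}{}.json", base_uri, token_id)
+}
+```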
+
+### `max_supply() -> u64;`
+Read the max supply of tokens for this collection.
+
+### `price() -> Balance;`
+Read the token price.
+
+## Full Trait Definition
+At this stage, your `logics/traits/payable_mint.rs` file should look something like this:
+```rust
+use ink::prelude::string::String as PreludeString;
+
+use openbrush::{
+ contracts::psp34::PSP34Error,
+ traits::{
+ AccountId,
+ Balance,
+ },
+};
+
+#[openbrush::wrapper]
+pub type PayableMintRef = dyn PayableMint;
+
+#[openbrush::trait_definition]
+pub trait PayableMint {
+ #[ink(message, payable)]
+ fn mint(&mut self, to: AccountId, mint_amount: u64) -> Result<(), PSP34Error>;
+ #[ink(message)]
+ fn withdraw(&mut self) -> Result<(), PSP34Error>;
+ #[ink(message)]
+ fn set_base_uri(&mut self, uri: PreludeString) -> Result<(), PSP34Error>;
+ #[ink(message)]
+    fn token_uri(&self, token_id: u64) -> Result<PreludeString, PSP34Error>;
+ #[ink(message)]
+ fn max_supply(&self) -> u64;
+ #[ink(message)]
+ fn price(&self) -> Balance;
+}
+```
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Wizard/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Wizard/_category_.json
new file mode 100644
index 0000000..bb573a2
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Wizard/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Wizard",
+ "position": 1
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Wizard/index.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Wizard/index.md
new file mode 100644
index 0000000..d625e19
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/Wizard/index.md
@@ -0,0 +1,191 @@
+# Openbrush Wizard
+
+## Use the Wizard to generate generic PSP34 code
+
+To create a smart contract which follows the PSP34 standard, use the Openbrush Wizard:
+1. Open the [Openbrush.io](https://openbrush.io/) website and go to the bottom of the page.
+2. Select PSP34.
+3. Select the version to match the rest of the tutorial. Check *What will be used* in the [opening chapter](/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/nft.md#what-will-be-used).
+4. Name your contract. In this tutorial we will use `Shiden34`.
+5. Add your symbol name. In this tutorial we will use `SH34`.
+6. Select extensions: *Metadata*, *Mintable*, *Enumerable*.
+7. Under Security pick *Ownable*.
+8. Copy `lib.rs` and `Cargo.toml`.
+
+:::note
+At the time of writing this tutorial, the Openbrush wizard does not generate the contract properly. Use the code from this tutorial.
+:::
+
+Your `lib.rs` file should look like this:
+```rust
+#![cfg_attr(not(feature = "std"), no_std, no_main)]
+
+#[openbrush::implementation(PSP34, Ownable, PSP34Enumerable, PSP34Metadata, PSP34Mintable)]
+#[openbrush::contract]
+pub mod shiden34 {
+ use openbrush::traits::Storage;
+
+ #[ink(storage)]
+ #[derive(Default, Storage)]
+ pub struct Shiden34 {
+ #[storage_field]
+ psp34: psp34::Data,
+ #[storage_field]
+ ownable: ownable::Data,
+ #[storage_field]
+ metadata: metadata::Data,
+ #[storage_field]
+ enumerable: enumerable::Data,
+ }
+
+ #[overrider(PSP34Mintable)]
+ #[openbrush::modifiers(only_owner)]
+ fn mint(&mut self, account: AccountId, id: Id) -> Result<(), PSP34Error> {
+ psp34::InternalImpl::_mint_to(self, account, id)
+ }
+
+ impl Shiden34 {
+ #[ink(constructor)]
+ pub fn new() -> Self {
+ let mut _instance = Self::default();
+ ownable::Internal::_init_with_owner(&mut _instance, Self::env().caller());
+ psp34::Internal::_mint_to(&mut _instance, Self::env().caller(), Id::U8(1))
+ .expect("Can mint");
+ let collection_id = psp34::PSP34Impl::collection_id(&_instance);
+ metadata::Internal::_set_attribute(
+ &mut _instance,
+ collection_id.clone(),
+ String::from("name"),
+ String::from("Shiden34"),
+ );
+ metadata::Internal::_set_attribute(
+ &mut _instance,
+ collection_id,
+ String::from("symbol"),
+ String::from("SH34"),
+ );
+ _instance
+ }
+ }
+}
+```
+
+Your `Cargo.toml` should now look like this:
+```toml
+[package]
+name = "shiden34"
+version = "3.1.0"
+authors = ["Astar builder"]
+edition = "2021"
+
+[dependencies]
+ink = { version = "4.2.1", default-features = false }
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
+scale-info = { version = "2.6", default-features = false, features = ["derive"], optional = true }
+openbrush = { tag = "v4.0.0-beta", git = "https://github.com/Brushfam/openbrush-contracts", default-features = false, features = ["psp34", "ownable"] }
+
+[lib]
+path = "lib.rs"
+
+[features]
+default = ["std"]
+std = [
+ "ink/std",
+ "scale/std",
+ "scale-info/std",
+
+ "openbrush/std",
+]
+ink-as-dependency = []
+```
+
+Create this folder structure manually, or use `swanky-cli`:
+```bash
+.
+└── contracts
+ └── shiden34
+ ├── Cargo.toml
+ └── lib.rs
+```
+
+Add another `Cargo.toml` with a workspace definition to your project's root folder:
+```toml
+[workspace]
+members = [
+ "contracts/**",
+]
+
+exclude = [
+]
+```
+And your folder structure will look like:
+```bash
+.
+├── Cargo.toml
+└── contracts
+ └── shiden34
+ ├── Cargo.toml
+ └── lib.rs
+```
+You are now ready to check that everything is set up correctly.
+Run this in the project's root folder:
+```bash
+cargo check
+```
+
+## Examine Openbrush Traits
+Let's examine what we have inside module `shiden34` (lib.rs) so far:
+* Defined structure `Shiden34` for contract storage.
+* Implemented constructor `new()` for `Shiden34` structure.
+* Implemented Openbrush traits *PSP34, PSP34Metadata, PSP34Mintable, PSP34Enumerable, Ownable* for structure `Shiden34`.
+* Overridden the `mint()` method from trait *PSP34Mintable*. More about this in the next section.
+
+Each of the implemented traits enriches the `shiden34` contract with a set of methods. To examine which methods are now available, check:
+* Openbrush [PSP34 trait](https://github.com/Supercolony-net/openbrush-contracts/blob/main/contracts/src/traits/psp34/psp34.rs) brings all familiar functions from ERC721 plus a few extra:
+ * `collection_id()`
+ * `balance_of()`
+ * `owner_of()`
+ * `allowance()`
+ * `approve()`
+ * `transfer()`
+ * `total_supply()`
+* Openbrush [Metadata](https://github.com/Supercolony-net/openbrush-contracts/blob/main/contracts/src/traits/psp34/extensions/metadata.rs)
+ * `get_attribute()`
+* Openbrush [Mintable](https://github.com/Supercolony-net/openbrush-contracts/blob/main/contracts/src/traits/psp34/extensions/mintable.rs)
+ * `mint()`
+* Openbrush [Enumerable](https://github.com/Supercolony-net/openbrush-contracts/blob/main/contracts/src/traits/psp34/extensions/enumerable.rs)
+ * `owners_token_by_index()`
+ * `token_by_index()`
+* Openbrush [Ownable](https://github.com/Supercolony-net/openbrush-contracts/blob/main/contracts/src/access/ownable/mod.rs)
+  * `renounce_ownership()`
+  * `transfer_ownership()`
+ * `owner()`
+
+The major differences compared with ERC721 are:
+1. The `Metadata` trait brings the possibility of defining numerous attributes.
+2. The `PSP34` trait brings `collection_id()`, which can be used or ignored in contracts.
+
+We could have used the `Burnable` trait as well, but for simplicity it is skipped in this tutorial, since burning can be performed by sending a token to address 0x00.
+
+After this step your code should look like [this](https://github.com/swanky-dapps/nft/tree/tutorial/wizard-step1).
+
+## Build, Deploy and Interact with the Contract
+Build your contract:
+```bash
+cd contracts/shiden34
+cargo contract build --release
+```
+Use the ***shiden34.contract*** target to deploy the contract.
+The file is located in this folder:
+```bash
+ls target/ink/shiden34/
+```
+
+To deploy your contract using the Polkadot.js apps portal, follow the previous guide, or use the [contracts-ui](https://contracts-ui.substrate.io/?rpc=wss://rpc.shibuya.astar.network).
+
+You can start interacting with your contract. You will notice that one token is already minted. This is due to the `mint()` call in the contract's constructor `new()`.
+* Try minting another token by calling `mint()`.
+* Read `owner_of()` for your newly minted token.
+* Check that `total_supply()` has increased.
+
+
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/_category_.json
new file mode 100644
index 0000000..21c1afa
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "NFT Contract with PSP34",
+ "position": 2
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/nft.md b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/nft.md
new file mode 100644
index 0000000..4f77e51
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/from-zero-to-ink-hero/nft/nft.md
@@ -0,0 +1,42 @@
+---
+sidebar_position: 1
+---
+
+# NFT Contract with PSP34
+
+Using the examples provided, you will build and deploy an NFT smart contract using ink!, with functions commonly seen in NFT projects.
+The NFT standard used will be [PSP34](https://github.com/w3f/PSPs/blob/master/PSPs/psp-34.md), which is very similar to [ERC721](https://docs.openzeppelin.com/contracts/4.x/erc721) and is written in ink!.
+## Prerequisites
+This tutorial is suitable for developers with **intermediate** knowledge of ink! and basic understanding of Rust. Previous experience compiling and deploying an ink! smart contract will be beneficial, such as from following the previous Flipper contract tutorial:
+
+| Tutorial | Difficulty |
+|----------------------------------------------------------------------------|--------------------------------|
+| [Your First Flipper Contract](../flipper-contract/flipper-contract.md) | Basic ink! - Basic Rust |
+| [Implement Uniswap V2 core DEX](../dex/dex.md) | Advanced ink! - Basic Rust |
+
+## How to Start
+To follow this tutorial you will need:
+- To [set up your ink! environment](/docs/build/build-on-layer-1/environment/ink_environment.md).
+- Basic Rust knowledge. [Learn Rust](https://www.rust-lang.org/learn)
+- Prior knowledge about ERC721 is helpful but not mandatory.
+
+## What will be used?
+- [ink! v4.2.1](https://github.com/paritytech/ink/tree/v4.2.1)
+- [Openbrush 4.0.0-beta](https://github.com/Brushfam/openbrush-contracts/releases/tag/4.0.0-beta)
+- cargo-contract 3.0.1
+
+## What will you learn?
+- A full implementation of an NFT project in ink!.
+- How to use the Openbrush wizard to create a PSP34 smart contract.
+- The file structure for a smart contract with an additional trait.
+- Trait and generic implementation in separate files.
+- Unit tests for the smart contract.
+- Event handling.
+
+## Summary
+- [I. OpenBrush wizard](./Wizard/wizard.md)
+- [II. Override mint() method](./Override/override.md)
+- [III. Custom Trait for mint()](./CustomTrait/customtrait.md)
+- [IV. PayableMint Trait definition](./PayableMintTrait/payableminttrait.md)
+- [V. PayableMint Trait implementation](./PayableMintImpl/payablemintimpl.md)
+- [VI. Events](./Events/events.md)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/09a.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/09a.png
new file mode 100644
index 0000000..77858b6
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/09a.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/10.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/10.png
new file mode 100644
index 0000000..e89540e
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/10.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/11.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/11.png
new file mode 100644
index 0000000..6218f2c
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/11.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/12.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/12.png
new file mode 100644
index 0000000..c061e1b
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/12.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/SwankySuiteAstar.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/SwankySuiteAstar.png
new file mode 100644
index 0000000..b58bfc9
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/SwankySuiteAstar.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/ink-ce.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/ink-ce.png
new file mode 100644
index 0000000..067996f
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/ink-ce.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture01.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture01.png
new file mode 100644
index 0000000..1b50aba
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture01.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture02.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture02.png
new file mode 100644
index 0000000..1e26f5f
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture02.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture03.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture03.png
new file mode 100644
index 0000000..a3a3e5a
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture03.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture04.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture04.png
new file mode 100644
index 0000000..c49eac6
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture04.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture05.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture05.png
new file mode 100644
index 0000000..7bc43bb
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture05.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture06.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture06.png
new file mode 100644
index 0000000..e081f65
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/inkredible_architecture06.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/acc-create.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/acc-create.png
new file mode 100644
index 0000000..153ab10
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/acc-create.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/check.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/check.png
new file mode 100644
index 0000000..ca40af8
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/check.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/compile.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/compile.png
new file mode 100644
index 0000000..41e776b
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/compile.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-commands.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-commands.png
new file mode 100644
index 0000000..39d959f
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-commands.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-explain.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-explain.png
new file mode 100644
index 0000000..8ca10af
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-explain.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-new.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-new.png
new file mode 100644
index 0000000..7b8c5e1
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-new.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-query.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-query.png
new file mode 100644
index 0000000..a894f21
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-query.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-tx.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-tx.png
new file mode 100644
index 0000000..9d9cec7
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/contract-tx.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/deploy.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/deploy.png
new file mode 100644
index 0000000..1a695f5
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/deploy.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/folder-structure.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/folder-structure.png
new file mode 100644
index 0000000..39d40ec
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/folder-structure.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/help.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/help.png
new file mode 100644
index 0000000..d60277a
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/help.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/init-convert-confirm.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/init-convert-confirm.png
new file mode 100644
index 0000000..df103a3
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/init-convert-confirm.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/init-convert.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/init-convert.png
new file mode 100644
index 0000000..ffb0c19
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/init-convert.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/init.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/init.png
new file mode 100644
index 0000000..6bd1a4c
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/init.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/node-start.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/node-start.png
new file mode 100644
index 0000000..4958bac
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/node-start.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/test-report.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/test-report.png
new file mode 100644
index 0000000..bd91db7
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/test-report.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/test.png b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/test.png
new file mode 100644
index 0000000..d041da7
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/img/swanky/test.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/index.md b/docs/build/build-on-layer-1/smart-contracts/wasm/index.md
new file mode 100644
index 0000000..e05e93d
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/index.md
@@ -0,0 +1,14 @@
+# Wasm Smart Contracts
+
+![Wasm Smart Contracts](https://docs.astar.network/build/img/wasm.png)
+
+The **Wasm** section covers the Wasm stack on Astar/Shiden, some more advanced topics, and contains a few tutorials to help you build and deploy Wasm smart contracts.
+
+If you would like to start building right away, we encourage you to check out [**Swanky Suite**](./swanky-suite) - The all-in-one tool for Wasm smart contract developers within the Polkadot ecosystem.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/ink-dev.md b/docs/build/build-on-layer-1/smart-contracts/wasm/ink-dev.md
new file mode 100644
index 0000000..9214e2f
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/ink-dev.md
@@ -0,0 +1,29 @@
+---
+sidebar_position: 4
+---
+
+# Ink! Development
+
+Ink! is a Rust eDSL developed by Parity. It specifically targets smart contract development for Substrate’s `pallet-contracts`.
+
+Ink! offers Rust [procedural macros](https://doc.rust-lang.org/reference/procedural-macros.html#procedural-macro-hygiene) and a list of crates to facilitate development and allows developers to avoid writing boilerplate code.
+
+It is currently the most widely supported eDSL, and will continue to be well supported in the future by Parity and the builder community.
+
+Ink! offers a broad range of features such as:
+
+- idiomatic Rust code
+- Ink! Macros & Attributes - [#[ink::contract]](https://use.ink/macros-attributes/contract)
+- [`Trait` support](https://use.ink/3.x/basics/trait-definitions)
+- Upgradeable contracts - [Delegate Call](https://use.ink/3.x/basics/upgradeable-contracts)
+- [Chain Extensions](https://use.ink/macros-attributes/chain-extension/) (interact with Substrate pallets inside a contract)
+- Off-chain Testing - `#[ink(test)]`
+
+Installation procedures are available in [ink! Environment](/docs/build/build-on-layer-1/environment/ink_environment.md) section.
+
+## Documentation
+
+- [Ink! Github repo](https://github.com/paritytech/ink)
+- [Ink! Intro repo](https://paritytech.github.io/ink/)
+- [Ink! Official Documentation](https://use.ink/)
+- [Ink! Rust doc](https://docs.rs/ink/4.0.0-rc/ink/index.html)
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/ink-rediblenft-architecture.md b/docs/build/build-on-layer-1/smart-contracts/wasm/ink-rediblenft-architecture.md
new file mode 100644
index 0000000..952a37c
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/ink-rediblenft-architecture.md
@@ -0,0 +1,89 @@
+# ink!redible NFT architecture
+
+This guide will give a brief overview of ink!redible NFTs on Astar.
+
+## What are ink!redible NFTs?
+
+Astar brings you ink!redible NFTs - a new way for users and builders to engage with, and create, NFTs. Originally derived from the RMRK standard, ink!redible NFTs bring the latest in NFT technology to Astar Network, in the dedicated, more efficient, domain-specific language that is ink!.
+This document will provide a brief overview of the system architecture; details can be found in the [source code](https://github.com/AstarNetwork/ink-redible-nft).
+
+This document will cover the following:
+- NFT collection creation and deployment to a node.
+- NFT token indexing, used to track token ownership.
+- UI
+  - Token minting
+  - Adding a token to a parent token's inventory
+  - Token equipping and unequipping
+
+## Architecture
+![inkredible_architecture01](img/inkredible_architecture01.png)
+
+### Collection creation
+
+The ink!redible NFT [repository](https://github.com/AstarNetwork/ink-redible-nft) also contains a set of scripts that can be used to create an NFT collection and deploy it to a node. More details on how to prepare a collection for deployment and how to use the scripts can be found in the [readme](https://github.com/AstarNetwork/ink-redible-nft/blob/main/scripts/README.md) file.
+
+The scripts deploy the following contracts:
+
+- [RMRK contract v0.6.0](https://github.com/rmrk-team/rmrk-ink/tree/main/examples/equippable-lazy). The main contract that holds tokens and enables token nesting and equipping.
+- [RMRK catalog contract v0.6.0](https://github.com/rmrk-team/rmrk-ink/tree/main/examples/catalog). The catalog contract contains all parts (graphics) that can be used as token assets.
+- [RMRK minting proxy contract](https://github.com/swanky-dapps/rmrk-lazy-mint), used to enable RMRK lazy minting. The contract's `mint` function mints a token, adds a random asset to it, and transfers ownership of the token to the caller.
+
+### Collection indexing
+The indexer was implemented using Subsquid and is used to track token ownership. Compared to other indexing services, the development environment is faster, simpler, and works well without major bugs. The indexer source code is available [here](https://github.com/sirius651/sqd-nft-viewer).
+
+To modify the indexer or create a new one, you need the prerequisites `Node.js, Subsquid CLI, Docker` and should follow the [Subsquid documentation](https://docs.subsquid.io/quickstart/quickstart-substrate/).
+
+In short, do the following:
+- Write the TypeORM models, like [this](https://github.com/sirius651/sqd-nft-viewer/tree/main/src/model/generated)
+- Write the indexing script, like [this](https://github.com/sirius651/sqd-nft-viewer/blob/main/src/processor.ts)
+- Migrate the database: `$ yarn db:migrate`
+- Run the build script: `$ yarn build`
+
+To run the squid on a local machine:
+
+- Start the Docker container.
+- Using the Subsquid CLI, it can be run from the command line:
+  `$ sqd down`
+  `$ sqd codegen`
+  `$ sqd build`
+  `$ sqd up`
+  `$ sqd process`
+
+Finally, you can browse the GraphQL queries by running `$ sqd serve`.
+
+![inkredible_architecture02](img/inkredible_architecture02.png)
+
+To deploy the indexing script to Aquarium, which is hosted by the Subsquid team, just follow [this document](https://docs.subsquid.io/deploy-squid/quickstart/):
+
+`$ sqd deploy .`
+
+Once the deployment has succeeded, browse to the [Aquarium console](https://app.subsquid.io/).
+
+Once the status is available, you can run queries against the opened endpoint (like [this one](https://squid.subsquid.io/sqd-nft-viewer/v/v1/graphql)).
+
+**Troubleshooting**
+
+QueryFailedError: relation "owner" does not exist
+
+- Check https://docs.subsquid.io/basics/db-migrations/ and run `$ sqd migration:generate`
+
+Sometimes an update deployed through the Aquarium console is not reflected properly.
+
+- Remove the existing squid and create a new one; this works reliably.
+
+### UI
+**Assets**
+Displays all tokens owned by the connected user. The information about token ownership comes from the indexer described above. The list of all mintable collections is fetched from Polkaverse.
+![inkredible_architecture03](img/inkredible_architecture03.png)
+
+**Parent page**
+Shows the inventory (children) of a selected token. The page also enables adding new tokens to the inventory.
+![inkredible_architecture04](img/inkredible_architecture04.png)
+
+**Child page**
+Shows details of the selected inventory item, with options to equip, unequip, or bond (accept child) it to a parent. All of the operations mentioned are executed on a parent RMRK NFT contract.
+![inkredible_architecture05](img/inkredible_architecture05.png)
+
+**Minting page**
+The minting page enables users to mint their tokens. It is generic and works for any RMRK proxy contract address. Information about the collection is fetched from a Polkaverse post.
+![inkredible_architecture06](img/inkredible_architecture06.png)
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/interact/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/interact/_category_.json
new file mode 100644
index 0000000..aac6353
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/interact/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Interact",
+ "position": 11
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/interact/astarjs.md b/docs/build/build-on-layer-1/smart-contracts/wasm/interact/astarjs.md
new file mode 100644
index 0000000..f2a312c
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/interact/astarjs.md
@@ -0,0 +1,230 @@
+---
+sidebar_position: 1
+---
+
+# Astar.js for Wasm Smart Contracts
+
+[Astar.js](https://github.com/AstarNetwork/astar.js/wiki) is a library for interacting with the Astar/Shiden/Shibuya chains using JavaScript/TypeScript. It is a collection of modules that allow you to interact with the Astar blockchain through a local or remote node. It can be used in the browser or in Node.js.
+
+## Installation
+
+The `@polkadot/api` and `@polkadot/api-contract` packages will be used alongside the `@astar-network/astar-api` and `@astar-network/astar-sdk-core` packages. With that in mind, we can install them from npm:
+
+`yarn add @polkadot/api@9.13.6 @polkadot/api-contract@9.13.6 @astar-network/astar-api@0.1.17 @astar-network/astar-sdk-core@0.1.17`
+
+## Examples
+
+You can find working examples here:
+
+- Flipper contract [flipper](https://github.com/AstarNetwork/wasm-flipper). This is a simple contract that allows users to flip a boolean value.
+
+- Lottery contract [lottery](https://github.com/astarNetwork/wasm-lottery). Another dapp example that uses Astar.js to interact with a Wasm smart contract. This is a simple lottery dapp that allows users to enter and draw the lottery.
+
+## Usage
+
+### Contract build artifacts
+
+The contract metadata and the wasm code are generated by building the contract with [Swanky](/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/cli.md) CLI.
+
+### Connecting to API
+
+The API provides application developers the ability to send transactions to, and receive data from, a blockchain node.
+Here is an example to create an API instance:
+
+```js
+import { ApiPromise } from "@polkadot/api";
+import { WsProvider } from "@polkadot/rpc-provider";
+import { options } from "@astar-network/astar-api";
+import { sendTransaction } from "@astar-network/astar-sdk-core";
+
+async function main() {
+ const provider = new WsProvider("ws://localhost:9944");
+ // OR
+ // const provider = new WsProvider('wss://shiden.api.onfinality.io/public-ws');
+ const api = new ApiPromise(options({ provider }));
+ await api.isReady;
+
+ // Use the api
+ // For example:
+ console.log((await api.rpc.system.properties()).toHuman());
+
+ process.exit(0);
+}
+
+main();
+```
+
+### Initialise ContractPromise Class
+
+The `ContractPromise` interface allows us to interact with a deployed contract. We create an instance via `new`, i.e. when we are attaching to an existing contract on-chain:
+
+```js
+import { Abi, ContractPromise } from "@polkadot/api-contract";
+
+// After compiling the contract, an ABI JSON file is created in the artifacts. Import the ABI:
+import ABI from "./artifacts/lottery.json";
+
+const abi = new Abi(ABI, api.registry.getChainProperties());
+
+// Initialise the contract class
+const contract = new ContractPromise(api, abi, address); // address is the deployed contract address
+```
+
+### Query Contract Messages
+
+```js
+// Get the gas WeightV2 using api.consts.system.blockWeights['maxBlock']
+const gasLimit = api.registry.createType(
+ "WeightV2",
+ api.consts.system.blockWeights["maxBlock"]
+);
+
+// Query the contract message
+const { gasRequired, result, output } = await contract.query.pot(
+ account.address,
+ {
+ gasLimit,
+ }
+);
+```
+
+Under the hood, `.query.` uses the `api.rpc.contracts.call` API of the smart contracts pallet to retrieve the value. For this interface, the call is always of the form `messageName(callerAddress, options, ...additionalParams)`.
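As a sketch, that call shape can be wrapped in a generic helper. The message names used with it (e.g. the hypothetical `balanceOf`) depend on your contract's ABI, and `contract` is the `ContractPromise` initialised earlier:

```js
// Generic dry-run helper for any #[ink(message)]:
// the caller address comes first, then the options object,
// then the message arguments in the order the ABI declares them.
async function queryMessage(contract, message, callerAddress, gasLimit, ...args) {
  const { result, output } = await contract.query[message](
    callerAddress,
    { gasLimit, storageDepositLimit: null },
    ...args
  );
  // Return the decoded output on success, null on a dispatch error.
  return result.isOk ? output : null;
}
```

For example, `queryMessage(contract, 'pot', account.address, gasLimit)` mirrors the `pot` query above.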
+
+### Send Contract Transaction the easy way
+
+Sending a contract transaction is normally a two-step process: first dry-run the transaction and check for errors. Astar.js has a helper function that does this for you and returns a transaction object, which you can then use to sign and send the transaction.
+
+```js
+import { sendTransaction } from "@astar-network/astar-sdk-core";
+
+try {
+ const result = await sendTransaction(
+ api, // The api instance of type ApiPromise
+ contract, // The contract instance of type ContractPromise
+ "enter", // The message to send or transaction to call
+ account.address, // The sender address
+ new BN("1000000000000000000") // 1 TOKEN, or whatever value you want to send to the contract
+ // The rest of the arguments are the arguments to the message
+ );
+
+ // Sign and send the transaction
+ // The result is a promise that resolves to unsubscribe function
+ const unsub = await result.signAndSend(account.address, (res) => {
+ if (res.status.isInBlock) {
+ console.log("in a block");
+ }
+ if (res.status.isFinalized) {
+ console.log("finalized");
+ console.log("Successfully entered in lottery!");
+ unsub();
+ }
+ });
+} catch (error) {
+ // If there is an error, it will be thrown here
+ console.log(error);
+}
+```
+
+### Send Contract Transaction the hard way
+
+If you want more control over the transaction, you can perform the two-step process yourself. The first step is to dry-run the transaction and check for errors. The second step is to sign and send the transaction.
+
+```js
+
+// Get the initial gas WeightV2 using api.consts.system.blockWeights['maxBlock']
+const gasLimit = api.registry.createType(
+ 'WeightV2',
+ api.consts.system.blockWeights['maxBlock']
+)
+
+// Query the contract message
+// This will return the gas required and storageDeposit to execute the message
+// and the result of the message
+const { gasRequired, storageDeposit, result } = await contract.query.enter(
+ account.address,
+ {
+ gasLimit: gasLimit,
+ storageDepositLimit: null,
+ value: new BN('1000000000000000000')
+ }
+)
+
+// Check for errors
+if (result.isErr) {
+ let error = ''
+ if (result.asErr.isModule) {
+ const dispatchError = api.registry.findMetaError(result.asErr.asModule)
+ error = dispatchError.docs.length ? dispatchError.docs.concat().toString() : dispatchError.name
+ } else {
+ error = result.asErr.toString()
+ }
+
+ console.error(error)
+ return
+}
+
+// Even if the result is Ok, it could be a revert in the contract execution
+if (result.isOk) {
+ const flags = result.asOk.flags.toHuman()
+ // Check if the result is a revert via flags
+ if (flags.includes('Revert')) {
+ const type = contract.abi.messages[5].returnType // here 5 is the index of the message in the ABI
+ const typeName = type?.lookupName || type?.type || ''
+ const error = contract.abi.registry.createTypeUnsafe(typeName, [result.asOk.data]).toHuman()
+
+ console.error(error ? error.Err : 'Revert')
+ return
+ }
+}
+
+// The gas required on-chain may be more than the gas returned by the query.
+// To be safe, we double the gasLimit.
+// Note: doubling the gasLimit will not cause the Tx to spend more gas.
+const estimatedGas = api.registry.createType(
+ 'WeightV2',
+ {
+ refTime: gasRequired.refTime.toBn().mul(BN_TWO),
+ proofSize: gasRequired.proofSize.toBn().mul(BN_TWO),
+ }
+)
+
+const unsub = await contract.tx
+ .enter({
+ gasLimit: estimatedGas,
+ storageDepositLimit: null,
+ value: new BN('1000000000000000000') // 1 TOKEN or it could be value you want to send to the contract in title
+ })
+ .signAndSend(account.address, (res) => {
+ // Send the transaction, like elsewhere this is a normal extrinsic
+ // with the same rules as applied in the API (As with the read example,
+ // additional params, if required can follow)
+ if (res.status.isInBlock) {
+ console.log('in a block')
+ }
+ if (res.status.isFinalized) {
+ console.log('Successfully sent the txn')
+ unsub()
+ }
+ })
+```
+
+## Weights V2
+
+The above is the current interface for estimating the gas used for a transaction. The Substrate runtime measures execution cost as a two-dimensional weight, available on the `api.tx.contracts.call` interface: the `gasLimit` is now specified as a `WeightV2`, a pair of `refTime` and `proofSize`. `refTime` is the computational time needed to execute the call on reference hardware, and `proofSize` is the size in bytes of the state proof required to validate the call. An upper bound for both values can be retrieved from the `api.consts.system.blockWeights` interface, as in the examples above.
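As a plain-JavaScript illustration (using `BigInt` rather than `BN` to stay self-contained), a `WeightV2` is the pair of those two components, and safety margins such as the doubling used in the example above are applied to each component independently:

```js
// A WeightV2 is a { refTime, proofSize } pair. A common pattern is to
// scale both components of the dry-run estimate before sending the tx.
function withSafetyMargin(weight, factor = 2n) {
  return {
    refTime: weight.refTime * factor,
    proofSize: weight.proofSize * factor,
  };
}

// withSafetyMargin({ refTime: 1000000n, proofSize: 3000n })
// -> { refTime: 2000000n, proofSize: 6000n }
```

In real code, the resulting pair is passed to `api.registry.createType('WeightV2', ...)` as shown earlier.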
+
+## Events
+
+On the current version of the API, any events raised by the contract will be transparently decoded with the relevant ABI and made available on the `result` (from `.signAndSend(alicePair, (result) => {...})`) as `contractEvents`.
+
+When no events are emitted this value will be `undefined`; when events are emitted, the array will contain all the decoded values.
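A small accessor makes that safe to use in both cases. This is a sketch; it assumes each decoded event carries its `AbiEvent` under `.event`, as in current Polkadot.js versions:

```js
// Collect the names of all decoded contract events from a signAndSend result.
// Returns an empty array when the call emitted no events (contractEvents is undefined).
function contractEventNames(result) {
  return (result.contractEvents || []).map((decoded) => decoded.event.identifier);
}
```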
+
+## Best Practice
+
+One thing to remember is that adding `#[ink(payable)]` to an `#[ink(message)]` prevents `ink_env::debug_println!` messages from being logged to the console when executing the smart contract call. Debug messages are only emitted during a dry run (query), not during the actual transaction (tx). When calling a contract, query it first, then perform your transaction if there are no error messages.
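That query-then-transact flow can be sketched as a single helper (the message names are placeholders for whatever your contract exposes):

```js
// Dry-run the message first; only sign and send when the dry run succeeds.
// Contract debug messages are emitted during the query step, not the tx.
async function queryThenSend(contract, message, caller, signer, gasLimit, ...args) {
  const opts = { gasLimit, storageDepositLimit: null };
  const { gasRequired, result } = await contract.query[message](caller, opts, ...args);
  if (result.isErr) {
    throw new Error(`dry run failed: ${result.asErr.toString()}`);
  }
  // Reuse the dry-run gas estimate for the real transaction.
  return contract.tx[message]({ ...opts, gasLimit: gasRequired }, ...args).signAndSend(signer);
}
```

For production code you would add the revert-flag check and gas safety margin shown in the sections above.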
+
+## References
+
+- [Ink! official documentation](https://use.ink/)
+- [Astar.js](https://github.com/AstarNetwork/astar.js/wiki)
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/interact/index.md b/docs/build/build-on-layer-1/smart-contracts/wasm/interact/index.md
new file mode 100644
index 0000000..5587629
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/interact/index.md
@@ -0,0 +1,10 @@
+# Interact with Wasm Smart Contract
+
+In this chapter you can find out how to interact with Wasm smart contracts.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/nodes-clients.md b/docs/build/build-on-layer-1/smart-contracts/wasm/nodes-clients.md
new file mode 100644
index 0000000..d934043
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/nodes-clients.md
@@ -0,0 +1,48 @@
+---
+sidebar_position: 6
+---
+
+# Nodes Supporting Contracts
+
+## Local Development Nodes
+
+### Swanky Node
+
+Swanky Node is a local development node tracking the Shiden network.
+
+Swanky Node is the best choice if you would like to develop your contract & test it in your local environment, prior to deploying it on Astar/Shiden mainnet.
+
+Features:
+
+- Consensus: `instant-seal` and `manual-seal`
+- dApp staking enabled
+- Chain Extensions
+
+You can find the Github repo [here](https://github.com/AstarNetwork/swanky-node).
+
+### Substrate Contract Node
+
+Substrate contract node targets Substrate master. It is the best choice if you would like to try the latest (or unstable) features of ink! and/or pallet-contracts.
+
+Features:
+
+- Targets the latest Substrate master
+- Consensus: `instant-seal`
+
+The Github repository can be found [here](https://github.com/paritytech/substrate-contracts-node).
+
+## Testnet Node: Shibuya
+
+Shibuya has nearly the same chain specifications as Shiden & Astar mainnets, and provides an ideal environment for developers to test and debug their smart contracts, prior to launching their dApp on mainnet.
+
+Shibuya's `pallet-contracts` is built with unstable features enabled, so you can use ink! features that are flagged as unstable in `pallet-contracts`.
+
+To get the latest information and test tokens from the Faucet, consult Shibuya's official docs.
+
+## Mainnet Node: Shiden
+
+Wasm contracts are live on Shiden. You can interact with them in the same way as you would on Shibuya.
+
+## Mainnet Node: Astar
+
+At the moment, Wasm smart contracts are not available on Astar. They should go live during 2023 Q1.
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/psps.md b/docs/build/build-on-layer-1/smart-contracts/wasm/psps.md
new file mode 100644
index 0000000..334312e
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/psps.md
@@ -0,0 +1,33 @@
+---
+sidebar_position: 13
+---
+
+# Polkadot Standards Proposals
+
+The Polkadot ecosystem has its own set of standards to fulfill ecosystem needs. Visit [Polkadot Standards Proposals (PSPs) Github][PSPs] to learn more.
+
+These standards go through several rounds of approvals before being accepted, and engagement of the entire community is required in order to build valuable, resilient, future-proof standards. All teams that benefit from shared standards should be aware of them, and agree on the scope of what they cover.
+
+## PSP22 - Fungible Token Standard
+
+The [PSP22 Fungible Token standard][PSP22] is inspired by the ERC20 standard on Ethereum. PSP22 targets every parachain that integrates `pallet-contracts` and supports Wasm smart contracts. It is defined at the ABI level, so it can be used in any language that compiles to Wasm (it is not restricted to ink!).
+
+PSP22 will have a double impact:
+
+- On the parachain level, it will ensure the PSP22 standard is used to facilitate true interoperability.
+- In the multi-chain future, it will secure interoperability of all token standards (PSP22 and the next iteration) between different parachains or other Substrate based chains.
+
+It also helps to have a predefined interface for specific token standards to ensure exhaustive logic is implemented. It will also encourage sharing of the highest performance & most secure implementation. For a reference implementation, refer to [PSP22 - OpenBrush](https://github.com/Supercolony-net/openbrush-contracts/blob/main/contracts/src/traits/psp22/psp22.rs).
+
+This standard was the first to be accepted by the community. Refer to the official [PSPs repo][PSPs] to learn about all the latest standards.
+
+## PSP34 - NFT Standard
+
+Without a standard interface for Non-Fungible Tokens, every contract would have different signatures and types, and hence no interoperability. This proposal aims to resolve that by defining one interface that shares the same ABI of permissionless methods between all implementations.
+
+The goal is to have a standard contract interface that allows tokens deployed on Substrate's `contracts` pallet to be re-used by other applications: from wallets to decentralized exchanges.
+
+[Link to PSP34](https://github.com/w3f/PSPs/blob/master/PSPs/psp-34.md)
+
+[PSPs]: https://github.com/w3f/PSPs
+[PSP22]: https://github.com/w3f/PSPs/blob/master/PSPs/psp-22.md
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/quickstart-wasm.md b/docs/build/build-on-layer-1/smart-contracts/wasm/quickstart-wasm.md
new file mode 100644
index 0000000..2c90d62
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/quickstart-wasm.md
@@ -0,0 +1,114 @@
+---
+title: Quickstart Guide
+---
+
+import Figure from '/src/components/figure'
+
+# Quickstart Guide for Astar Substrate Native Network
+Everything required to start deploying dApps on Astar Substrate Native Network (hereafter referred to as **Astar Substrate**), and nothing more.
+
+## Connecting to Astar Substrate Network
+
+:::info
+Although the free endpoints listed below are intended for end users, they can still be used in limited ways to interact with dApps or deploy/call smart contracts. It's worth noting however, that they rate-limit API calls, so are not suitable for testing high demand applications, such as dApp UIs that scrape users' blockchain history. For that, developers should run their own [archive node](/docs/build/build-on-layer-1/nodes/archive-node/index.md) **or** obtain an API key from one of our [infrastructure providers](/docs/build/build-on-layer-1/integrations/node-providers/index.md).
+:::
+
+
+
+
+| | Public endpoint Astar |
+| --- | --- |
+| Network | Astar |
+| Parent chain | Polkadot |
+| ParachainID | 2006 |
+| HTTPS | Astar Team: https://evm.astar.network |
+| | Alchemy: Get started [here](https://www.alchemy.com/astar) |
+| | BlastAPI: https://astar.public.blastapi.io |
+| | Dwellir: https://astar-rpc.dwellir.com |
+| | OnFinality: https://astar.api.onfinality.io/public |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| | Automata 1RPC: https://1rpc.io/astr, get started [here](https://www.1rpc.io) |
+| Websocket | Astar Team: wss://rpc.astar.network |
+| | Alchemy: Get started [here](https://www.alchemy.com/astar) |
+| | BlastAPI: wss://astar.public.blastapi.io |
+| | Dwellir: wss://astar-rpc.dwellir.com |
+| | OnFinality: wss://astar.api.onfinality.io/public-ws |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| | Automata 1RPC: wss://1rpc.io/astr, get started [here](https://www.1rpc.io) |
+| chainID | 592 |
+| Symbol | ASTR |
+
+
+
+
+
+| | Public endpoint Shiden |
+| --- | --- |
+| Network | Shiden |
+| Parent chain | Kusama |
+| ParachainID | 2007 |
+| HTTPS | Astar Team: https://evm.shiden.astar.network |
+| | BlastAPI: https://shiden.public.blastapi.io |
+| | Dwellir: https://shiden-rpc.dwellir.com |
+| | OnFinality: https://shiden.api.onfinality.io/public |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| Websocket | Astar Team: wss://rpc.shiden.astar.network |
+| | BlastAPI: wss://shiden.public.blastapi.io |
+| | Dwellir: wss://shiden-rpc.dwellir.com |
+| | OnFinality: wss://shiden.api.onfinality.io/public-ws |
+| | Pinknode: Get started [here](https://www.pinknode.io/) |
+| chainID | 336 |
+| Symbol | SDN |
+
+
+
+
+
+| | Public endpoint Shibuya |
+| --- | --- |
+| Network | Shibuya (parachain testnet) |
+| Parent chain | Tokyo relay chain (hosted by Astar Team) |
+| ParachainID | 1000 |
+| HTTPS | Astar Team: https://evm.shibuya.astar.network (only EVM/Ethereum RPC available) |
+| | BlastAPI: https://shibuya.public.blastapi.io |
+| | Dwellir: https://shibuya-rpc.dwellir.com |
+| Websocket | Astar Team: wss://rpc.shibuya.astar.network |
+| | BlastAPI: wss://shibuya.public.blastapi.io |
+| | Dwellir: wss://shibuya-rpc.dwellir.com |
+| chainID | 81 |
+| Symbol | SBY |
+
+
+
+
+
+## Obtaining tokens from the faucet
+
+[INSERT FAUCET INSTRUCTIONS]
+
+## Block Explorer
+
+[INSERT BLOCK EXPLORER]
+
+## Deploying Smart Contracts
+
+The development experience on Astar EVM is seamless and nearly identical to the Ethereum Virtual Machine. Developers can use existing code and tools on Astar EVM, and users benefit from high transaction throughput and low fees. Read more about deploying smart contracts on Astar EVM [here](/docs/build/build-on-layer-1/smart-contracts/EVM/index.md).
+
+## Metamask setup for Shibuya testnet
+To add Shibuya testnet to MetaMask, use the link at the bottom of the [block explorer](https://zkatana.blockscout.com/), or fill in the following details manually:
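For reference, the Shibuya values from the table earlier on this page map onto MetaMask's `wallet_addEthereumChain` request as follows. The chain ID, RPC URL, and SBY symbol come from that table; the display names and 18 decimals are assumptions:

```js
// Shibuya network parameters for MetaMask; chainId must be hex-encoded.
const shibuyaChainParams = {
  chainId: '0x' + (81).toString(16), // 81 -> '0x51'
  chainName: 'Shibuya',
  nativeCurrency: { name: 'Shibuya Token', symbol: 'SBY', decimals: 18 },
  rpcUrls: ['https://evm.shibuya.astar.network'],
};

// In a browser with MetaMask injected:
// await window.ethereum.request({
//   method: 'wallet_addEthereumChain',
//   params: [shibuyaChainParams],
// });
```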
+
+
+
+## Astar EVM Support for Developers
+
+Developers requiring support can join the [Astar Discord server](https://discord.gg/astarnetwork).
+
+
+
+1. Join the **Astar Discord** server [here](https://discord.gg/astarnetwork).
+2. Accept the invite.
+3. Take the **Developer** role under **#roles**.
+4. Navigate to the **Builder/#-astar-polkadot** channel.
+
+
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/smart-contract-wasm.md b/docs/build/build-on-layer-1/smart-contracts/wasm/smart-contract-wasm.md
new file mode 100644
index 0000000..c1f9d16
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/smart-contract-wasm.md
@@ -0,0 +1,72 @@
+---
+sidebar_position: 1
+---
+
+# Smart Contract Stack
+
+## Smart Contract Runtime Environment
+
+Astar & Shiden runtimes are based on Substrate, and both networks incorporate `pallet-contracts`, a sandboxed environment used to deploy and execute WebAssembly smart contracts. Any language that compiles to Wasm may be deployed and run on this Wasm Virtual Machine, however, the code should be compatible with the `pallet-contracts` [API](https://docs.rs/pallet-contracts/latest/pallet_contracts/api_doc/trait.Current.html).
+
+To avoid unnecessary complexity and boilerplate code, the most appropriate method of building is to use an eDSL that specifically targets `pallet-contracts`, such as [ink!] (based on Rust) or [ask!] (based on AssemblyScript), or possibly others as the ecosystem grows.
+
+After compilation, a Wasm blob can then be deployed and stored on-chain.
+
+### Execution Engine
+
+`pallet-contracts` uses [wasmi](https://github.com/paritytech/wasmi) as a Wasm interpreter to execute smart contract blobs. Although faster JIT interpreters such as [wasmtime](https://github.com/bytecodealliance/wasmtime) are available in the native runtime, smart contracts are an untrusted environment requiring a higher degree of correctness of interpretation, which makes wasmi the more suitable option.
+
+### Two-step Deployment of Contracts
+
+The contract code (Wasm blob) and the contract address and storage are decoupled from one another, so deploying a new contract on-chain requires two steps:
+
+1. First, upload the Wasm contract code on-chain (every contract's Wasm code has a `code_hash` as an identifier).
+2. Second, instantiate the contract - this creates an address and storage for that contract.
+
+Note that anyone can instantiate a contract based on its `code_hash`.
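Using Polkadot.js (introduced under Client APIs below), the two steps correspond to two extrinsics on the contracts pallet. The sketch below only shows the call shapes; exact argument lists vary between `pallet-contracts` versions, and `submit` stands in for your own sign-and-send logic:

```js
// Step 1: store the Wasm blob on-chain (identified afterwards by its code_hash).
// Step 2: create an address and storage for a contract based on that code_hash.
async function deployTwoStep(api, submit, { wasm, codeHash, value, gasLimit, data, salt }) {
  if (wasm) {
    await submit(api.tx.contracts.uploadCode(wasm, null));
  }
  // Anyone can instantiate from an already-uploaded code_hash.
  return submit(api.tx.contracts.instantiate(value, gasLimit, null, codeHash, data, salt));
}
```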
+
+There are several benefits of decoupling the contract code from the address/storage:
+
+- To save space on-chain. Since a contract can have several constructors and instantiations, a redeployment will create a new instance based on the same underlying code. Think about standardized tokens, like [PSP22][PSP22] & [PSP34][PSP34], that will have one `code_hash` & `blob` living on-chain, and as many instantiations as are needed, rather than having to upload code with each new instantiation (for example, on Ethereum).
+- To instantiate a new contract using code within an existing smart contract (see the delegator example), `code_hash` is all that is needed.
+- Some standard contracts such as ([PSP22][PSP22] and [PSP34][PSP34]) will only be uploaded on-chain once, preventing users from having to pay gas costs for uploading new code.
+- Update contract code for an address: replace the contract code at the specified address with new code (see [set_code_hash][set_code_hash]). Storage and balances will be preserved.
+
+### For More Information About `pallet-contracts`
+
+- [`pallet-contracts` in Rust docs](https://docs.rs/pallet-contracts/14.0.0/pallet_contracts/index.html)
+- [`pallet-contracts` on Github](https://github.com/paritytech/substrate/tree/master/frame/contracts)
+
+## Client APIs
+
+The only library available to communicate with smart contracts is [Polkadot.js API](https://github.com/polkadot-js/api).
+
+:::info
+This API provides application developers the ability to query a node and interact with the Polkadot or Substrate chains using Javascript.
+:::
+
+Parity also developed a web application to interact with contracts called [contracts-ui](https://github.com/paritytech/contracts-ui).
+
+## The Wasm Stack vs. Ethereum
+
+| | Ethereum | Astar |
+| --- | --- | --- |
+| L1 Architecture | [Ethereum clients](https://ethereum.org/en/developers/docs/nodes-and-clients/) | [Substrate](https://substrate.io/) |
+| Smart Contract Runtime Environment | [EVM] | Wasm [pallet-contract] + EVM [frontier] |
+| Gas Model | [fixed price per instruction] | [weight] + [storage fees][storage] + [loading fees] |
+| Smart Contract DSLs | Solidity and Vyper | [ink!] (Rust) and [ask!] (AssemblyScript) |
+| Standards | [EIPs] | [PSPs] |
+
+[weight]: https://docs.substrate.io/reference/how-to-guides/weights/
+[PSP22]: https://github.com/w3f/PSPs/blob/master/PSPs/psp-22.md
+[PSP34]: https://github.com/w3f/PSPs/blob/master/PSPs/psp-34.md
+[set_code_hash]: https://docs.rs/ink_env/4.0.0-rc/ink_env/fn.set_code_hash.html
+[ink!]: https://github.com/paritytech/ink
+[ask!]: https://github.com/ask-lang/ask
+[EVM]: https://ethereum.org/en/developers/docs/evm/
+[pallet-contract]: https://github.com/paritytech/substrate/tree/master/frame/contracts
+[fixed price per instruction]: https://ethereum.github.io/yellowpaper/paper.pdf
+[frontier]: https://github.com/paritytech/frontier
+[storage]: https://github.com/paritytech/substrate/blob/c00ed052e7cd72cfc4bc0e00e38722081b789ff5/frame/contracts/src/lib.rs#L351
+[loading fees]: https://github.com/paritytech/substrate/blob/97ae6be11b0132224a05634c508417f048894670/frame/contracts/src/lib.rs#L331-L350
+[EIPs]: https://eips.ethereum.org/
+[PSPs]: https://github.com/w3f/PSPs
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/_category_.json
new file mode 100644
index 0000000..fd4ba0d
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Swanky Suite",
+ "position": 7
+}
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/cli.md b/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/cli.md
new file mode 100644
index 0000000..a9ffcb4
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/cli.md
@@ -0,0 +1,371 @@
+---
+sidebar_position: 1
+# Display h2 to h5 headings
+toc_min_heading_level: 2
+toc_max_heading_level: 4
+---
+
+import Figure from '/src/components/figure'
+import Tabs from '@theme/Tabs'
+import TabItem from '@theme/TabItem'
+
+# Swanky CLI
+
+Swanky CLI is a Node.js based CLI application that abstracts away and extends the functionality of Polkadot.js, `cargo contract`, and other ink! based smart contract developer tools.
+It aims to ease development of, and interaction with, ink! smart contracts. It provides simple tools to bootstrap a contract environment (project) with contract and integration-test templates, manage a local node and accounts, interact with smart contracts on both local and remote networks, and check compatibility between the contracts pallet and the compiler.
+
+It also comes with a preconfigured Docker image and .codespaces configuration for easy dev environment setup.
+
+Beyond the features mentioned above, more functionality is in active or planned development. The whole project is public, and everyone is welcome to contribute or suggest features:
+
+- [Swanky CLI repo](https://github.com/AstarNetwork/swanky-cli)
+- [Swanky CLI project](https://github.com/orgs/AstarNetwork/projects/3)
+
+:::info
+Templates provided in the current version of swanky-cli, as well as the environment in the swanky-base image and the supported tools, target ink! v4 and use `cargo contract` v2.
+
+Cargo contract v3 introduced breaking changes to how artifacts are stored and you'll have to move them manually if you wish to use it.
+:::
+
+## Installing
+
+The CLI can be installed and used in different ways:
+
+- using a preconfigured environment of a dev-container locally with VS Code
+- using the dev-container in a cloud-based environment such as GitHub Codespaces or Gitpod
+- using the swanky-base container image locally (same is used in the dev-container)
+- downloading a precompiled binary
+- as an npm package (installed globally, or via the `npx` command)
+
+:::caution
+Note that using the precompiled binaries, npm, or compiling it yourself requires you to have the [local environment set up](/docs/build/build-on-layer-1/environment/ink_environment.md) correctly.
+:::
+
+### Dev container
+
+Using the [dev container](/docs/build/build-on-layer-1/environment/dev-container.md) is the easiest way to use `swanky-cli`: it includes the complete environment setup and will support auto-updates in the future.
+
+To run your project in the dev container follow the steps on [swanky-dev-container Github](https://github.com/AstarNetwork/swanky-dev-container).
+
+### Cloud based environments
+
+Similar to using the dev container locally, GitHub will detect the `.devcontainer` config in your project and let you run the project in a cloud-based IDE.
+
+You'll have to sign up for an account to use [Gitpod](https://www.gitpod.io/), but the process is the same.
+
+:::caution
+Currently it is not possible to forward ws:// ports from GitHub Codespaces, so if you'd like to interact with the swanky-node from contracts-ui or a similar service, use Gitpod or one of the other methods.
+:::
+
+### Download the precompiled binaries
+
+1. Download the correct archive for your platform from the [releases section of swanky-cli github page](https://github.com/AstarNetwork/swanky-cli/releases).
+
+2. Extract the archive to the appropriate location, for example the `software` directory.
+
+3. Add the `swanky` executable to your path variable by creating a symbolic link to it from a common `bin` directory or somewhere similar.
+
+
+
+
+```sh
+ln -s /Users/my_name/software/swanky-cli/bin/swanky /usr/local/bin
+```
+
+
+
+
+```sh
+ln -s /home/my_name/swanky-cli/bin/swanky /usr/local/bin
+```
+
+
+
+
+### Globally with npm
+
+This approach may seem simpler, but due to the nature of `Node.js` dependency management, it may result in version inconsistencies or other errors.
+
+```sh-session
+$ npm install -g @astar-network/swanky-cli
+```
+
+or
+
+```sh-session
+$ npx @astar-network/swanky-cli [command]
+```
+
+## Using swanky-cli
+
+If you're using a dev container, or have followed the installation instructions, you should have `swanky` command available in your terminal.
+
+Running it without any arguments (or with `-h`/`--help`) will provide you with a list of top-level commands and the app version.
+
+Passing `help` as an argument and providing the `-n`/`--nested-commands` flag will show the full list of commands, including nested ones:
+
+```bash
+swanky help --nested-commands
+```
+
+
+
+Note that every command and subcommand also supports `-h`/`--help` flags to display their usage instructions.
+
+Likewise, most commands support the `-v`/`--verbose` flag, which you can use to get more detailed output (useful for debugging and reporting errors).
+
+### Bootstrap a new project
+
+Using the `swanky init` command, you'll be prompted for a series of answers to define your project and the first smart contract within it.
+
+After gathering all the required information, the app will check your environment, scaffold the project, and (optionally) download and install the Swanky node and project dependencies.
+
+```
+swanky init PROJECT_NAME
+```
+
+
+
+The resulting folder structure should look something like this:
+
+
+
+If you want to start from a more complete example like those in the swanky-dapps repo, or rmrk-ink, or you want to convert your existing contract to a swanky project, you can use `swanky init --convert` command.
+
+It will prompt you for locations of your contract files, as well as additional crates and tests.
+
+In the last step, you'll be shown a list of files to be copied over, and you'll be able to deselect any that are not needed.
+
+
+
+
+
+:::note
+Swanky will look for a common ink! configuration and will do its best to copy everything to equivalent paths, but you will likely have to adjust some configs and import paths manually after conversion.
+:::
+
+_Resources:_
+
+- [_`swanky init` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-init-projectname)
+- [_available templates_](https://github.com/AstarNetwork/swanky-cli/tree/master/src/templates/contracts)
+
+### Check dependencies and compatibility
+
+You can quickly check the presence and versions of required dependencies by running the `swanky check` command.
+
+
+
+:::info
+For now, you will need to be in a project folder to run this command.
+
+This command will be updated to fix that, and provide more useful information.
+:::
+
+_Resources:_
+
+- [_`swanky check` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-check)
+
+### Manage accounts
+
+Create and list accounts used for contract interaction.
+
+These are the accounts stored inside your `swanky.config.json`, so the command needs to be run from within the project directory.
+
+During account creation, you'll have the option of passing in your own mnemonic, or having Swanky generate one for you by passing the `-g` flag.
+
+You can also mark the account as "production" which will require you to set a password and encrypt the mnemonic.
+
+Be careful not to use a dev account on live networks, as their mnemonic is stored in plain text in the config!
+
+
+
+:::tip
+Newly generated accounts that are not the preconfigured dev accounts (Alice, Bob, Charlie...) will have no funds initially, so you'll have to transfer some manually.
+:::
+
+_Resources:_
+
+- [_`swanky account` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-account-create)
+
+### Interact with contracts
+
+`swanky contract` command offers several subcommands for different interactions with your contracts.
+
+
+
+The command names are self-explanatory. To get more detailed information on a specific command, use the help flag with it:
+
+```
+swanky contract SUB_COMMAND --help
+```
+
+#### Compile
+
+Your contracts are listed in `swanky.config.json`, and can be referred to by `name` field. Calling `swanky contract compile CONTRACT_NAME` will run cargo-contract compiler, generate TS types using [Typechain](https://github.com/Brushfam/typechain-polkadot), and move the artifacts and types to appropriate locations for later usage.
+
+If you have multiple contracts and wish to compile them all at once, you can pass the `--all` flag instead of the contract name.
+
+Likewise, if you're compiling for production, you need to pass the `--prod` flag.
+
+
+
+_Resources:_
+
+- [_`contract compile` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-contract-compile-contractname)
+
+#### Get detailed contract description
+
+Compiling the contract also generates its metadata.
+
+Swanky provides the `contract explain CONTRACT_NAME` command to get a more human-friendly version of that metadata:
+
+
+
+_Resources:_
+
+- [_`contract explain` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-contract-explain-contractname)
+
+#### Run E2E tests
+
+You can test your contracts using [Mocha](https://mochajs.org/) framework and [Chai](https://www.chaijs.com/) assertions.
+
+:::note
+Please note that these tests are not ink! E2E tests; they are written in TypeScript and require a local node to be running.
+
+You can get more information on ink! E2E test framework in the [ink! documentation](https://use.ink/basics/contract-testing/#end-to-end-e2e-tests).
+:::
+A contract template will provide you with a simple test as well, which you can use as a starting point.
+
+The tests utilize [@polkadot/api](https://polkadot.js.org/docs/api/) library, and contract types generated by [typechain-polkadot](https://github.com/727-Ventures/typechain-polkadot).
+The types are generated during the compile step and copied to the `typedContract/contract_name` directory, as well as to the `tests/*/artifacts/` directory. If you only need to regenerate the types
+(for example, because you deleted or edited them), you can do so without going through the whole compilation step by using the `swanky contract typegen` command.
+
+Running `swanky contract test CONTRACT_NAME` will detect all `*.test.ts` files in the `tests/contract_name/` directory, and run them sequentially, or in all directories inside `tests/` if you pass the `-a`/`--all` flag.
+
+
+
+:::tip
+Running the tests programmatically may throw warnings about duplicate dependencies on `@polkadot/*` libraries.
+This occurs because those libraries are included in the swanky app itself, as well as in the test files.
+Apart from the warning spam, no negative consequences of this have been observed.
+
+If you want to avoid the warnings anyway, you can run tests as a yarn/npm command:
+
+`yarn test` or
+
+`npm run test`
+:::
+
+A web-based report will be generated and stored in the `tests/*/testReports` directory. You can copy the path of the reports and use the `serve` app to view them in a browser (`serve` is included in swanky-dev-container):
+
+```
+serve PATH_TO_REPORTS
+```
+
+
+
+_Resources:_
+
+- [_`swanky contract test` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-contract-test-contractname)
+
+#### Deploy your contract
+
+When your contract is compiled and tested, you can deploy it to a local node or a remote network.
+
+You will need to supply the account you wish to deploy the contract from (`--account`), and any arguments required by your contract's constructor (`-a`).
+
+By default, your contract will be deployed to a local node, but you can pass a custom network via the `-n`/`--network` flag. Available networks are configured in the `swanky.config.json` file.
+
+
+
+Successfully running the `deploy` command will print out the address your contract is deployed to, as well as save it into `swanky.config.json`.
+
+_Resources:_
+
+- [_`swanky contract deploy` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-contract-deploy-contractname)
+
+#### Run queries and transactions
+
+Once your contract is deployed, you can call it from the CLI using `query` or `tx` commands.
+
+Use `query` for read-only calls, and `tx` for the calls that change the chain state and require signing (and a gas fee).
+
+Both commands require the `CONTRACT_NAME` and `MESSAGE_NAME` parameters, and for `tx` a caller account needs to be provided too (`-a`/`--account`).
+
+If the message you're calling requires arguments to be passed, you can do that using `-p`/`--param` flag.
+
+
+
+
+
+The result of a `query` is straightforward: `OK` followed by whatever the response is.
+
+The transaction (`tx`) output is a bit more raw, though. The important fields to note are `dispatchError` and `internalError`, plus the `status` field.
+If the errors are `undefined` and the status is `finalized`, your transaction has been successful.
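The success check described above can be sketched as a small predicate. This is an illustration of the rule only, not swanky's actual implementation; the function name is ours:

```rust
/// Sketch of the `tx` success check described above: a transaction is
/// considered successful when both error fields are absent and the status
/// is "finalized". Illustrative only -- not swanky's actual code.
pub fn tx_succeeded(
    dispatch_error: Option<&str>,
    internal_error: Option<&str>,
    status: &str,
) -> bool {
    dispatch_error.is_none() && internal_error.is_none() && status == "finalized"
}
```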
+
+:::tip
+Gas fee for `tx` is calculated and applied automatically, but you can provide a gas limit manually by using the `-g`/`--gas` flag.
+
+Keep in mind that the transaction will fail if you provide too low a value.
+:::
+
+_Resources:_
+
+- [_`swanky contract query` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-contract-query-contractname-messagename)
+- [_`swanky contract tx` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-contract-tx-contractname-messagename)
+
+#### Add a new contract from template
+
+You can create additional contracts in the same project, using the `contract new` command and selecting from predefined templates.
+
+The contract will be referred to by `name` when using the relevant contract commands, and you can check the details in `swanky.config.json`.
+
+
+
+_Resources:_
+
+- [_`swanky contract new` command usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-contract-new-contractname)
+
+### Interact with a local node
+
+If you chose to download and use the Swanky Node during the init step, you can use the `swanky node` commands to start and manage it.
+
+Simply running `swanky node start` will start the node, and the node will preserve the state across restarts.
+
+If you want to reset the node state, use the `swanky node purge` command.
+
+
+
+:::info
+Note that the node needs to be running if you are using the default local network with the `deploy`, `query` and `tx` commands.
+:::
+
+:::info
+If you chose not to download the swanky-node during `init`, or you changed the OS environment (for example, switched to a dev container after running `init` on the host OS, or vice versa), you can run
+`swanky node install`
+to download the node for the currently running platform.
+:::
+
+:::caution
+If you want to use an external UI to interact with the node, you might run into some CORS issues.
+
+This can be solved by passing a custom array of whitelisted URLs using the `--rpcCors` flag.
+:::
+
+_Resources:_
+
+- [_`swanky node` commands usage manual_](https://github.com/AstarNetwork/swanky-cli#swanky-node-install)
+
+## Using plugins
+
+Swanky CLI's functionality can be extended with plugins, which are a way to add new, case-specific commands without modifying the core codebase.
+
+One work-in-progress example is the [Phala plugin](https://github.com/AstarNetwork/swanky-plugin-phala).
+
+:::info
+If you are interested in developing a plugin, you can refer to the Phala example, and the [Oclif plugin documentation](https://oclif.io/docs/plugins), or you can post a request in [swanky-cli repo](https://github.com/AstarNetwork/swanky-cli/issues)'s issues.
+:::
+
+_Resources:_
+
+- [_`swanky plugin` commands usage manual_](https://github.com/AstarNetwork/swanky-cli/tree/master/packages/cli#swanky-plugins)
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/index.md b/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/index.md
new file mode 100644
index 0000000..d378598
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/index.md
@@ -0,0 +1,46 @@
+import Figure from '/src/components/figure'
+
+# Swanky Suite
+
+Swanky Suite aims to be an "all-in-one" tool for Wasm smart contract developers. It is based on existing tools like the `cargo contract` CLI and `polkadot.js`, but extends their functionality with many additional features, such as smart contract templates and an instant finality (Swanky) node, which shortens the contract development lifecycle.
+
+Swanky Suite provides Web3 Wasm dApp developers with an experience more in line with what they're familiar with from popular EVM tooling.
+
+Swanky Suite offers an extensible set of features, allowing developers to:
+
+- Quickly spin up a local contract development node with instant finality (Swanky Node).
+- Use a ready dev environment via a prebuilt Docker image and dev-container configuration.
+- Easily scaffold new projects using templates for both smart contracts and (soon) front-end dApps.
+- Compile ink! projects and generate TS types.
+- Write TypeScript-based integration tests simulating interaction from the client side.
+- Handle and manage network accounts.
+- Deploy smart contracts within the Polkadot ecosystem to networks that support `pallet-contracts`.
+- Make arbitrary calls to deployed smart contracts.
+
+## Architecture Overview
+
+The Swanky Suite consists of three parts: Swanky CLI, Swanky Node, and the Docker image (the Dockerfile is in the swanky-cli repo, and the built image is [hosted on GitHub](https://github.com/AstarNetwork/swanky-cli/pkgs/container/swanky-cli%2Fswanky-base)).
+
+Source code for both Swanky CLI and Swanky Node is hosted on GitHub:
+
+- [Swanky CLI](https://github.com/AstarNetwork/swanky-cli).
+- [Swanky Node](https://github.com/AstarNetwork/swanky-node).
+
+## Documentation and resources
+
+This documentation's sub-sections on the usage of [Swanky CLI](/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/index.md) and [Swanky Node](/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/node.md) have great instructions on how to set up the tool and start using it right away.
+
+[`swanky` CLI Github repo] with the latest documentation.
+
+[`swanky-node` Github repo] with the latest documentation.
+
+[`pallet-contracts`] documentation on the Parity GitHub.
+
+[ink! language] repo and specification
+
+[`pallet-contracts`]: https://github.com/paritytech/substrate/tree/master/frame/contracts
+[`swanky-node` github repo]: https://github.com/AstarNetwork/swanky-node
+[`swanky` cli github repo]: https://github.com/AstarNetwork/swanky-cli
+[ink! language]: https://github.com/paritytech/ink
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/node.md b/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/node.md
new file mode 100644
index 0000000..1e6bd08
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/swanky-suite/node.md
@@ -0,0 +1,181 @@
+---
+sidebar_position: 2
+---
+
+# Swanky Node
+
+Swanky Node is a Substrate based blockchain configured to enable `pallet-contracts` (a smart contract module), and other features that assist local development of Wasm smart contracts.
+
+### Features
+
+- [pallet-contracts](https://github.com/paritytech/substrate/tree/master/frame/contracts)
+- `grandpa` & `aura` consensus were removed. Instead, `instant-seal` & `manual-seal` are used.
+  Blocks are authored & finalized (1) as soon as a transaction gets into the pool, and (2) when the `engine_createBlock` / `engine_finalizeBlock` RPCs are called, respectively.
+- `pallet-dapps-staking`
+- `pallet-assets`
+- `pallet-assets` chain extension
+- `pallet-dapps-staking` chain extension
+
+Swanky Node is optimized for local development and strips out unnecessary components such as P2P networking. Additional features and pallets, such as Contract <-> Runtime interaction, will be added in the future.
+
+### Compatible ink! version
+
+ink! `v4.0.0` or lower is supported.
+
+### Installation
+
+#### Download Binary
+
+The easiest installation method is to download and execute a precompiled binary from the [Release Page](https://github.com/AstarNetwork/swanky-node/releases).
+
+#### Build Locally
+
+If you would like to build the source locally, you should first complete the [basic Rust setup instructions](/docs/build/build-on-layer-1/environment/ink_environment.md#rust-and-cargo).
+Once Rust is installed and configured, you will be able to build the node with:
+
+```bash
+cargo build --release
+```
+
+### Embedded Docs :book:
+
+Once the project has been built, the following command can be used to explore all parameters and
+subcommands:
+
+```bash
+./target/release/swanky-node -h
+```
+
+### Usage
+
+This command will start the single-node development chain with a persistent state.
+
+```bash
+./target/release/swanky-node
+```
+
+If you would prefer to run the node in non-persistent mode, use the `--tmp` option.
+
+```bash
+./target/release/swanky-node --tmp
+# or
+./target/release/swanky-node --dev
+```
+
+To purge the development chain's state, run:
+
+```bash
+./target/release/swanky-node purge-chain
+```
+
+### Development Accounts
+
+The **alice** development account will be the authority and sudo account as declared in the
+[genesis state](https://github.com/AstarNetwork/swanky-node/blob/main/node/src/chain_spec.rs#L44).
+At the same time, the following accounts will be pre-funded:
+
+- Alice
+- Bob
+- Charlie
+- Dave
+- Eve
+- Ferdie
+- Alice//stash
+- Bob//stash
+- Charlie//stash
+- Dave//stash
+- Eve//stash
+- Ferdie//stash
+
+### Show only Errors and Contract Debug Output
+
+To print errors and contract debug output to the console log, supply `-lerror,runtime::contracts=debug` when starting the node.
+
+```bash
+./target/release/swanky-node -lerror,runtime::contracts=debug
+```
+
+Important: Debug output is only printed for RPC calls or off-chain tests ‒ not for transactions.
+
+See the ink! [FAQ](https://ink.substrate.io/faq/#how-do-i-print-something-to-the-console-from-the-runtime) for more details: *How do I print something to the console from the runtime?*
+
+### Connect with Polkadot.js Apps Portal
+
+Once the Swanky Node is running locally, you will be able to connect to it from the **Polkadot-JS Apps** front-end,
+in order to interact with your chain. [Click
+here](https://polkadot.js.org/apps/#/explorer?rpc=ws://localhost:9944) to connect the Apps to your
+local Swanky Node.
+
+### Run in Docker
+
+First, install [Docker](https://docs.docker.com/get-docker/) and
+[Docker Compose](https://docs.docker.com/compose/install/).
+
+Then run the following command to start a single-node development chain:
+
+```bash
+mkdir .local # this is mounted by container
+./scripts/docker_run.sh
+```
+
+This command will compile the code, and then start a local development network. You can
+also replace the default command
+(`cargo build --release && ./target/release/swanky-node --dev --ws-external`)
+by appending your own. A few useful commands are shown below:
+
+```bash
+# Run Substrate node without re-compiling
+./scripts/docker_run.sh ./target/release/swanky-node --ws-external
+
+# Purge the local dev chain
+./scripts/docker_run.sh ./target/release/swanky-node purge-chain
+
+# Check whether the code is compilable
+./scripts/docker_run.sh cargo check
+```
+
+### Consensus (Manual Seal & Instant Seal)
+
+Unlike other blockchains, Swanky Node adopts block authoring and finality gadgets referred to as Manual Seal and Instant Seal, consensus mechanisms suitable for contract development and testing.
+
+- **Manual seal**: blocks are authored whenever an RPC is called.
+- **Instant seal**: blocks are authored as soon as transactions enter the pool, most often resulting in one transaction per block.
+
+Swanky Node enables both Manual seal and Instant seal.
+
+#### Manual Sealing via RPC call
+
+We can tell the node to author a block by calling the `engine_createBlock` RPC.
+
+```bash
+$ curl http://localhost:9944 -H "Content-Type:application/json;charset=utf-8" -d '{
+ "jsonrpc":"2.0",
+ "id":1,
+ "method":"engine_createBlock",
+ "params": [true, false, null]
+ }'
+```
+
+#### Params
+
+- **Create Empty**
+  `create_empty` is a Boolean value indicating whether empty blocks may be created. Setting `create_empty` to `true` does not mean that an empty block will necessarily be created; rather, it means that the engine should go ahead and create a block even if no transactions are present. If transactions are present in the queue, they will be included regardless of the value of `create_empty`.
+
+- **Finalize**
+ `finalize` is a Boolean value indicating whether the block (and its ancestors, recursively) should be finalized after creation.
+
+- **Parent Hash**
+  `parent_hash` is an optional hash of a block to use as the parent. To set the parent, use the format `"0x0e0626477621754200486f323e3858cd5f28fcbe52c69b2581aecb622e384764"`. To omit the parent, use `null`. When the parent is omitted, the block will be built on the current best block. Manually specifying the parent is useful for constructing fork scenarios and demonstrating chain reorganizations.
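To make the `create_empty` semantics concrete, here is a toy decision function mirroring the rule described above (a sketch, not the node's actual code):

```rust
/// Toy model of the `create_empty` rule: the engine authors a block when
/// there are transactions in the pool, or when empty blocks are allowed.
/// Queued transactions are always included, regardless of `create_empty`.
pub fn should_author_block(create_empty: bool, pool_size: usize) -> bool {
    pool_size > 0 || create_empty
}
```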
+
+#### Finalizing Blocks Manually
+
+In addition to finalizing blocks at the time of creating them, they may also be finalized later by using the RPC call `engine_finalizeBlock`.
+
+```bash
+$ curl http://localhost:9944 -H "Content-Type:application/json;charset=utf-8" -d '{
+ "jsonrpc":"2.0",
+ "id":1,
+ "method":"engine_finalizeBlock",
+ "params": ["0x0e0626477621754200486f323e3858cd5f28fcbe52c69b2581aecb622e384764", null]
+ }'
+```
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/_category_.json b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/_category_.json
new file mode 100644
index 0000000..50d4585
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Tools and Libraries",
+ "position": 15
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/awasome.md b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/awasome.md
new file mode 100644
index 0000000..ca90445
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/awasome.md
@@ -0,0 +1,8 @@
+---
+sidebar_position: 4
+---
+
+# aWASoMe
+
+An aWASoMe list of all things related to Wasm smart contract development in the Polkadot ecosystem.
+[aWASoMe Github link](https://github.com/AstarNetwork/aWASoMe)
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/01a-folder_structure.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/01a-folder_structure.png
new file mode 100644
index 0000000..28e2a08
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/01a-folder_structure.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/1.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/1.png
new file mode 100644
index 0000000..47d1cd8
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/1.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/2.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/2.png
new file mode 100644
index 0000000..3e9eedf
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/2.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/3.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/3.png
new file mode 100644
index 0000000..3545f42
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/3.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/4.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/4.png
new file mode 100644
index 0000000..4f9c270
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/4.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/5.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/5.png
new file mode 100644
index 0000000..974d5a7
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/5.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/6.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/6.png
new file mode 100644
index 0000000..cb9e77c
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/6.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/7.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/7.png
new file mode 100644
index 0000000..cd8e786
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/7.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/8.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/8.png
new file mode 100644
index 0000000..1e17088
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/8.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/SwankySuiteAstar.png b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/SwankySuiteAstar.png
new file mode 100644
index 0000000..b58bfc9
Binary files /dev/null and b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/img/SwankySuiteAstar.png differ
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/index.md b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/index.md
new file mode 100644
index 0000000..9b8a0b3
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/index.md
@@ -0,0 +1,8 @@
+# Tools and libraries
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/openbrush.md b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/openbrush.md
new file mode 100644
index 0000000..7e3d33c
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/openbrush.md
@@ -0,0 +1,51 @@
+---
+sidebar_position: 1
+---
+
+# Use OpenBrush
+
+## OpenBrush
+
+[OpenBrush] is a library for smart contract development on ink!, maintained by the [BrushFam] team, and is inspired by OpenZeppelin for Solidity.
+
+OpenBrush provides standard contracts based on [PSPs], as well as other useful contracts and Rust macros that help developers build ink! smart contracts.
+
+Why use OpenBrush?
+
+- To create **interoperable** smart contracts that perform **safe** cross-contract calls (by maintaining consistent signatures across contracts).
+- To comply with [Polkadot Standards Proposals][PSPs].
+- To ensure usage of the **latest and most secure** implementation.
+- Templates provide customizable logic that can be implemented easily in smart contracts.
+- To **save time** by not having to write boilerplate code.
+
+Which token standards and contracts does OpenBrush provide?
+
+- **PSP22**: Fungible Token (*ERC20 equivalent*) with extensions.
+- **PSP34**: Non-Fungible Token (*ERC721 equivalent*) with extensions.
+- **PSP37**: *ERC1155 equivalent* with extensions.
+- **Ownable**: Restrict access to actions for non-owners.
+- **Access Control**: Define a set of roles and restrict access to an action by roles.
+- **Reentrancy Guard**: Prevent reentrant calls to a function.
+- **Pausable**: Pause/Unpause the contract to disable/enable some operations.
+- **Timelock Controller**: Execute transactions with some delay.
+- **Payment Splitter**: Split the amount of native tokens between participants.
+
+### Generic Trait Implementation
+
+More importantly, OpenBrush adds support for generic Trait implementation, so you can split Traits and their implementations into different files, which increases the readability and maintainability of your smart contract code base (see the detailed description [here](https://github.com/727-Ventures/openbrush-contracts/blob/main/docs/docs/smart-contracts/example/setup_project.md)).
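As a plain-Rust analogy of this split (no ink! or OpenBrush macros involved; module and type names are ours), a trait definition and its implementation can live in separate modules or files:

```rust
// Plain-Rust analogy of splitting a trait from its implementation, as
// OpenBrush encourages for ink! contracts. In a real project these would
// live in separate files (e.g. traits/psp22.rs and impls/psp22.rs).
mod traits {
    /// The interface other code (or contracts) relies on.
    pub trait Psp22Like {
        fn total_supply(&self) -> u128;
        fn balance_of(&self, owner: u8) -> u128;
    }
}

mod impls {
    use super::traits::Psp22Like;

    /// A concrete type providing the trait's behavior.
    pub struct Token {
        pub supply: u128,
    }

    impl Psp22Like for Token {
        fn total_supply(&self) -> u128 {
            self.supply
        }
        fn balance_of(&self, _owner: u8) -> u128 {
            0 // simplified: no per-account bookkeeping in this sketch
        }
    }
}
```

Readers of the trait module see the contract's interface without wading through implementation details.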
+
+### Wrapper around Traits
+
+A Trait definition alone is sufficient (a contract that implements that Trait is no longer needed) to call methods of that Trait from another contract on the network (i.e. to perform a cross-contract call). This makes cross-contract calls easier.
+
+### Documentation
+
+- [OpenBrush Github repo](https://github.com/727-Ventures/openbrush-contracts)
+- [Official Docs](https://docs.openbrush.io/)
+- [OpenBrush website](https://openbrush.io/)
+- [Substrate Seminar (Youtube)](https://www.youtube.com/watch?v=I5OFGNVvzOc)
+- [How to use modifiers](https://medium.com/supercolony/how-to-use-modifiers-for-ink-smart-contracts-using-openbrush-7a9e53ba1c76)
+
+[OpenBrush]: https://github.com/727-Ventures/openbrush-contracts
+[PSPs]: https://github.com/w3f/PSPs
+[Brushfam]: https://www.brushfam.io/
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/polkadotjs.md b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/polkadotjs.md
new file mode 100644
index 0000000..5406bb9
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/polkadotjs.md
@@ -0,0 +1,46 @@
+---
+sidebar_position: 2
+---
+
+# Polkadot.js Apps UI
+
+## Deploy a Wasm Smart Contract with Polkadot.js
+
+This is a step-by-step tutorial demonstrating how to deploy a Wasm smart contract to the Shibuya testnet with Polkadot.js Apps.
+
+You can deploy the Wasm blob separately from the metadata, but in this example we’ll use the `.contract` file, which combines the Wasm and metadata files. If you used ink! and `cargo contract build`, you will find the `.contract` file under:
+
+`./target/ink/myProg.contract`
+
+## Contract Page on Polkadot.js
+
+First, we will deploy the contract:
+
+1. Open Polkadot.js Apps in your browser and connect to the Shibuya testnet. For connectivity instructions, check the Integration chapter within this doc.
+2. Go to the `Developer -> Contracts` page.
+
+![1](img/1.png)
+
+3. Upload the contract
+
+![2](img/2.png)
+
+4. From the pop-up window upload the `.contract` file:
+
+![3](img/3.png)
+
+5. Set values for the constructor and deploy the contract:
+
+![4](img/4.png)
+
+6. Now you can interact with the contract:
+
+![5](img/5.png)
+
+## Deploy a contract from an existing `code hash`
+
+To deploy from an existing `code hash`, you will need to have the `code hash` on hand, then click `Add an existing code hash`.
+
+![6](img/6.png)
+![7](img/7.png)
+![8](img/8.png)
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/tools.md b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/tools.md
new file mode 100644
index 0000000..f827adc
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/tooling/tools.md
@@ -0,0 +1,31 @@
+---
+sidebar_position: 3
+---
+
+# Other
+
+## Sol2Ink
+[Sol2Ink] is another tool maintained by the [BrushFam] team, used for easy migration of smart contracts from Solidity to ink! and Rust, helping developers move from EVM platforms to Polkadot.
+
+How does it work? Simply input your Solidity code, and in a few seconds Sol2Ink will convert it to an ink! smart contract. Since the contracts are transcoded automatically, it is a good idea to review and build them afterwards to verify they still perform as expected. Even so, most of the heavy lifting will be done for you by the Sol2Ink tool.
+
+## Typechain-Polkadot
+[Typechain-Polkadot] is another [BrushFam]-maintained tool, designed to improve developers’ experience with frontend usage of ink! smart contracts, and deployment and integration testing by providing TypeScript types for ink! smart contracts.
+
+This tool will build contracts, create the artifacts, and then create the TypeScript classes, which can then be integrated into your UI or TypeScript tests.
+
+## Solang
+[Solang](https://solang.readthedocs.io/en/latest/) is a Solidity compiler for Solana and Substrate. Using Solang, you can compile smart contracts written in Solidity for Solana and [Parity Substrate](https://substrate.io/). Solang uses the [LLVM](https://www.llvm.org/) compiler framework to produce WebAssembly (Wasm) or BPF contract code. As a result, the output is highly optimized, which saves gas costs or compute units.
+
+## parity-common
+
+[`parity-common`](https://github.com/paritytech/parity-common) is a collection of crates that you can use in your ink! contracts.
+
+It offers all Ethereum types and is useful if you would like to port Solidity code to ink!.
+
+
+[Brushfam]: https://www.brushfam.io/
+[Sol2Ink]: https://github.com/727-Ventures/sol2ink
+[Typechain-Polkadot]: https://github.com/727-Ventures/typechain-polkadot
diff --git a/docs/build/build-on-layer-1/smart-contracts/wasm/transaction-fees.md b/docs/build/build-on-layer-1/smart-contracts/wasm/transaction-fees.md
new file mode 100644
index 0000000..84ed5cf
--- /dev/null
+++ b/docs/build/build-on-layer-1/smart-contracts/wasm/transaction-fees.md
@@ -0,0 +1,152 @@
+---
+sidebar_position: 12
+---
+
+# Transaction Fees
+
+## Weight
+
+As is also the case with Substrate, `pallet-contracts` uses [weightV2][weight] to charge execution fees. It is composed of `refTime` and `proofSize`:
+- `refTime`: the amount of computational time that can be used for execution, in picoseconds.
+- `proofSize`: the size of the data that needs to be included in the proof of validity in order for the relay chain to verify the transaction's state changes, in bytes. Accessing storage therefore grows the gas fees.
+
+:::info
+Gas = Weight = (refTime, proofSize)
+:::
+
+[Transaction Weight in Substrate Documentation][weight]
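Because weight is two-dimensional, a call must fit within limits on both axes; exhausting either `refTime` or `proofSize` exhausts the budget. A minimal sketch of that idea (the struct and limit values here are illustrative, not the actual Substrate types):

```rust
/// Minimal model of WeightV2 as the (refTime, proofSize) pair described
/// above. Illustrative only -- not the actual Substrate `Weight` type.
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Weight {
    pub ref_time: u64,   // computational time, in picoseconds
    pub proof_size: u64, // proof-of-validity size, in bytes
}

impl Weight {
    /// A weight fits a limit only if BOTH dimensions fit: exhausting
    /// either refTime or proofSize is enough to exceed the limit.
    pub fn fits_within(&self, limit: Weight) -> bool {
        self.ref_time <= limit.ref_time && self.proof_size <= limit.proof_size
    }
}
```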
+
+## Storage Rent
+
+Storage rent, also called *Automatic Deposit Collection*, is a mechanism that ensures the security of the chain by preventing on-chain storage spamming.
+It prevents malicious actors from spamming the network with low-value transactions and ensures that callers have a financial stake when storing data on-chain.
+
+Users are charged for every byte stored on-chain, and the call transfers this fee from the free balance of the user to the reserved balance of the contract. Note that the contract itself is unable to spend this reserved balance (but it can expose a function that removes on-chain storage, in which case the caller gets the funds back).
+This also incentivizes users to remove unused data from the chain by refunding the rent fees. Any user who removes on-chain data can get the rent fees back (not only the user who was first charged). It's up to contract developers and users to understand how, and whether, they can get their storage deposit back.
+
+### Storage Rent Calculation
+
+This fee is calculated with the price set for each storage item, `DepositPerItem`, and for each byte of storage, `DepositPerByte`. On Astar, the deposit fees are defined as follows (more detail in this [Astar forum post](https://forum.astar.network/t/revising-rent-fee-on-astar-shiden/4309/1)):
+
+| Deposit Type | Shiden | Astar |
+|--------------|----------------------|--------------------|
+| Per Storage Byte | 0.00002 SDN | 0.002 ASTR |
+| Per Storage Item | 0.004 SDN | 0.4 ASTR |
+
+#### Example
+
+When a user stores a new key/value pair in a `Mapping` field, one `DepositPerItem` is charged. The length in bytes of the value is also added to the fee (bytes length x `DepositPerByte`).
+For example, if a user stores a new entry in a `Mapping` keyed by `AccountId` (32 bytes), they will be charged `DepositPerItem` + 32 x `DepositPerByte`.
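Using the Astar prices from the table above, the deposit can be computed as follows. This is a sketch in integer units of 0.001 ASTR to avoid floating point; the helper name is ours, not an on-chain API:

```rust
/// Storage deposit on Astar, expressed in milliASTR (1 ASTR = 1000 units):
/// DepositPerItem = 0.4 ASTR = 400, DepositPerByte = 0.002 ASTR = 2.
/// Illustrative helper -- not an on-chain API.
pub fn astar_deposit_milli(items: u64, bytes: u64) -> u64 {
    const DEPOSIT_PER_ITEM: u64 = 400; // 0.4 ASTR
    const DEPOSIT_PER_BYTE: u64 = 2;   // 0.002 ASTR
    items * DEPOSIT_PER_ITEM + bytes * DEPOSIT_PER_BYTE
}
```

One new `Mapping` entry with a `u32` value (4 bytes) therefore reserves 0.4 + 4 x 0.002 = 0.408 ASTR, matching the contract example further below.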
+
+### What does it mean?
+
+#### For users
+
+The first call to a dApp (one or several smart contracts) will usually be more expensive than the following ones.
+This is because the first call creates a lot of new entries for the user (most of the time, data related to the user's `AccountId`, like a `Mapping` of balances). From the second call onward it should be much cheaper (or free), because it will just update those items.
+
+If consecutive calls only modify an existing database entry, the caller is only charged for the extra bytes they add to the entry. If they reduce the size of the DB entry, they will get storage rent back. In practice, this means a user can increase their free balance after interacting with a smart contract!
+
+If users want to get their deposit back, they should remove on-chain data. This is only possible if the smart contract exposes a function that removes data from the chain (like `remove_mapping_entry` in the example below).
+
+#### For smart-contracts developers
+
+As the only way for users to get their reserved balance back is to remove on-chain data, it is important to make sure the smart contract exposes functions that allow users to do so.
+If the contract does not expose such functions, there will be no way to remove the on-chain data used by the contract, and
+users will not be able to get their reserved balance back (it will remain reserved on the contract account).
+
+### StorageDepositLimit
+
+When making a contract call, one of the arguments is `StorageDepositLimit`. This value is the maximum amount of storage rent that can be charged for a single call.
+:::important
+If `StorageDepositLimit` is set to `None`, contracts are allowed to charge an arbitrary amount of funds, which can be drained from the caller's account.
+:::
+So it is necessary to set a limit (first dry-run the call to get the storage deposit amount) to prevent malicious contracts from draining funds from a user's account.
+This especially applies to front-end applications that trigger contract calls, or calls sent from a contracts UI (like [contracts-UI](https://contracts-ui.substrate.io/) or the [polkadot-js UI](https://polkadotjs-apps.web.app/?rpc=wss%3A%2F%2Frpc.astar.network#/contracts)).
+
+Users are responsible for setting both the gas limit and the storage deposit limit. This is similar to EVM smart contracts, except that in addition to non-refundable gas, you also have to account for `StorageDepositLimit`.
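A sketch of the safe pattern is to derive an explicit limit from a dry-run result instead of passing `None`. The 10% headroom and the dry-run figure below are illustrative assumptions, not a prescribed API:

```rust
/// Given the storage deposit reported by a dry-run of the call, choose an
/// explicit `StorageDepositLimit` with a little headroom instead of `None`.
fn storage_deposit_limit(dry_run_deposit: u128) -> u128 {
    dry_run_deposit + dry_run_deposit / 10 // +10% headroom (illustrative)
}

fn main() {
    let dry_run = 48_000_000_000_000_000u128; // e.g. 0.048 ASTR reported by a dry-run
    let limit = storage_deposit_limit(dry_run);
    assert_eq!(limit, 52_800_000_000_000_000); // the most this call may reserve
    println!("StorageDepositLimit: {limit} planck");
}
```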
+
+### Contract example on Astar
+
+```rust
+#[ink::contract]
+mod rent {
+ use ink::storage::Mapping;
+
+ #[ink(storage)]
+ pub struct Rent {
+ map: Mapping<AccountId, u32>,
+ int: u32,
+ bool: bool,
+ }
+
+ impl Rent {
+ #[ink(constructor)]
+ pub fn new() -> Self {
+ Self { map: Default::default(), int: 0, bool: false }
+ }
+
+ #[ink(message)]
+ pub fn update_32(&mut self, i: u32) {
+ self.int = i
+ }
+
+ #[ink(message)]
+ pub fn flip_bool(&mut self) {
+ self.bool = !self.bool
+ }
+
+ #[ink(message)]
+ pub fn add_mapping_entry(&mut self) {
+ let caller = self.env().caller();
+ // Insert one item into storage: fee = 1 * PricePerItem (0.04 ASTR) = 0.04 ASTR
+ // The mapping value is a u32 (4 bytes): fee = 4 * PricePerByte (0.002 ASTR) = 0.008 ASTR
+ // Total fee = 0.048 ASTR
+ self.map.insert(caller, &1u32);
+ }
+
+ #[ink(message)]
+ pub fn remove_mapping_entry(&mut self) {
+ let caller = self.env().caller();
+ // Clears the value at key from storage.
+ // Remove one item from storage: fee = 1 * PricePerItem (0.04 ASTR) = 0.04 ASTR
+ // Remove the u32 mapping value (4 bytes): fee = 4 * PricePerByte (0.002 ASTR) = 0.008 ASTR
+ // Total reserve repatriated to the caller = 0.048 ASTR
+ self.map.remove(caller);
+ }
+
+ #[ink(message)]
+ pub fn remove_entry_account_id(&mut self, who: AccountId) {
+ // Clears the value at key from storage.
+ // Remove one item from storage: fee = 1 * PricePerItem (0.04 ASTR) = 0.04 ASTR
+ // Remove the u32 mapping value (4 bytes): fee = 4 * PricePerByte (0.002 ASTR) = 0.008 ASTR
+ // Total reserve repatriated to the caller = 0.048 ASTR
+ self.map.remove(who);
+ }
+ }
+}
+```
+
+#### `add_mapping_entry`
+
+The fee (the balance reserved, i.e. moved from the user's free balance to the contract's reserved balance) will be:
+1. Insert one item into storage: fee = 1 * `PricePerItem` (0.04 ASTR)
+2. The mapping value is a u32 (4 bytes): fee = 4 * `PricePerByte` (0.002 ASTR) = 0.008 ASTR
+3. Total fee = 0.048 ASTR
+
+#### `remove_mapping_entry`
+
+The balance repatriated (the balance moved from the contract account's reserve back to the user's account) will be:
+1. Remove one item from storage: fee = 1 * `PricePerItem` (0.04 ASTR)
+2. Remove the u32 mapping value (4 bytes): fee = 4 * `PricePerByte` (0.002 ASTR) = 0.008 ASTR
+3. Total reserve repatriated to the caller = 0.048 ASTR
+
+#### `remove_entry_account_id`
+
+The balance is repatriated to the caller, not to the user who was originally charged, because it is transferred from the contract's reserved balance to the caller's free balance. The caller receives the 0.048 ASTR.
+
+#### `flip_bool` & `update_32`
+
+These calls incur no rent fees because they store no new data on-chain (they only update existing values).
+
+[weight]: https://docs.substrate.io/reference/how-to-guides/weights/
diff --git a/docs/build/build-on-layer-2/bridge-to-zkevm.md b/docs/build/build-on-layer-2/bridge-to-zkevm.md
new file mode 100644
index 0000000..3ba385f
--- /dev/null
+++ b/docs/build/build-on-layer-2/bridge-to-zkevm.md
@@ -0,0 +1,64 @@
+---
+sidebar_position: 3
+title: Bridge to Astar zkEVM
+sidebar_label: Bridge to zkEVM
+---
+
+import bridge1 from '/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya1.jpg'
+import bridge2 from '/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya2.jpg'
+import bridge3 from '/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya3.jpg'
+import bridge4 from '/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya4.jpg'
+import network from '/docs/build/build-on-layer-2/img/zKatana-network1.jpg'
+import network1 from '/docs/build/build-on-layer-2/img/add_zkEVM_network1.jpg'
+import network2 from '/docs/build/build-on-layer-2/img/add_zkEVM_network2.jpg'
+import walletselect from '/docs/build/build-on-layer-2/img/wallet-select.jpg'
+
+## Overview
+
+Here you will find information about how to bridge assets to the Astar zkEVM. Presently, there are two options for bridging assets to the zkEVM:
+
+1. **Ethereum L1 to Astar zkEVM** -> Bridged ETH is the native token required for testing and deploying dApps on the Astar zkEVM, so before using the network, developers need to bridge some ETH from Layer 1 to Layer 2. This bridge is accessible through the Astar Portal, and transfers can take approximately 10-30 minutes, depending on network usage.
+2. _Astar Parachain to Astar zkEVM (currently under development) -> A 3rd-party asset bridge or message network facilitating locking and minting of synthetic (wrapped) assets between Astar Substrate EVM and Astar zkEVM. See the [integrations section](/docs/build/build-on-layer-2/integrations/bridges-relays/index.md) for more information about how to use 3rd-party bridge services and compatible assets._
+
+### Transfer ETH using the Astar Portal
+
+Visit the [Astar Portal](https://portal.astar.network) and connect MetaMask.
+
+
+
+
+
+
+
+Use the network selector and switch to zKatana network, or allow MetaMask to switch to zKatana network for you.
+
+
+
+
+
+
+
+Click on the Bridge tab on the left-hand side. Ensure Sepolia is selected as Bridge source, and zKatana is selected as destination. After you have entered the amount of ETH to transfer, press the Confirm button.
+
+
+
+
+
+
+
+Sign the MetaMask transaction.
+
+:::note
+Once the transaction shows as confirmed on the MetaMask Activity tab, it will take approximately 5-10 minutes for the Astar Portal and MetaMask to update your balance on the zKatana network side.
+:::
+
+
+
+
+
+
+
+
+
+
+You should now see the bridged ETH within MetaMask for use on Astar zkEVM.
diff --git a/docs/build/build-on-layer-2/faq/_category_.json b/docs/build/build-on-layer-2/faq/_category_.json
new file mode 100644
index 0000000..3c3fb1c
--- /dev/null
+++ b/docs/build/build-on-layer-2/faq/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "FAQ",
+ "position": 9
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/faq/zkevm-eth-faq.md b/docs/build/build-on-layer-2/faq/zkevm-eth-faq.md
new file mode 100644
index 0000000..be34444
--- /dev/null
+++ b/docs/build/build-on-layer-2/faq/zkevm-eth-faq.md
@@ -0,0 +1,65 @@
+---
+sidebar_position: 2
+title: zkEVM and EVM Equivalence FAQs
+sidebar_label: EVM Equivalence
+---
+
+This document compiles some of the frequently asked questions related to the Astar zkEVM's equivalence with EVM. For more details, check out [Polygon zkEVM documentation](https://wiki.polygon.technology/docs/category/zkevm-protocol/).
+
+---
+
+### What is the difference between EVM Compatibility and EVM Equivalence?
+
+The ultimate goal is not **compatibility**. The ultimate goal is **equivalence**. **Solutions that are compatible enable most existing apps to work, but sometimes with code changes**. Additionally, compatibility may lead to the breaking of developer toolings.
+
+**zkEVM strives for EVM Equivalence because it means that most applications, tools, and infrastructure built on Ethereum can immediately port over to Astar zkEVM with limited to no changes needed**. Things are designed to work 100% on day one. This is critical because:
+
+1. **Development teams don't have to make changes to their code**, which could introduce security vulnerabilities.
+2. **No code changes are needed**. You don't need additional audits, which saves time and money.
+3. **zkEVM ultimately benefits from the security and decentralization of Ethereum**, since transactions are finalized on Ethereum.
+4. Astar zkEVM **benefits from the already vibrant and active Ethereum community**.
+5. Allows for **fast user onboarding**, since dApps built on Ethereum are already compatible.
+
+### Why is EVM Equivalence needed?
+
+Ethereum isn’t just a blockchain. It’s a rich ecosystem of smart contracts, developer tools, infrastructure, and wallets. It’s a vibrant community of developers, auditors, and users.
+
+The best way to scale Ethereum is to strive to maintain equivalence with this ecosystem. Astar zkEVM will give users and developers an almost identical experience to Ethereum L1 with significant scalability and user experience improvements.
+
+### What EVM opcodes are different on Astar zkEVM?
+
+The following EVM opcodes are different in Astar zkEVM: **SELFDESTRUCT**, **EXTCODEHASH**, **DIFFICULTY**, **BLOCKHASH**, and **NUMBER**.
+
+### What precompiled smart contract functions does Astar zkEVM support?
+
+The following precompiled contracts are supported in the zkEVM: **ecRecover** and **identity**.
+
+Other precompiled contracts have no effect on the zkEVM state tree and are treated as a `revert`, returning all gas to the previous context and setting the `success` flag to "0".
+
+### Which precompiled contracts are missing in the current zkEVM version?
+
+Astar zkEVM supports all precompiled contracts except **SHA256**, **BLAKE**, and **PAIRINGS**.
+
+### When will we get Type 2 EVM Equivalence?
+
+Currently, Astar zkEVM has Type 3 equivalence with EVM. It will reach Type 2 and full equivalence when all pre-compiled contracts are supported.
+
+### Can you explain the process of rollbacks and reverts in Astar zkEVM? Are they similar to EVM?
+
+The process of rollbacks and reverts is similar to regular EVMs. Whenever there is an error or a condition that triggers a revert, it uses the `REVERT` instruction to stop the execution and then returns an error message.
+
+Rollbacks can also occasionally happen because of an invalid zk-proof (a scenario new to Astar zkEVM), which would cause the transaction to be aborted and all state changes to be undone.
+
+### How does the Astar zkEVM handle events and logging?
+
+Astar zkEVM handles events and logging in a similar way to other EVMs, by emitting events and logging them on the blockchain for future reference.
+
+### How similar are Astar zkEVM error messages with Ethereum?
+
+Astar zkEVM has a high level of compatibility with Ethereum errors. You need to bear in mind that Astar zkEVM has more constraints than Ethereum and also uses different concepts (for example, batches instead of blocks). Therefore, it will give more types of errors with more precision (for example, the concept of gas in Astar zkEVM is more broken down).
+
+### Can Chainlink use their token (ERC677) in Astar zkEVM?
+
+You can deploy any smart contract on Astar zkEVM, just like you would on Ethereum, so you can deploy any token. If you want to send the token to Ethereum, the bridge will convert it to an ERC20 token (bi-directional bridge).
+
+The bridge also has **low-level message passing functionality** that can be used to bridge any type of value, including NFTs and other token standards.
diff --git a/docs/build/build-on-layer-2/faq/zkevm-general-faq.md b/docs/build/build-on-layer-2/faq/zkevm-general-faq.md
new file mode 100644
index 0000000..6c0f718
--- /dev/null
+++ b/docs/build/build-on-layer-2/faq/zkevm-general-faq.md
@@ -0,0 +1,93 @@
+---
+sidebar_position: 1
+title: General FAQs related to zkEVM
+sidebar_label: General FAQs
+---
+
+# General FAQ
+
+## Overview
+
+This document compiles some of the frequently asked questions related to the Astar zkEVM. For more details, check out [Polygon zkEVM documentation](https://wiki.polygon.technology/docs/category/zkevm-protocol/).
+
+
+### What is Astar zkEVM?
+
+Astar zkEVM is a layer 2 scaling solution for Ethereum that offers an EVM-equivalent smart contract environment. This means that most of the existing smart contracts, developer tools, and wallets for Ethereum also work with the Astar zkEVM.
+
+Astar zkEVM harnesses the power of Zero-Knowledge proofs to reduce transaction costs and increase throughput on L2, all while inheriting the security of Ethereum L1.
+
+### What are the main features of Astar zkEVM?
+
+- **EVM-equivalence**: Most Ethereum smart contracts, wallets, and tools work seamlessly on Astar zkEVM.
+- Inherits its **security from Ethereum.**
+- Lower cost compared to L1 and **faster finality compared to other L2 solutions** such as Optimistic Rollups
+- **Zero-Knowledge Proof-powered scalability** aiming for similar throughput to PoS.
+
+### What kind of gas fee reduction can users expect from Astar zkEVM?
+
+Compared to Ethereum Layer 1, users can expect a significant reduction in gas fees. Astar's layer 2 scaling solution batches transactions together, effectively spreading the cost of a single layer 1 transaction across multiple layer 2 transactions.
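The batching effect can be sketched as simple amortization; the batch fee and batch size below are made-up figures for illustration only:

```rust
/// The fixed L1 cost of publishing a batch is shared by every L2 transaction
/// inside it, so the per-transaction L1 cost shrinks as batches fill up.
fn l1_cost_per_l2_tx(l1_batch_fee_wei: u128, txs_in_batch: u128) -> u128 {
    l1_batch_fee_wei / txs_in_batch
}

fn main() {
    let batch_fee = 10_000_000_000_000_000u128; // 0.01 ETH to publish one batch (illustrative)
    let per_tx = l1_cost_per_l2_tx(batch_fee, 500); // 500 L2 txs in the batch
    assert_eq!(per_tx, 20_000_000_000_000); // each tx bears only 0.00002 ETH of L1 cost
    println!("per-tx L1 cost: {per_tx} wei");
}
```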
+
+### How do zk Rollups work?
+
+zk Rollups aggregate large batches of transactions and finalize them on the Ethereum network using zero-knowledge validity proofs.
+
+### What is so unique about zkEVMs?
+
+zkEVMs were thought to be years away, and not practical or competitive with other ZK L2s, because an unavoidable tradeoff seemed to loom: full EVM equivalence or high performance, but not both.
+
+However, given the proving system breakthroughs pioneered by Polygon Labs, full EVM equivalence is now possible while at the same time offering higher performance and lower costs than alternative L1s, optimistic rollups, and other kinds of zk Rollups.
+
+### How do I connect Astar zkEVM to a Metamask Wallet?
+
+To add the Astar zkEVM network to your wallet, please check out the zkEVM quickstart guide [INSERT LINK], which contains the latest RPC details and videos demonstrating useful functionality.
+
+### How does Astar zkEVM compare to other zkEVMs in terms of technology and performance? What are the technical advantages there?
+
+The best reference is Vitalik Buterin's comprehensive analysis of zkEVMs [published in his blog](https://vitalik.ca/general/2022/08/04/zkevm.html).
+
+However, the major difference between Astar zkEVM and others is the zkEVM's efficient prover and high Ethereum equivalence. Regarding the design of the prover/verification component: other projects use an arithmetic circuit approach while the Astar zkEVM zkProver uses the State Machine approach.
+
+### Is Astar zkEVM open source?
+
+Yes, [Astar zkEVM is fully open-source](https://polygon.technology/blog/polygon-zkevm-is-now-fully-open-source) and uses Polygon zkEVM solution with an AGPL v3 open-source license.
+
+### Does Astar zkEVM have a separate token?
+
+No. **ETH will be used for gas fees**. It is expected that ASTR will be used for staking and governance in Astar zkEVM in the future.
+
+It is also important to note that Astar zkEVM **natively supports Account Abstraction via ERC-4337**, which will allow users to pay fees with any token (bring your own gas).
+
+### What types of dApps can be deployed on Astar zkEVM?
+
+Any dApp that is compatible with EVM can be deployed, except for those which require a specific precompiled contract that is currently not supported by zkEVM. For more details related to supported precompiled contracts, check out the [Polygon zkEVM documentation](https://wiki.polygon.technology/docs/category/zkevm-protocol/).
+
+### Can this Layer 2 zkEVM work with other chains?
+
+**At the moment, the answer is no**. Aspirationally, the goal is to build one of many chains that allow users' assets to move from one Layer 2 to another. That said, users will not be able to use this functionality at launch; L2-to-L2 movement is included in our future roadmap.
+
+### What are some of the main use cases for Astar zkEVM?
+
+**DeFi Applications**: Because of Astar zkEVM’s high security and censorship resistance nature, it's a good fit for DeFi applications. zkRollups don’t have to wait for long periods for deposits and withdrawals; Astar zkEVM offers better capital efficiency for DeFi dApps/users.
+
+**NFT, Gamefi, and Enterprise Applications**: Low gas cost, high transaction speed, and a greater level of security coupled with Ethereum composability are attractive to blue chip NFTs, GameFi, and Enterprise applications.
+
+**Payments**: Users interested in transacting with each other in real-time within a near-instantaneous and low-fee environment will appreciate the value Astar zkEVM provides.
+
+### When Astar zkEVM publishes a proof on L1, how can someone trust that that proof is accurate and includes all the transactions it claims it does?
+
+Our zkRollup smart contract guarantees it. It is trustworthy due to data availability and the fact that the published validity proofs are quickly and easily verifiable SNARK proofs.
+
+### Does Astar zkEVM have support for both Solidity and Vyper?
+
+Yes, any language that gets compiled to EVM opcode should work with Astar zkEVM. In other words, if it can run on Ethereum, it can run on the Astar zkEVM.
+
+### What is an RPC node?
+
+**RPC (Remote Procedure Call)** is a JSON-RPC interface compatible with Ethereum. It enables the integration of Astar zkEVM with existing tools, such as Metamask, Etherscan, and Infura. It adds transactions to the pool and interacts with the state using read-only methods.
+
+Additionally, for a software application to interact with the Ethereum blockchain (by reading blockchain data and/or sending transactions to the network), it must connect to an Ethereum node. It works the same way as other nodes such as geth.
+
+### Do you support the JSON-RPC EVM query spec? What are the unsupported queries?
+
+All official queries are supported (`eth_*` endpoints). We are working on support for some extra unofficial endpoints, such as `debug_*`.
diff --git a/docs/build/build-on-layer-2/faq/zkevm-protocol-faq.md b/docs/build/build-on-layer-2/faq/zkevm-protocol-faq.md
new file mode 100644
index 0000000..d204d4d
--- /dev/null
+++ b/docs/build/build-on-layer-2/faq/zkevm-protocol-faq.md
@@ -0,0 +1,83 @@
+---
+sidebar_position: 3
+title: zkEVM Protocol FAQs
+sidebar_label: Protocol FAQs
+---
+This document compiles some of the frequently asked questions related to the Astar zkEVM protocol. For more details, check out [Polygon zkEVM documentation](https://wiki.polygon.technology/docs/category/zkevm-protocol/).
+
+---
+
+### How are transactions collected and ordered?
+
+- Transactions on the Astar zkEVM network are **created in users' wallets and signed with their private keys**.
+- Once generated and signed, the **transactions are sent to the Trusted Sequencer's node** via their JSON-RPC interface.
+- The transactions are then **stored in the pending transactions pool, where they await the Sequencer's selection**.
+- The **Trusted Sequencer reads transactions** from the pool and decides whether to discard them or order and execute them.
+- Lastly, the **Sequencer organizes the transactions into batches**, followed by the sequencing of the batches.
+
+### Are there any time or transaction intervals for a sequencer to wait before moving forward to make a Rollup batch?
+
+The sequencer always has an open batch. Transactions are added to this batch until it is full or a long timeout occurs. Batches likewise accumulate until they reach 128K of batches (or a long timeout), and then a sequencing transaction is sent to L1.
+
+From the L2 user's perspective, a new L2 block (distinct from the L2 batch) is closed and sent to the user. The user perceives transaction finality even if the L2 batch is not yet closed. **One L2 transaction is one L2 block**.
+
+### What are the stages that a transaction goes through in order to be finalized on L1?
+
+The process of validating a specific transaction within the batch typically involves three steps:
+
+1. **Trusted State:** This state is given by the trusted sequencer almost instantaneously. No L1 transactions are required.
+
+2. **Virtual State:** Transactions are on L1. These transactions and their order cannot be modified, as the state is final and anybody can compute it.
+
+3. **Verified State:** When the virtual state is verified by the smart contract, the funds can be withdrawn.
+
+### How does a Sequencer validate a specific transaction in order to generate proof?
+
+The Sequencer retrieves the transaction from the transaction pool and verifies that it is properly formatted and contains all the necessary information. The Sequencer does the following checks:
+
+- Checks that the transaction is valid by checking that the Sender has enough funds to cover the gas costs of the transaction and that the smart contract called, if any, is valid and has the correct bytecode.
+
+- Checks that the transaction is not a duplicate by checking the transaction nonce of the Sender to ensure that it is one greater than the last nonce used.
+
+- Checks that the transaction is not a double-spend by checking that the Sender's account balance has not been already spent in another transaction.
+
+Once the transaction is deemed valid, the Sequencer applies the transaction to the current state of the Astar zkEVM, updating the state of the smart contract and the account balances as necessary. Duration and cost vary depending on traffic and prevailing gas prices.
+
+### When do transactions achieve finality in Astar zkEVM?
+
+**If the user trusts the Sequencer**, transactions are considered final once the Sequencer sequences them (the Trusted State).
+
+**If the user trusts only the L1 state**, then the transaction is final the moment it reaches the **Virtual State**, i.e. once the data is available and the transaction is already on L1.
+
+**In case the user needs to withdraw funds**, they need to wait for the Prover to convert the implicit state to an explicit state. We call this last state the **Consolidated or Verified State**.
+
+### Are Sequencers and Provers in-house or external? How do you ensure that your Sequencers and Provers maintain decentralization?
+
+Astar zkEVM's **Sequencer will be centralized during early stages**. We have a roadmap to decentralize the sequencer in future releases.
+
+Likewise, the **Prover is also centralized at the beginning** but the vision is to enable a Provers market. Provers cannot do much but generate proofs. To have a decentralized system of Provers is much more critical (and difficult) than the Sequencer.
+
+### Can a zkNode serve as both Sequencer and Aggregator? If not, how is it determined what role a node can play?
+
+A zkNode can potentially serve as both a sequencer and an aggregator, depending on the specific implementation of the zero-knowledge proof protocol.
+
+In some implementations, a node may only be able to perform one function or the other. The role a node can play is determined by the specific implementation of the protocol and the requirements of the network. For example, some protocols may require a certain number of nodes to perform the role of sequencer and a certain number to perform the role of aggregator in order to ensure the security and efficiency of the network.
+
+### How exactly do the state sync components do the syncing in L2 after a transaction batch and its validity proof is mined on L1?
+
+An easy way to summarize is that for each batch, one hash named `globalExitRoot` is transferred from **L1 → L2** and another hash is transferred from **L2 → L1** named `localExitRoot`.
+
+`globalExitRoot` mainly includes all the deposits and `localExitRoot` includes all the withdrawals.
+
+### What are Forced Batches?
+
+A Forced Batch is an L2 batch included directly in an L1 transaction. The Trusted Sequencer is forced to include these batches. This is how a user guarantees that they can withdraw funds even if they are censored by the Trusted Sequencer.
+
+This property is what makes the system censorship-resistant.
+
+### What is an Emergency State, and when is it triggered?
+
+The Emergency State halts functionality such as the sequencing of batches, the verification of batches, and forced batches.
+
+It can be triggered by the owner of a smart contract or, in the case of Astar zkEVM, by a Security Council multisig. This means that the Security Council can invoke the Emergency State if the pending state timeout is reached or a threatening vulnerability occurs.
+
diff --git a/docs/build/build-on-layer-2/fee.md b/docs/build/build-on-layer-2/fee.md
new file mode 100644
index 0000000..8440f44
--- /dev/null
+++ b/docs/build/build-on-layer-2/fee.md
@@ -0,0 +1,16 @@
+---
+sidebar_position: 7
+title: Astar zkEVM Fee Calculation
+sidebar_label: Fee Calculation
+---
+
+## How do network fees on Astar zkEVM work?
+On Astar zkEVM, the gas fee is calculated by applying a fixed factor to the L1 gas fee. That price factor doesn't change often, and its value is based on the rollup's cost of publishing transactions to L1. Simply put, gas prices on L2 linearly follow gas prices on L1.
+
+$$
+L2_{gas\_fee} = L1_{gas\_fee} * Factor
+$$
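The formula above can be sketched as follows; the 30 gwei L1 price and the factor of 0.04 are illustrative values, not the network's live parameters:

```rust
/// L2 gas fee as a fixed multiple of the L1 gas fee, with the factor
/// expressed as a rational number to stay in integer arithmetic.
fn l2_gas_fee(l1_gas_fee_wei: u128, factor_num: u128, factor_den: u128) -> u128 {
    l1_gas_fee_wei * factor_num / factor_den
}

fn main() {
    let l1_fee = 30_000_000_000u128; // 30 gwei on L1 (illustrative)
    let l2_fee = l2_gas_fee(l1_fee, 4, 100); // fixed factor of 0.04 (illustrative)
    assert_eq!(l2_fee, 1_200_000_000); // 1.2 gwei on L2
    println!("L2 gas fee: {l2_fee} wei");
}
```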
+
+The L1 fee will vary depending on the amount of transactions on the L1. If the timing of your transaction is flexible, you can save costs by submitting transactions during periods of lower gas on the L1 (for example, over the weekend).
+
+Support for a congestion mechanism based on [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559) is planned for the future and will make the L2 gas fee dynamic.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/img/add_zkEVM_network1.jpg b/docs/build/build-on-layer-2/img/add_zkEVM_network1.jpg
new file mode 100644
index 0000000..18f92e9
Binary files /dev/null and b/docs/build/build-on-layer-2/img/add_zkEVM_network1.jpg differ
diff --git a/docs/build/build-on-layer-2/img/add_zkEVM_network2.jpg b/docs/build/build-on-layer-2/img/add_zkEVM_network2.jpg
new file mode 100644
index 0000000..0cc033a
Binary files /dev/null and b/docs/build/build-on-layer-2/img/add_zkEVM_network2.jpg differ
diff --git a/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya1.jpg b/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya1.jpg
new file mode 100644
index 0000000..4b0b73b
Binary files /dev/null and b/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya1.jpg differ
diff --git a/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya2.jpg b/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya2.jpg
new file mode 100644
index 0000000..07405c2
Binary files /dev/null and b/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya2.jpg differ
diff --git a/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya3.jpg b/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya3.jpg
new file mode 100644
index 0000000..f91a7aa
Binary files /dev/null and b/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya3.jpg differ
diff --git a/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya4.jpg b/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya4.jpg
new file mode 100644
index 0000000..749d82a
Binary files /dev/null and b/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya4.jpg differ
diff --git a/docs/build/build-on-layer-2/img/metamask-network.png b/docs/build/build-on-layer-2/img/metamask-network.png
new file mode 100644
index 0000000..8e1e348
Binary files /dev/null and b/docs/build/build-on-layer-2/img/metamask-network.png differ
diff --git a/docs/build/build-on-layer-2/img/metamask-sepolia-select.png b/docs/build/build-on-layer-2/img/metamask-sepolia-select.png
new file mode 100644
index 0000000..7ec74d6
Binary files /dev/null and b/docs/build/build-on-layer-2/img/metamask-sepolia-select.png differ
diff --git a/docs/build/build-on-layer-2/img/wallet-select.jpg b/docs/build/build-on-layer-2/img/wallet-select.jpg
new file mode 100644
index 0000000..8fce1c6
Binary files /dev/null and b/docs/build/build-on-layer-2/img/wallet-select.jpg differ
diff --git a/docs/build/build-on-layer-2/img/zKatana-network1.jpg b/docs/build/build-on-layer-2/img/zKatana-network1.jpg
new file mode 100644
index 0000000..2deb382
Binary files /dev/null and b/docs/build/build-on-layer-2/img/zKatana-network1.jpg differ
diff --git a/docs/build/build-on-layer-2/index.md b/docs/build/build-on-layer-2/index.md
index 4bb4fba..09f18d4 100644
--- a/docs/build/build-on-layer-2/index.md
+++ b/docs/build/build-on-layer-2/index.md
@@ -1,7 +1,33 @@
---
-title: Build on Astar zkEVM
+title: Build on Layer 2
---
import Figure from '/src/components/figure'
-# Why build on Astar zkEVM?
\ No newline at end of file
+# Build on Astar zkEVM, a Layer 2 scaling solution for Ethereum
+
+
+
+## What is Astar zkEVM?
+
+Astar zkEVM is an Ethereum Layer-2 scaling solution leveraging Polygon's Chain Development Kit and cutting-edge zero-knowledge cryptography to enable off-chain transaction execution, with finality and security guarantees provided by Ethereum. In coordination with our key partners, Astar zkEVM is well positioned to take advantage of the extensive developer base and well-established toolset of the Ethereum ecosystem, and boasts the following key features:
+
+- **Higher TPS than Ethereum or Astar Substrate EVM** - Leveraging zk rollup architecture, transactions are parallelized on Layer 2 and submitted on-chain to Layer 1 in batches, supercharging web3 games and DeFi applications requiring high performance.
+- **Lower Transaction Fees compared to Ethereum** - Due to the transaction batching, as explained above.
+- **Full EVM-equivalence** - Not only EVM compatibility; Equivalence. Smart contracts that work on Ethereum also work on Astar zkEVM. See the [smart contract section](/docs/build/build-on-layer-2/smart-contracts/index.md) for more information.
+- **Native Account Abstraction** - The Astar zkEVM provides native features designed to revolutionize the end-user experience, and make it seamless. See the [Account Abstraction section](/docs/build/build-on-layer-2/integrations/account-abstraction/) to learn more about how to refine the end-user experience.
+- **Recognized Partners** - Established names and brands that developers trust power the Astar zkEVM. See the [integrations section](/docs/build/build-on-layer-2/integrations/index.md) for more information about 3rd party service providers.
+- **Interoperability and Exposure** - With Astar zkEVM, we are supporting interoperability between the Ethereum and Polkadot ecosystems, uniting communities, and empowering web3 accessibility through a common Multichain vision.
+- **Established Tools and Libraries** - Compatible with the tools web3 developers already know how to use, such as Remix, Hardhat, and Open Zeppelin.
+
+## Section Overview
+
+The following sections walk through the process of setting up a development environment and introduce common tools and partner services useful for powering highly scalable dApps and seamless user onboarding experiences on the Astar zkEVM.
+
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
diff --git a/docs/build/build-on-layer-2/integrations/account-abstraction/_category_.json b/docs/build/build-on-layer-2/integrations/account-abstraction/_category_.json
new file mode 100644
index 0000000..ea891e3
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/account-abstraction/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Account Abstraction",
+ "position": 1
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/integrations/account-abstraction/index.md b/docs/build/build-on-layer-2/integrations/account-abstraction/index.md
new file mode 100644
index 0000000..ae9dd13
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/account-abstraction/index.md
@@ -0,0 +1,6 @@
+# Account Abstraction
+:::info
+Coming soon...
+:::
+## Overview
+Here you will find all the information you need to refine the end-user experience and allow for seamless web2-like interactions with dApps and accounts on the Astar zkEVM.
diff --git a/docs/build/build-on-layer-2/integrations/bridges-relays/_category_.json b/docs/build/build-on-layer-2/integrations/bridges-relays/_category_.json
new file mode 100644
index 0000000..56eb5e4
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/bridges-relays/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Bridges & Relays",
+ "position": 2
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/integrations/bridges-relays/astar-bridge.md b/docs/build/build-on-layer-2/integrations/bridges-relays/astar-bridge.md
new file mode 100644
index 0000000..71881ef
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/bridges-relays/astar-bridge.md
@@ -0,0 +1,63 @@
+---
+title: Astar Bridge
+---
+
+import bridge1 from '/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya1.jpg'
+import bridge2 from '/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya2.jpg'
+import bridge3 from '/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya3.jpg'
+import bridge4 from '/docs/build/build-on-layer-2/img/astar-bridge-zKatana-Shibuya4.jpg'
+import network from '/docs/build/build-on-layer-2/img/zKatana-network1.jpg'
+import network1 from '/docs/build/build-on-layer-2/img/add_zkEVM_network1.jpg'
+import network2 from '/docs/build/build-on-layer-2/img/add_zkEVM_network2.jpg'
+import walletselect from '/docs/build/build-on-layer-2/img/wallet-select.jpg'
+
+## Overview
+
+Astar Bridge is a canonical Layer 1 → Layer 2 bridge connecting the Sepolia and Astar zKatana testnets, and it is trustless at the protocol level. This means that even if the Layer 2 infrastructure powering Astar zkEVM is compromised or goes offline, asset and data integrity are still guaranteed on Layer 1 by Ethereum, and anyone can spin up a zkNode (Prover) to recompute the transaction data. This is currently the safest option for bridging assets to the zkEVM, as it does not add any counterparty risk beyond the protocol itself, unlike 3rd party asset bridges.
+
+## How to bridge ETH to the zkEVM using Astar Portal
+
+1. Visit the [Astar Portal](https://portal.astar.network) and connect MetaMask.
+
+
+
+
+
+
+
+2. Use the network selector to switch to the zKatana network, or allow MetaMask to switch to it for you.
+
+
+
+
+
+
+
+3. Click on the Bridge tab on the left-hand side. Ensure Sepolia is selected as the bridge source and zKatana as the destination. After you have entered the amount of ETH to transfer, press the Confirm button.
+
+
+
+
+
+
+
+4. Sign the MetaMask transaction.
+
+
+
+
+
+
+:::note
+Once the transaction shows as confirmed on the MetaMask Activity tab, it will take approximately 5-10 minutes for the Astar Portal and MetaMask to update your balance on the zKatana network side.
+:::
+
+
+
+
+
+
+ You should now see the bridged ETH within MetaMask for use on Astar zkEVM. Visit the [smart contracts section](/docs/build/build-on-layer-2/smart-contracts/index.md) to start building!
+
+
+
diff --git a/docs/build/build-on-layer-2/integrations/bridges-relays/index.md b/docs/build/build-on-layer-2/integrations/bridges-relays/index.md
new file mode 100644
index 0000000..7d4e5da
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/bridges-relays/index.md
@@ -0,0 +1,4 @@
+# Asset Bridges
+
+## Overview
+Here you will find all the information required to bridge assets to the Astar zkEVM, and set up simple cross-chain contracts using our supported partner solutions.
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/integrations/index.md b/docs/build/build-on-layer-2/integrations/index.md
new file mode 100644
index 0000000..4bd359a
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/index.md
@@ -0,0 +1,10 @@
+# Integrations
+
+Here you will find common services and tools available to developers building dApps on the Astar zkEVM, including sample configurations and guides covering many important elements of our infrastructure:
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
diff --git a/docs/build/build-on-layer-2/integrations/indexers/_category_.json b/docs/build/build-on-layer-2/integrations/indexers/_category_.json
new file mode 100644
index 0000000..02c0669
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/indexers/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Indexers",
+ "position": 3
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/integrations/indexers/index.md b/docs/build/build-on-layer-2/integrations/indexers/index.md
new file mode 100644
index 0000000..46a9ff1
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/indexers/index.md
@@ -0,0 +1,5 @@
+# Indexers
+
+## Overview
+
+Here you will find all the information required to use indexers on Astar zkEVM.
diff --git a/docs/build/build-on-layer-2/integrations/indexers/subquery.md b/docs/build/build-on-layer-2/integrations/indexers/subquery.md
new file mode 100644
index 0000000..8090340
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/indexers/subquery.md
@@ -0,0 +1,15 @@
+# SubQuery
+
+## What is SubQuery?
+
+SubQuery is an open-source, universal blockchain data indexer for developers that provides fast, flexible, reliable, and decentralised APIs to power leading multi-chain apps. Our goal is to save developers time and money by eliminating the need to build their own indexing solution, so they can focus fully on developing their applications.
+
+SubQuery's superior indexing capabilities support Astar smart contracts out of the box (running, in reality, inside a Docker container!). Starter projects are provided, allowing developers to get up and running and index blockchain data within minutes.
+
+Another one of SubQuery's competitive advantages is the ability to aggregate data not only within a chain but across blockchains all within a single project. This allows the creation of feature-rich dashboard analytics or multi-chain block scanners.
+
+Other advantages include superior performance with multiple RPC endpoint configurations, multi-worker capabilities and a configurable caching architecture. To find out more, visit our documentation.
+
+## SubQuery for Astar zkEVM
+
+Please visit the quickstart guide [here](https://academy.subquery.network/quickstart/quickstart_chains/astar-zkatana.html) or reference their existing documentation made for Astar [here.](/docs/build/build-on-layer-1/integrations/indexers/subquery.md)
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/integrations/node-providers/_category_.json b/docs/build/build-on-layer-2/integrations/node-providers/_category_.json
new file mode 100644
index 0000000..a4e5aa6
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/node-providers/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Node Providers",
+ "position": 4
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/integrations/node-providers/startale-labs.md b/docs/build/build-on-layer-2/integrations/node-providers/startale-labs.md
new file mode 100644
index 0000000..a039174
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/node-providers/startale-labs.md
@@ -0,0 +1,33 @@
+---
+title: Startale Labs
+---
+
+# Startale Web3 Service
+
+## Introduction
+
+[Startale Labs](https://sws.startale.com) is a web3 tech company developing multi-chain applications and infrastructure in collaboration with Astar Foundation and large enterprises. The company also conducts R&D and incubation based on industry experience and connections developed in collaboration with Astar Network.
+
+Startale Web3 Service (SWS) provides an Astar EVM/zkEVM Node RPC Endpoint, a powerful tool designed to enhance the Web3 experience that is now available for developers to utilize.
+
+
+### About Our Service
+
+Startale provides a standardized Blockchain API service that encompasses all facets of web3 development infrastructure. With respect to Astar EVM/zkEVM, users can create endpoints that grant access to the majority of RPC methods necessary for dApp development and interaction with the blockchain.
+
+Users of Startale Web3 Service can use the API for free within certain constraints. When more advanced features are required, paid subscription plans are available, or you can contact us to arrange a customized plan that better suits your requirements.
+
+## Public Endpoint
+
+Startale provides a Public Endpoint for Astar zkEVM. Users can utilize the API for free within certain limitations.
+
+`https://rpc.startale.com/zkatana`
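+
+As a quick sanity check, you can query the public endpoint with a raw JSON-RPC request. The snippet below is a minimal sketch (the helper names are illustrative, and Node 18+ is assumed for the global `fetch`) that fetches the latest block number:
+
+```javascript
+// Minimal JSON-RPC client for the public zKatana endpoint above.
+const ENDPOINT = "https://rpc.startale.com/zkatana";
+
+// Build a JSON-RPC 2.0 request object for the given method.
+function buildRpcRequest(method, params = []) {
+  return { jsonrpc: "2.0", id: 1, method, params };
+}
+
+// JSON-RPC quantities are returned as 0x-prefixed hex strings.
+function parseHexQuantity(hex) {
+  return parseInt(hex, 16);
+}
+
+async function latestBlockNumber() {
+  const res = await fetch(ENDPOINT, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify(buildRpcRequest("eth_blockNumber")),
+  });
+  const { result } = await res.json();
+  return parseHexQuantity(result);
+}
+```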
+
+## Getting started
+
+Follow these steps to get started with Startale Web3 Service on Astar:
+
+1. Visit the [Landing Page](https://sws.startale.com).
+2. Fill out the [Google form](https://forms.gle/7bfjxj1qpEW8gFxk7) and provide the required information.
+3. Receive your API Key by email.
+4. Check out the [technical docs](https://docs.startale.com/docs).
diff --git a/docs/build/build-on-layer-2/integrations/oracles/_category_.json b/docs/build/build-on-layer-2/integrations/oracles/_category_.json
new file mode 100644
index 0000000..7cca917
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/oracles/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Oracles",
+ "position": 5
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/integrations/oracles/acurast.md b/docs/build/build-on-layer-2/integrations/oracles/acurast.md
new file mode 100644
index 0000000..da47243
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/oracles/acurast.md
@@ -0,0 +1,156 @@
+---
+sidebar_position: 1
+---
+
+# Acurast
+
+[Acurast]: https://acurast.com/
+
+## Overview
+
+[Acurast](https://acurast.com/) is a platform and protocol designed to enable Web3 projects and enterprises to realize the full potential of Web3 by interconnecting worlds like Web2, Web3, AI, and IoT through Acurast's Universal Interoperability.
+
+## Using Acurast
+
+Through Acurast, developers can arbitrarily fetch data from public or permissioned APIs for the "Oracle" use case, such as price feeds for DeFi platforms, through a decentralized execution layer of off-chain workers. These [Processors](https://docs.acurast.com/acurast-processors), hosted by individuals, provide the resources of their Trusted Execution Environment, which can be used to run computation that yields a verifiable output directly on chain. Developers can use the [Acurast Console](https://console.acurast.com/) to create new requests and get access to these interoperability resources.
+
+Acurast supports Astar's **WASM** and **EVM** environments. Example contract addresses can be found below:
+
+### Astar Destination
+
+WASM Smart Contract: b2o6ENagNWAxQT9f9yHFxfVMSpJA7kK6ouMhNN6veKXi3jw
+
+### Shiden Destination
+
+WASM Smart Contract: 0xDA7a001b254CD22e46d3eAB04d937489c93174C3
+
+## Obtain Data with Acurast on WASM and EVM
+
+### How to Get Started
+
+1. Deploy one of the example contracts to WASM or EVM
+1. Define your script detailing where to fetch data, computation etc.
+1. Create a Job on the [Acurast Console](https://console.acurast.com/)
+1. Processors will fulfill verifiable outputs in your defined interval to your contract
+
+### WASM Example
+
+The following example shows a simple WASM smart contract implemented with [ink!](https://use.ink/).
+
+Keep in mind that you can do much more with Acurast and get access to all interoperability modules besides these examples.
+
+```rust
+#![cfg_attr(not(feature = "std"), no_std)]
+
+use ink;
+
+#[ink::contract]
+mod receiver {
+ #[ink(storage)]
+ pub struct Receiver {
+ price: u128,
+ }
+
+ impl Receiver {
+ #[ink(constructor)]
+ pub fn default() -> Self {
+ Self {
+ price: Default::default(),
+ }
+ }
+
+ #[ink(message)]
+ pub fn fulfill(&mut self, price: u128) {
+ self.price = price;
+ }
+
+ #[ink(message)]
+ pub fn get_price(&self) -> u128 {
+ self.price
+ }
+ }
+}
+
+```
+
+### EVM Example
+
+```solidity
+pragma solidity 0.8.10;
+
+/**
+ * @title Simple price feed contract
+ */
+contract PriceFeed {
+ // Account authorized to update the prices
+ address public provider = 0xF7498512502f90aA1ff299b93927417461EC7Bd5;
+
+ // Callable by other contracts
+ uint128 public price;
+
+ /**
+ * Provide the latest price
+ */
+ function fulfill(uint128 new_price) external {
+ require(msg.sender == provider, "NOT_PROVIDER");
+ price = new_price;
+ }
+}
+```
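+
+Once deployed, the public `price` variable can be read without any ABI library by issuing a raw `eth_call`. The sketch below is a hypothetical helper: the `0xa035b1fe` selector is assumed to be the first 4 bytes of `keccak256("price()")`, so verify it before relying on it.
+
+```javascript
+// Assumed 4-byte selector for the auto-generated price() getter.
+const PRICE_SELECTOR = "0xa035b1fe";
+
+// Build an eth_call JSON-RPC request against the latest block.
+function buildEthCall(to, data) {
+  return {
+    jsonrpc: "2.0",
+    id: 1,
+    method: "eth_call",
+    params: [{ to, data }, "latest"],
+  };
+}
+
+async function readPrice(rpcUrl, contractAddress) {
+  const res = await fetch(rpcUrl, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify(buildEthCall(contractAddress, PRICE_SELECTOR)),
+  });
+  const { result } = await res.json(); // a single 32-byte hex word
+  return BigInt(result);
+}
+```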
+
+### Script
+
+This example script shows how a price feed is fetched from Binance and pushed to a WASM smart contract. You can view and test your script on the Acurast Console.
+
+```js
+const callIndex = "0x4606"; // the call index for the 'call' extrinsic.
+const destination = "b2o6ENagNWAxQT9f9yHFxfVMSpJA7kK6ouMhNN6veKXi3jw"; // contract address that will receive the 'fulfill' call.
+_STD_.chains.substrate.signer.setSigner("SECP256K1"); // the type of signer used to sign the extrinsic call
+httpGET(
+ "https://api.binance.com/api/v3/ticker/price?symbol=AAVEBUSD",
+ {},
+ (response, _certificate) => {
+ const price = JSON.parse(response)["price"] * 10 ** 18;
+ const payload = _STD_.chains.substrate.codec.encodeUnsignedNumber(
+ price,
+ 128
+ );
+ _STD_.chains.substrate.contract.fulfill(
+ "https://rpc.astar.network",
+ callIndex,
+ destination,
+ payload,
+ {
+ refTime: "3951114240",
+ proofSize: "125952",
+ },
+ (opHash) => {
+ print("Succeeded: " + opHash);
+ },
+ (err) => {
+ print("Failed fulfill: " + err);
+ }
+ );
+ },
+ (err) => {
+ print("Failed get price: " + err);
+ }
+);
+```
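+
+For reference, the `encodeUnsignedNumber(price, 128)` step above can be approximated in plain JavaScript. The helper below is a hypothetical re-implementation assuming SCALE-style little-endian encoding of a `u128` (Acurast's actual codec may differ):
+
+```javascript
+// Encode a non-negative integer as a 16-byte (128-bit) little-endian hex string.
+function encodeU128LE(value) {
+  let v = BigInt(value);
+  const bytes = new Uint8Array(16); // 128 bits = 16 bytes
+  for (let i = 0; i < 16; i++) {
+    bytes[i] = Number(v & 0xffn); // take the lowest byte
+    v >>= 8n;                     // shift to the next byte
+  }
+  return "0x" + Buffer.from(bytes).toString("hex");
+}
+```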
+
+### Job Specification
+
+1. Go to the [Acurast Console](https://console.acurast.com/) and log in with your [Talisman Wallet](https://www.talisman.xyz/wallet) or your [PolkadotJS Extension](https://polkadot.js.org/extension/).
+1. Go to "Create Job" and select your destination (the ecosystem you're building in).
+1. Select an existing template, adapt it or write your own code that fits your needs. Test your code with "Test Code".
+1. Select your own Processor or use public ones.
+1. Define your execution schedule with parameters such as start and end time, interval, etc.
+1. Specify your usage parameters.
+1. Specify your additional parameters such as the reward.
+1. Publish your Job and wait for your first fulfillment.
+
+Or check out [How to get started with the Acurast Console](https://console.acurast.com/developers/introduction#get-started) to register your Job.
+
+## Full Documentation
+
+You can find Acurast's official documentation [here](https://docs.acurast.com/).
diff --git a/docs/build/build-on-layer-2/integrations/oracles/index.md b/docs/build/build-on-layer-2/integrations/oracles/index.md
new file mode 100644
index 0000000..a1fca1f
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/oracles/index.md
@@ -0,0 +1,12 @@
+# Oracles
+
+## Overview
+
+Blockchain oracles are third-party services or agents that provide smart contracts with external information. They serve as bridges between blockchains and the external world. Because blockchains cannot access external data (outside of their network) due to their secure and deterministic nature, oracles are used to fetch, verify, and relay real-world data to smart contracts in a way that's trustworthy.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/integrations/oracles/pyth.md b/docs/build/build-on-layer-2/integrations/oracles/pyth.md
new file mode 100644
index 0000000..e1b6705
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/oracles/pyth.md
@@ -0,0 +1,41 @@
+---
+sidebar_position: 1
+---
+
+# Pyth Network
+
+[Pyth Network]: https://pyth.network/
+
+## Overview
+
+The [Pyth Network] is the largest first-party financial oracle network, delivering real-time market data to over 40 blockchains securely and transparently.
+
+The network comprises some of the world’s largest exchanges, market makers, and financial services providers publishing their proprietary price data on-chain for aggregation and distribution to smart contract applications.
+
+## Using Pyth Network
+
+The Pyth Network introduced an innovative low-latency [pull oracle design](https://docs.pyth.network/documentation/pythnet-price-feeds/on-demand), where users are empowered to “pull” price updates on-chain when needed, enabling everyone in that blockchain environment to access that data point.
+
+Developers on Astar zkEVM have permissionless access to any of Pyth’s 350+ price feeds for equities, ETFs, commodities, foreign exchange pairs, and cryptocurrencies.
+
+This [package](https://github.com/pyth-network/pyth-crosschain/tree/main/target_chains/ethereum/sdk/solidity) provides utilities for consuming prices from the Pyth Network Oracle using Solidity. Also, it contains the [Pyth Interface ABI](https://github.com/pyth-network/pyth-crosschain/blob/main/target_chains/ethereum/sdk/solidity/abis/IPyth.json) that you can use in your libraries to communicate with the Pyth contract.
+
+It is strongly recommended to follow the consumer [best practices](https://docs.pyth.network/documentation/pythnet-price-feeds/best-practices) when consuming Pyth data.
+
+For more information and details, please refer to the official documentation [here](https://docs.pyth.network/documentation).
+
+You can find more details on the various functions available to you when interacting with the Pyth smart contract in the [API Reference section](https://docs.pyth.network/evm).
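+
+One detail worth noting when consuming Pyth data: on-chain prices are returned as a fixed-point integer together with an exponent, and the human-readable value is `price * 10^expo`. A small illustrative helper (the function name is ours, not part of the Pyth SDK):
+
+```javascript
+// Convert a Pyth (price, expo) pair into a floating-point number.
+// Example: price = 123456789 with expo = -8 represents 1.23456789.
+function scalePythPrice(price, expo) {
+  return Number(price) * Math.pow(10, expo);
+}
+```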
+
+## Pyth on Astar zkEVM
+
+The Pyth Network smart contract is available at the following address: [0xA2aa501b19aff244D90cc15a4Cf739D2725B5729](https://zkatana.blockscout.com/address/0xA2aa501b19aff244D90cc15a4Cf739D2725B5729).
+
+You may also refer to this [page](https://docs.pyth.network/documentation/pythnet-price-feeds/evm) to find the various Pyth contracts.
+
+Additionally, you'll be able to find all the Pyth Price Feed IDs [here](https://pyth.network/developers/price-feed-ids). Be sure to select the correct environment as mainnet and testnet price feeds IDs differ.
+
+## Other
+
+The Pyth Network provides additional tools to developers like this [TradingView Integration](https://docs.pyth.network/guides/how-to-create-tradingview-charts) or the [Gelato Web3 Functions](https://docs.pyth.network/guides/how-to-schedule-price-updates-with-gelato).
+
+If you have any questions or issues, you can contact us on the following platforms: [Telegram](https://t.me/Pyth_Network), [Discord](https://discord.gg/invite/PythNetwork), [Website](https://pyth.network/contact).
diff --git a/docs/build/build-on-layer-2/integrations/wallets/_category_.json b/docs/build/build-on-layer-2/integrations/wallets/_category_.json
new file mode 100644
index 0000000..08d5404
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/wallets/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Wallets",
+ "position": 6
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/integrations/wallets/index.md b/docs/build/build-on-layer-2/integrations/wallets/index.md
new file mode 100644
index 0000000..4aa0164
--- /dev/null
+++ b/docs/build/build-on-layer-2/integrations/wallets/index.md
@@ -0,0 +1,5 @@
+# Wallets
+
+## Overview
+
+The majority of EVM wallets are compatible with Astar zkEVM.
diff --git a/docs/build/build-on-layer-2/quickstart.md b/docs/build/build-on-layer-2/quickstart.md
new file mode 100644
index 0000000..544b143
--- /dev/null
+++ b/docs/build/build-on-layer-2/quickstart.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 1
+title: Astar zkEVM Quickstart Guide
+sidebar_label: Quickstart
+---
+import Figure from '/src/components/figure'
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+
+Astar zkEVM is a zero-knowledge scaling solution for Ethereum that offers an **EVM-equivalent environment** on which existing EVM smart contracts, developer tools, and wallets can work seamlessly. Astar zkEVM harnesses the power of zero-knowledge proofs to reduce transaction costs and increase throughput, while inheriting the security of Ethereum.
+
+Solidity developers are right at home on Astar zkEVM. Simply switch to the zkEVM RPC, and start building!
+
+:::info Reminder
+No special tools or wallets are required to build or interact with Astar zkEVM.
+:::
+
+Developers can deploy existing contracts from other EVM chains to the zkEVM, and users are able to deposit assets from Ethereum to transact on the zkEVM in batches, which are ultimately finalized through novel use of zero-knowledge proofs. Native account abstraction means developers can craft user interfaces that are more intuitive and web2-like, that eliminate complexity and drastically simplify the onboarding process.
+
+## Connecting to zkEVM
+
+:::info Reminder
+**Astar zKatana testnet and its related documentation are under active development.**
+
+All feedback is welcome and highly appreciated, so please report errors or inconsistencies to a team member or as an issue on the [Astar Docs Github repo](https://github.com/AstarNetwork/astar-docs/issues), thank you.
+:::
+
+To add **Astar zkEVM** or any of its testnet networks to your wallet manually, enter the following details:
+
+
+
+| RPC URL | ChainID | Block Explorer URL | Currency |
+| ------------------------------- | ---------------- | ---------------- | ----- |
+| `https://rpc.startale.com/astar-zkevm` | `3776` | [https://astar-zkevm.explorer.startale.com/](https://astar-zkevm.explorer.startale.com/) | **ETH** |
+| `https://rpc.astar-zkevm.gelato.digital` | `3776` | | **ETH** |
+| `https://astar-zkevm-rpc.dwellir.com` | `3776` | | **ETH** |
+
+
+
+| RPC URL | ChainID | Block Explorer URL | Currency |
+| ------------------------------- | ---------------- | ---------------- | ----- |
+| `https://rpc.startale.com/zkyoto` | `6038361` | [https://zkyoto.explorer.startale.com/](https://zkyoto.explorer.startale.com/) | **ETH** |
+| `https://rpc.zkyoto.gelato.digital` | `6038361` | | **ETH** |
+
+
+
+
+To add the network to MetaMask you can either use the data above, or find a link to add the network at the bottom of the respective block explorer page.
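+
+Alternatively, a dApp can prompt MetaMask to add the network programmatically with the standard `wallet_addEthereumChain` request. A sketch using the mainnet values from the table above:
+
+```javascript
+// Network parameters for Astar zkEVM mainnet (chain ID 3776 = 0xec0).
+const astarZkEvmParams = {
+  chainId: "0x" + (3776).toString(16),
+  chainName: "Astar zkEVM",
+  nativeCurrency: { name: "Ether", symbol: "ETH", decimals: 18 },
+  rpcUrls: ["https://rpc.startale.com/astar-zkevm"],
+  blockExplorerUrls: ["https://astar-zkevm.explorer.startale.com/"],
+};
+
+// Ask the injected wallet (e.g. MetaMask) to add the network.
+async function addAstarZkEvm() {
+  if (typeof window === "undefined" || !window.ethereum) {
+    throw new Error("No injected wallet found");
+  }
+  await window.ethereum.request({
+    method: "wallet_addEthereumChain",
+    params: [astarZkEvmParams],
+  });
+}
+```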
+
+## Bridging Assets
+
+The next step is to [bridge assets](/docs/build/build-on-layer-2/bridge-to-zkevm.md) from Ethereum → Astar zkEVM.
+
+:::info
+Astar's canonical [zkEVM Bridge](https://portal.astar.network) is trustless at the protocol level and, unlike 3rd party bridge services, does not introduce any additional counterparty risk.
+:::
+
+## Deploying Smart Contracts
+
+The development experience on zkEVM is seamless and identical to the Ethereum Virtual Machine. Developers building on zkEVM can use their existing code and tools to deploy on zkEVM, and dApp users will benefit from higher transaction throughput and lower fees. Read more about deploying smart contracts on the zkEVM [here.](/docs/build/build-on-layer-2/smart-contracts/index.md)
+
+## Astar zkEVM Support for Developers
+
+Developers requiring support can open an issue on [Ethereum StackExchange](https://ethereum.stackexchange.com/) and tag it with `Astar` (preferred) or join the [Astar Discord server](https://discord.gg/astarnetwork).
+
+
+Ethereum StackExchange
+
+1. Join the **Ethereum StackExchange** [here](https://ethereum.stackexchange.com/).
+
+2. Ask a new question.
+3. Describe your issue in detail.
+4. Add the `Astar` tag at the end so the Astar team is notified.
+
+
+
+Astar Discord server
+
+1. Join the **Astar Discord** server [here](https://discord.gg/astarnetwork).
+
+2. Accept the invite.
+3. Take the **Developer** role under **#roles**.
+4. Navigate to the **Builder/#zkevm-learning** channel.
+
+
diff --git a/docs/build/build-on-layer-2/smart-contracts/_category_.json b/docs/build/build-on-layer-2/smart-contracts/_category_.json
new file mode 100644
index 0000000..9a362c6
--- /dev/null
+++ b/docs/build/build-on-layer-2/smart-contracts/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Smart Contracts",
+ "position": 5
+}
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/smart-contracts/figures/end-product-nft-code.png b/docs/build/build-on-layer-2/smart-contracts/figures/end-product-nft-code.png
new file mode 100644
index 0000000..0edd6b3
Binary files /dev/null and b/docs/build/build-on-layer-2/smart-contracts/figures/end-product-nft-code.png differ
diff --git a/docs/build/build-on-layer-2/smart-contracts/figures/flatten-code-remix.png b/docs/build/build-on-layer-2/smart-contracts/figures/flatten-code-remix.png
new file mode 100644
index 0000000..ffde411
Binary files /dev/null and b/docs/build/build-on-layer-2/smart-contracts/figures/flatten-code-remix.png differ
diff --git a/docs/build/build-on-layer-2/smart-contracts/figures/hardhat-init.png b/docs/build/build-on-layer-2/smart-contracts/figures/hardhat-init.png
new file mode 100644
index 0000000..3dfed44
Binary files /dev/null and b/docs/build/build-on-layer-2/smart-contracts/figures/hardhat-init.png differ
diff --git a/docs/build/build-on-layer-2/smart-contracts/figures/input-object.png b/docs/build/build-on-layer-2/smart-contracts/figures/input-object.png
new file mode 100644
index 0000000..93f9020
Binary files /dev/null and b/docs/build/build-on-layer-2/smart-contracts/figures/input-object.png differ
diff --git a/docs/build/build-on-layer-2/smart-contracts/figures/json.png b/docs/build/build-on-layer-2/smart-contracts/figures/json.png
new file mode 100644
index 0000000..f87f66d
Binary files /dev/null and b/docs/build/build-on-layer-2/smart-contracts/figures/json.png differ
diff --git a/docs/build/build-on-layer-2/smart-contracts/figures/proj-created-outcome.png b/docs/build/build-on-layer-2/smart-contracts/figures/proj-created-outcome.png
new file mode 100644
index 0000000..05e1fbb
Binary files /dev/null and b/docs/build/build-on-layer-2/smart-contracts/figures/proj-created-outcome.png differ
diff --git a/docs/build/build-on-layer-2/smart-contracts/index.md b/docs/build/build-on-layer-2/smart-contracts/index.md
new file mode 100644
index 0000000..c5fc811
--- /dev/null
+++ b/docs/build/build-on-layer-2/smart-contracts/index.md
@@ -0,0 +1,8 @@
+# Smart Contracts
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
\ No newline at end of file
diff --git a/docs/build/build-on-layer-2/smart-contracts/using-hardhat.md b/docs/build/build-on-layer-2/smart-contracts/using-hardhat.md
new file mode 100644
index 0000000..9d7ff46
--- /dev/null
+++ b/docs/build/build-on-layer-2/smart-contracts/using-hardhat.md
@@ -0,0 +1,177 @@
+---
+sidebar_position: 3
+title: Deploy Smart Contracts Using Hardhat
+sidebar_label: Deploy Using Hardhat
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+Hardhat is a popular smart contract development framework. In this tutorial, we will be using Hardhat to deploy a simple Counter smart contract to the Astar zkEVM Testnet.
+We will explore the basics of creating a Hardhat project with a sample contract and a script to deploy it.
+
+For full instructions on how to use Hardhat, please refer to the [official Hardhat documentation](https://hardhat.org/getting-started/).
+
+## Create New Project
+Start by creating an npm project: go to an empty folder, run `npm init`, and follow its instructions. You can use another package manager, like yarn, but Hardhat recommends npm 7 or later, as it makes installing Hardhat plugins simpler.
+
+
+## Hardhat Smart Contract
+
+To create the sample project, run `npx hardhat init` in your project folder:
+
+![Hardhat init screen](figures/hardhat-init.png)
+
+- **Press** `Enter` to choose a JavaScript, TypeScript, or empty project
+- **Press** `Enter` to set the project root
+- **Press** `Enter` again to accept the addition of `.gitignore`
+- **Press** `Enter` to install `hardhat @nomicfoundation/hardhat-toolbox`
+
+## Create deployer account
+- Create the `.env` file in your project root folder and add the following line:
+
+```bash
+ACCOUNT_PRIVATE_KEY='my private key'
+```
+
+- Populate the `.env` file with your private key, which you can export from MetaMask. See the section below for how to do this.
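+
+With the key in place, your `hardhat.config.js` can pick it up and define a zKatana network entry. The sketch below uses the public Startale RPC URL; a real config would also `require("@nomicfoundation/hardhat-toolbox")` and load `.env` via the `dotenv` package, both omitted here to keep the sketch dependency-free:
+
+```javascript
+// hardhat.config.js (sketch)
+const ACCOUNT_PRIVATE_KEY = process.env.ACCOUNT_PRIVATE_KEY;
+
+const config = {
+  solidity: "0.8.19",
+  networks: {
+    zKatana: {
+      // Public zKatana RPC endpoint.
+      url: "https://rpc.startale.com/zkatana",
+      // Only pass the key if it is actually set.
+      accounts: ACCOUNT_PRIVATE_KEY ? [ACCOUNT_PRIVATE_KEY] : [],
+    },
+  },
+};
+
+module.exports = config;
+```
+
+Run `npx hardhat compile` once to make sure the config parses before attempting a deployment.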
+
+