diff --git a/website/pages/ar/about.mdx b/website/pages/ar/about.mdx index 7ac49dc47560..7660b0dfd54b 100644 --- a/website/pages/ar/about.mdx +++ b/website/pages/ar/about.mdx @@ -2,46 +2,66 @@ title: حول The Graph --- -هذه الصفحة ستشرح The Graph وكيف يمكنك أن تبدأ. - ## What is The Graph? -The Graph is a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to query data that is difficult to query directly. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. + +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -المشاريع ذات العقود الذكية المعقدة مثل [ Uniswap ](https://uniswap.org/) و NFTs مثل [ Bored Ape Yacht Club ](https://boredapeyachtclub.com/) تقوم بتخزين البيانات على Ethereum blockchain ، مما يجعل من الصعب قراءة أي شيء بشكل مباشر عدا البيانات الأساسية من blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. 
This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +### How The Graph Functions -**إن فهرسة بيانات الـ blockchain أمر صعب.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## كيف يعمل The Graph +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph يفهرس بيانات الإيثيريوم بناء على أوصاف الـ subgraph ، والمعروفة باسم subgraph manifest. حيث أن وصف الـ subgraph يحدد العقود الذكية ذات الأهمية لـ subgraph ، ويحدد الأحداث في تلك العقود التي يجب الانتباه إليها ، وكيفية عمل map لبيانات الحدث إلى البيانات التي سيخزنها The Graph في قاعدة البيانات الخاصة به. +- When creating a subgraph, you need to write a subgraph manifest. -بمجرد كتابة `subgraph manifest` ، يمكنك استخدام Graph CLI لتخزين التعريف في IPFS وإخبار المفهرس ببدء فهرسة البيانات لذلك الـ subgraph.
+- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) تدفق البيانات يتبع الخطوات التالية: -1. A dapp adds data to Ethereum through a transaction on a smart contract. -2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء. -3. يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك. -4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum. -5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. A dapp adds data to Ethereum through a transaction on a smart contract. +2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء. +3. يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك. +4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum. +5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## الخطوات التالية -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. 
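+
+For illustration, a playground query against an NFT subgraph might look like the sketch below. The `tokens` collection and its fields are assumptions for a hypothetical schema, not the fields of any specific published subgraph:
+
+```graphql
+{
+  # first 5 tokens held by a placeholder owner address (illustrative schema)
+  tokens(first: 5, where: { owner: "0x0000000000000000000000000000000000000000" }) {
+    id
+    tokenURI
+    owner
+  }
+}
+```
+
+The response is JSON shaped like the query itself, which is what makes GraphQL results easy to drop into a dapp UI.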
diff --git a/website/pages/ar/arbitrum/arbitrum-faq.mdx b/website/pages/ar/arbitrum/arbitrum-faq.mdx index 98346d82a41d..2cf8402a7718 100644 --- a/website/pages/ar/arbitrum/arbitrum-faq.mdx +++ b/website/pages/ar/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: الأسئلة الشائعة حول Arbitrum Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. -## لماذا يقوم The Graph بتطبيق حل L2؟ +## Why did The Graph implement an L2 Solution? -By scaling The Graph on L2, network participants can expect: +By scaling The Graph on L2, network participants can now benefit from: - Upwards of 26x savings on gas fees @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can expect: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -41,27 +41,21 @@ Once you have GRT on Arbitrum, you can add it to your billing balance. ## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? -There is no immediate action required, however, network participants are encouraged to begin moving to Arbitrum to take advantage of the benefits of L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Core developer teams are working to create L2 transfer tools that will make it significantly easier to move delegation, curation, and subgraphs to Arbitrum. Network participants can expect L2 transfer tools to be available by summer of 2023. +All indexing rewards are now entirely on Arbitrum. -اعتبارًا من 10 أبريل 2023 ، تم سك 5٪ من جميع مكافآت الفهرسة على Arbitrum. مع زيادة المشاركة في الشبكة ، وموافقة المجلس عليها ، ستتحول مكافآت الفهرسة تدريجياً من Ethereum إلى Arbitrum ، وستنتقل في النهاية بالكامل إلى Arbitrum. - -## إذا كنت أرغب في المشاركة في اشبكة L2 ، فماذا أفعل؟ - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## هل توجد أي مخاطر مرتبطة بتوسيع الشبكة إلى L2؟ +## Were there any risks associated with scaling the network to L2? 
All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## هل ستستمر ال subgraphs الموجودة على Ethereum في العمل؟ +## Are existing subgraphs on Ethereum working? -نعم ، ستعمل عقود شبكة The Graph بالتوازي على كل من Ethereum و Arbitrum حتى الانتقال بشكل كامل إلى Arbitrum في وقت لاحق. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## هل سيكون لدى GRT عقد ذكي جديد يتم نشره على Arbitrum؟ +## Does GRT have a new smart contract deployed on Arbitrum? Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. diff --git a/website/pages/ar/arbitrum/l2-transfer-tools-faq.mdx b/website/pages/ar/arbitrum/l2-transfer-tools-faq.mdx index 22b4d7efd398..250f550bcacd 100644 --- a/website/pages/ar/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/pages/ar/arbitrum/l2-transfer-tools-faq.mdx @@ -46,7 +46,7 @@ If you have the L1 transaction hash (which you can find by looking at the recent 2. انتظر 20 دقيقة للتأكيد -3. قم بتأكيد نقل الـ subgraph على Arbitrum \\ \* +3. قم بتأكيد نقل الـ subgraph على Arbitrum \ \* 4. قم بإنهاء نشر الـ subgraph على Arbitrum @@ -200,11 +200,11 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans 1. ابدأ نقل الإشارة على شبكة Ethereum mainnet -2. حدد عنوان L2 للمنسق \\ \* +2. حدد عنوان L2 للمنسق \ \* 3. انتظر 20 دقيقة للتأكيد -\\ \* إذا لزم الأمر -أنت تستخدم عنوان عقد. +\ \* إذا لزم الأمر -أنت تستخدم عنوان عقد. ### كيف سأعرف ما إذا كان الرسم البياني الفرعي الذي قمت بعمل إشارة تنسيق عليه قد انتقل إلى L2؟ @@ -250,7 +250,7 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans ### كم من الوقت لدي لتأكيد تحويل حصتي إلى Arbitrum؟ -\\ _ \\ _ \\ \* يجب تأكيد معاملتك لإتمام تحويل الحصة على Arbitrum. يجب إكمال هذه الخطوة في غضون 7 أيام وإلا فقد يتم فقدان الحصة. +\ _ \ _ \ \* يجب تأكيد معاملتك لإتمام تحويل الحصة على Arbitrum. يجب إكمال هذه الخطوة في غضون 7 أيام وإلا فقد يتم فقدان الحصة. ### ماذا لو كان لدي تخصيصات مفتوحة؟ @@ -366,13 +366,13 @@ Yes! The process is a bit different, because vesting contracts can't forward the 3. امنح البروتوكول حق الوصول إلى عقد الاستحقاق (سيسمح لعقدك بالتفاعل مع أداة التحويل) -4. حدد عنوان المستفيد على L2 \\ \* وابدأ في تحويل الرصيد على Ethereum mainnet +4. حدد عنوان المستفيد على L2 \ \* وابدأ في تحويل الرصيد على Ethereum mainnet 5. انتظر 20 دقيقة للتأكيد 6. قم بتأكيد تحويل الرصيد على L2 -\\ \* إذا لزم الأمر -أنت تستخدم عنوان عقد. +\ \* إذا لزم الأمر -أنت تستخدم عنوان عقد. \*\*\*\*You must confirm your transaction to complete the balance transfer on Arbitrum. This step must be completed within 7 days or the balance could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. 
If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). diff --git a/website/pages/ar/billing.mdx b/website/pages/ar/billing.mdx index 68ee9ca693bd..42aa104673bb 100644 --- a/website/pages/ar/billing.mdx +++ b/website/pages/ar/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. انقر على زر "توصيل المحفظة" في الزاوية اليمنى العليا من الصفحة. ستتم إعادة توجيهك إلى صفحة اختيار المحفظة. حدد محفظتك وانقر على "توصيل". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). 
- - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/ar/chain-integration-overview.mdx b/website/pages/ar/chain-integration-overview.mdx index 501143bfb88d..b8e41513fa9d 100644 --- a/website/pages/ar/chain-integration-overview.mdx +++ b/website/pages/ar/chain-integration-overview.mdx @@ -6,12 +6,12 @@ title: نظرة عامة حول عملية التكامل مع الشبكة ## المرحلة الأولى: التكامل التقني -- تعمل الفرق على تكامل نقطة الغراف وفايرهوز بالنسبة للسلاسل الغير مبنية على آلة الإيثيريوم الإفتراضية. إليك الطريقة(https://thegraph. com/docs/en/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - تستهل الفرق عملية التكامل مع البروتوكول من خلال إنشاء موضوع في المنتدى هنا(https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (الفئة الفرعية "مصادر البيانات الجديدة" تحت قسم "الحوكمة واقتراحات تحسين الغراف"). استخدام قالب المنتدى الافتراضي إلزامي. ## المرحلة الثانية: التحقق من صحة التكامل -- تتعاون الفرق مع المطورين الأساسيين، ومؤسسة الغراف، ومشغلي واجهات المستخدم الرسومية وبوابات الشبكة مثل سبغراف استوديو(https://thegraph.com/studio/) لضمان عملية تكامل سلسة. يتضمن ذلك توفير بنية تحتية للواجهة الخلفية، مثل إجراء الإستدعاء عن بعد -للترميز الكائني لجافاسكريبت- الخاص بالسلسلة المتكاملة أو نقاط نهاية فايرهوز. 
الفرق الراغبة في تجنب الإستضافة الذاتية مثل هذه البنية التحتية يمكنهم الإستفادة من مشغلي النقاط (المفهرسين) في مجتمع الغراف للقيام بذلك، والذي يمكن للمؤسسة المساعدة من خلاله. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - مفهرسو الغراف يختبرون التكامل على شبكة إختبار الغراف. - يقوم المطورون الأساسيون والمفهرسون بمراقبة استقرار، وأداء، وحتمية البيانات. @@ -38,12 +38,12 @@ Ready to shape the future of The Graph Network? [Start your proposal](https://gi هذا سيؤثر فقط على دعم البروتوكول لمكافآت الفهرسة على الغرافات الفرعية المدعومة من سبستريمز. تنفيذ الفايرهوز الجديد سيحتاج إلى الفحص على شبكة الاختبار، وفقًا للمنهجية الموضحة للمرحلة الثانية في هذا المقترح لتحسين الغراف. وعلى نحو مماثل، وعلى افتراض أن التنفيذ فعال وموثوق به، سيتتطالب إنشاء طلب سحب على [مصفوفة دعم الميزات] (https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) ("مصادر بيانات سبستريمز" ميزة للغراف الفرعي)، بالإضافة إلى مقترح جديد لتحسين الغراف، لدعم البروتوكول لمكافآت الفهرسة. يمكن لأي شخص إنشاء طلب السحب ومقترح تحسين الغراف؛ وسوف تساعد المؤسسة في الحصول على موافقة المجلس. -### 3. كم من الوقت ستستغرق هذه العملية؟ +### 3. How much time will the process of reaching full protocol support take? يُتوقع أن يستغرق الوصول إلى الشبكة الرئيسية عدة أسابيع، وذلك يعتمد على وقت تطوير التكامل، وما إذا كانت هناك حاجة إلى بحوث إضافية، واختبارات وإصلاحات الأخطاء، وكذلك توقيت عملية الحوكمة التي تتطلب ملاحظات المجتمع كما هو الحال دائمًا. -يعتمد دعم البروتوكول لمكافآت الفهرسة على قدرة أصحاب الحصص في المضي قدماً في عمليات الفحص وجمع الملاحظات ومعالجة المساهمات في قاعدة الكود الأساسية، إذا كان ذلك قابلاً للتطبيق. هذا مرتبط مباشرة بنضج عملية التكامل ومدى استجابة فريق التكامل (والذي قد يكون أو قد لا يكون نفس الفريق المسؤول عن تنفيذ إجراء الإستدعاء عن بعد\\الفايرهوز). المؤسسة هنا لمساعدة الدعم خلال العملية بأكملها. +يعتمد دعم البروتوكول لمكافآت الفهرسة على قدرة أصحاب الحصص في المضي قدماً في عمليات الفحص وجمع الملاحظات ومعالجة المساهمات في قاعدة الكود الأساسية، إذا كان ذلك قابلاً للتطبيق. هذا مرتبط مباشرة بنضج عملية التكامل ومدى استجابة فريق التكامل (والذي قد يكون أو قد لا يكون نفس الفريق المسؤول عن تنفيذ إجراء الإستدعاء عن بعد\الفايرهوز). المؤسسة هنا لمساعدة الدعم خلال العملية بأكملها. ### 4. كيف سيتم التعامل مع الأولويات؟ -كما في السؤال الثالث، سيتوقف ذلك على الجهوزية بشكل عام وعلى قدرات أصحاب الحصص المشاركين. على سبيل المثال، قد تستغرق سلسلة جديدة مع تطبيق فايرهوز جديد تمامًا وقتاً أطول من عمليات التكامل التي تم فحصها بالفعل أو التي قطعت شوطاً أطول في عملية الحوكمة. وينطبق هذا بشكل خاص على السلاسل المدعومة مسبقاً على الخدمة المستضافة (https://thegraph.com/hosted-service) أو تلك التي تعتمد على تقنيات تم اختبارها بالفعل. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. 
diff --git a/website/pages/ar/cookbook/arweave.mdx b/website/pages/ar/cookbook/arweave.mdx index e2b25f673dfc..06fe4729bf4b 100644 --- a/website/pages/ar/cookbook/arweave.mdx +++ b/website/pages/ar/cookbook/arweave.mdx @@ -155,7 +155,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash -graph deploy --studio --access-token +graph deploy --access-token ``` ## Querying an Arweave Subgraph diff --git a/website/pages/ar/cookbook/avoid-eth-calls.mdx b/website/pages/ar/cookbook/avoid-eth-calls.mdx index 446b0e8ecd17..8897ecdbfdc7 100644 --- a/website/pages/ar/cookbook/avoid-eth-calls.mdx +++ b/website/pages/ar/cookbook/avoid-eth-calls.mdx @@ -99,4 +99,18 @@ Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0 ## Conclusion -We can significantly improve indexing performance by minimizing or eliminating `eth_calls` in our subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ar/cookbook/cosmos.mdx b/website/pages/ar/cookbook/cosmos.mdx index 49a2e8c52602..15fbf0537bca 100644 --- a/website/pages/ar/cookbook/cosmos.mdx +++ b/website/pages/ar/cookbook/cosmos.mdx @@ -203,7 +203,7 @@ $ graph build Visit the Subgraph Studio to create a new subgraph. ```bash -graph deploy --studio subgraph-name +graph deploy subgraph-name ``` **Local Graph Node (based on default configuration):** diff --git a/website/pages/ar/cookbook/derivedfrom.mdx b/website/pages/ar/cookbook/derivedfrom.mdx index 69dd48047744..09ba62abde3f 100644 --- a/website/pages/ar/cookbook/derivedfrom.mdx +++ b/website/pages/ar/cookbook/derivedfrom.mdx @@ -69,6 +69,20 @@ This will not only make our subgraph more efficient, but it will also unlock thr ## Conclusion -Adopting the `@derivedFrom` directive in subgraphs effectively handles dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. -To learn more detailed strategies to avoid large arrays, read this blog from Kevin Jones: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ar/cookbook/enums.mdx b/website/pages/ar/cookbook/enums.mdx index a10970c1539f..9508aa864b6c 100644 --- a/website/pages/ar/cookbook/enums.mdx +++ b/website/pages/ar/cookbook/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## مصادر إضافية For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). diff --git a/website/pages/ar/cookbook/grafting-hotfix.mdx b/website/pages/ar/cookbook/grafting-hotfix.mdx index 4be0a0b07790..b7699bf2bc85 100644 --- a/website/pages/ar/cookbook/grafting-hotfix.mdx +++ b/website/pages/ar/cookbook/grafting-hotfix.mdx @@ -1,12 +1,12 @@ --- -Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment --- ## TLDR Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. -### Overview +### نظره عامة This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. @@ -164,7 +164,7 @@ Grafting is an effective strategy for deploying hotfixes in subgraph development However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. -## Additional Resources +## مصادر إضافية - **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. @@ -173,14 +173,14 @@ By incorporating grafting into your subgraph development workflow, you can enhan ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ar/cookbook/grafting.mdx b/website/pages/ar/cookbook/grafting.mdx index 548091ac5b7d..08c347c50a63 100644 --- a/website/pages/ar/cookbook/grafting.mdx +++ b/website/pages/ar/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [تطعيم(Grafting)](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
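+
+A quick way to sanity-check the graft is to query the new deployment for entities that were created before the graft block; if grafting worked, they are served from the copied base data without re-indexing. A minimal sketch, assuming the tutorial's `Withdrawal` entity (swap in your own entity and field names):
+
+```graphql
+{
+  # pre-graft entities should already be queryable on the new deployment
+  withdrawals(first: 5, orderBy: id) {
+    id
+  }
+}
+```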
## مصادر إضافية -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/ar/cookbook/immutable-entities-bytes-as-ids.mdx b/website/pages/ar/cookbook/immutable-entities-bytes-as-ids.mdx index f38c33385604..541212617f9f 100644 --- a/website/pages/ar/cookbook/immutable-entities-bytes-as-ids.mdx +++ b/website/pages/ar/cookbook/immutable-entities-bytes-as-ids.mdx @@ -174,3 +174,17 @@ Query Response: Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ar/cookbook/near.mdx b/website/pages/ar/cookbook/near.mdx index 68089d21c7c4..b2f9eaf75feb 100644 --- a/website/pages/ar/cookbook/near.mdx +++ b/website/pages/ar/cookbook/near.mdx @@ -194,8 +194,8 @@ The node configuration will depend on where the subgraph is being deployed. ### Subgraph Studio ```sh -graph auth --studio -graph deploy --studio +graph auth +graph deploy ``` ### Local Graph Node (based on default configuration) diff --git a/website/pages/ar/cookbook/pruning.mdx b/website/pages/ar/cookbook/pruning.mdx index f22a2899f1de..d86bf50edf42 100644 --- a/website/pages/ar/cookbook/pruning.mdx +++ b/website/pages/ar/cookbook/pruning.mdx @@ -39,3 +39,17 @@ dataSources: ## Conclusion Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ar/cookbook/subgraph-uncrashable.mdx b/website/pages/ar/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/ar/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/ar/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/ar/cookbook/timeseries.mdx b/website/pages/ar/cookbook/timeseries.mdx index 88ee70005a6e..a6402c800725 100644 --- a/website/pages/ar/cookbook/timeseries.mdx +++ b/website/pages/ar/cookbook/timeseries.mdx @@ -6,7 +6,7 @@ title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggr Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. -## Overview +## نظره عامة Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. @@ -181,14 +181,14 @@ By adopting this pattern, developers can build more efficient and scalable subgr ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ar/cookbook/transfer-to-the-graph.mdx b/website/pages/ar/cookbook/transfer-to-the-graph.mdx index 287cd7d81b4b..5c0446fa7fda 100644 --- a/website/pages/ar/cookbook/transfer-to-the-graph.mdx +++ b/website/pages/ar/cookbook/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. 
[Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -48,7 +48,7 @@ graph init --product subgraph-studio In The Graph CLI, use the auth command seen in Subgraph Studio: ```sh -graph auth --studio +graph auth ``` ## 2. Deploy Your Subgraph to Studio @@ -58,7 +58,7 @@ If you have your source code, you can easily deploy it to Studio. If you don't h In The Graph CLI, run the following command: ```sh -graph deploy --studio --ipfs-hash +graph deploy --ipfs-hash ``` @@ -74,7 +74,7 @@ graph deploy --studio --ipfs-hash You can start [querying](/querying/querying-the-graph/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. -#### Example +#### مثال [CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### مصادر إضافية - To quickly create and publish a new subgraph, check out the [Quick Start](/quick-start/). - To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). diff --git a/website/pages/ar/deploying/deploy-using-subgraph-studio.mdx b/website/pages/ar/deploying/deploy-using-subgraph-studio.mdx index 1e6e22de7282..3e357875b406 100644 --- a/website/pages/ar/deploying/deploy-using-subgraph-studio.mdx +++ b/website/pages/ar/deploying/deploy-using-subgraph-studio.mdx @@ -12,9 +12,9 @@ In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: - View a list of subgraphs you've created - Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- إنشاء وإدارة مفاتيح API الخاصة بك لـ subgraphs محددة - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create a new subgraph +- Create your subgraph - Deploy your subgraph using The Graph CLI - Test your subgraph in the playground environment - Integrate your subgraph in staging using the development query URL @@ -27,21 +27,19 @@ Before deploying, you must install The Graph CLI. You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use The Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
-**Install with yarn:** +### Install with yarn ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +### Install with npm ```bash npm install -g @graphprotocol/graph-cli ``` -## Create Your Subgraph - -Before deploying your subgraph you need to create an account in [Subgraph Studio](https://thegraph.com/studio/). +## البدء 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. @@ -49,30 +47,30 @@ Before deploying your subgraph you need to create an account in [Subgraph Studio 3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. -> Important: You need an API key to query subgraphs. +> Important: You need an API key to query subgraphs ### How to Create a Subgraph in Subgraph Studio -> For additional written detail, review the [Quick-Start](/quick-start/). +> For additional written detail, review the [Quick Start](/quick-start/). -### Subgraph Compatibility with The Graph Network +### توافق الـ Subgraph مع شبكة The Graph In order to be supported by Indexers on The Graph Network, subgraphs must: - Index a [supported network](/developing/supported-networks) -- Must not use any of the following features: +- يجب ألا تستخدم أيًا من الميزات التالية: - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting + - أخطاء غير فادحة + - تطعيم(Grafting) ## Initialize Your Subgraph Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash -graph init --studio +graph init ``` You can find the `` value on your subgraph details page in Subgraph Studio, see image below: @@ -83,24 +81,24 @@ After running `graph init`, you will be asked to input the contract address, net ## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to login into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. Then, use the following command to authenticate from the CLI: ```bash -graph auth --studio +graph auth ``` ## Deploying a Subgraph Once you are ready, you can deploy your subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. Use the following CLI command to deploy your subgraph: ```bash -graph deploy --studio +graph deploy ``` After running this command, the CLI will ask for a version label. @@ -126,11 +124,11 @@ If you want to update your subgraph, you can do the following: - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). - This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. 
You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an on-chain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. > Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/network/curating/). -## Automatic Archiving of Subgraph Versions +## الأرشفة التلقائية لإصدارات الـ Subgraph Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. diff --git a/website/pages/ar/developing/creating-a-subgraph/advanced.mdx b/website/pages/ar/developing/creating-a-subgraph/advanced.mdx new file mode 100644 index 000000000000..04984ebb31a6 --- /dev/null +++ b/website/pages/ar/developing/creating-a-subgraph/advanced.mdx @@ -0,0 +1,555 @@ +--- +title: Advanced Subgraph Features +--- + +## نظره عامة + +Add and implement advanced subgraph features to enhance your subgraph's build. + +Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: + +| Feature | Name | | ------------------------------------------------------ | ---------------- | | [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | + +For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - fullTextSearch + - nonFatalErrors +dataSources: ... +``` + +> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. + +## Timeseries and Aggregations + +Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, etc. + +This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the Timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. + +### Example Schema + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal!
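+  # each Data row is one raw, timestamped data point saved from a regular trigger handler
+  # Stats (below) rolls these rows up into hourly and daily sums of `price`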
+} + +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +### Defining Timeseries and Aggregations + +Timeseries entities are defined with `@entity(timeseries: true)` in schema.graphql. Every timeseries entity must have a unique ID of the int8 type, a timestamp of the Timestamp type, and include data that will be used for calculation by aggregation entities. These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the Aggregation entities. + +Aggregation entities are defined with `@aggregation` in schema.graphql. Every aggregation entity defines the source from which it will gather data (which must be a Timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. + +#### Available Aggregation Intervals + +- `hour`: sets the timeseries period every hour, on the hour. +- `day`: sets the timeseries period every day, starting and ending at 00:00. + +#### Available Aggregation Functions + +- `sum`: Total of all values. +- `count`: Number of values. +- `min`: Minimum value. +- `max`: Maximum value. +- `first`: First value in the period. +- `last`: Last value in the period. + +#### Example Aggregations Query + +```graphql +{ + stats(interval: "hour", where: { timestamp_gt: 1704085200 }) { + id + timestamp + sum + } +} +``` + +Note: + +To use Timeseries and Aggregations, a subgraph must have a spec version ≥1.1.0. Note that this feature might undergo significant changes that could affect backward compatibility. + +[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. + +## أخطاء غير فادحة + +Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. + +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. + +Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - nonFatalErrors + ... +``` + +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. 
It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: + +```graphql +foos(first: 100, subgraphError: allow) { + id +} + +_meta { + hasIndexingErrors +} +``` + +If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: + +```graphql +"data": { + "foos": [ + { + "id": "0xdead" + } + ], + "_meta": { + "hasIndexingErrors": true + } +}, +"errors": [ + { + "message": "indexing_error" + } +] +``` + +## IPFS/Arweave File Data Sources + +File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. + +> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. + +### نظره عامة + +Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found. + +This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources. + +> This replaces the existing `ipfs.cat` API + +### Upgrade guide + +#### Update `graph-ts` and `graph-cli` + +File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1 + +#### Add a new entity type which will be updated when files are found + +File data sources cannot access or update chain-based entities, but must update file specific entities. + +This may mean splitting out fields from existing entities into separate entities, linked together. + +Original combined entity: + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + externalURL: String! + ipfsURI: String! + image: String! + name: String! + description: String! + type: String! + updatedAtTimestamp: BigInt + owner: User! +} +``` + +New, split entity: + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + ipfsURI: TokenMetadata + updatedAtTimestamp: BigInt + owner: String! +} + +type TokenMetadata @entity { + id: ID! + image: String! + externalURL: String! + name: String! + description: String! +} +``` + +If the relationship is 1:1 between the parent entity and the resulting file data source entity, the simplest pattern is to link the parent entity to a resulting file entity by using the IPFS CID as the lookup. Get in touch on Discord if you are having difficulty modelling your new file-based entities! + +> You can use [nested filters](/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities. + +#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave` + +This is the data source which will be spawned when a file of interest is identified. 
+ +```yaml +templates: + - name: TokenMetadata + kind: file/ipfs + mapping: + apiVersion: 0.0.7 + language: wasm/assemblyscript + file: ./src/mapping.ts + handler: handleMetadata + entities: + - TokenMetadata + abis: + - name: Token + file: ./abis/Token.json +``` + +> Currently `abis` are required, though it is not possible to call contracts from within file data sources + +The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details. + +#### Create a new handler to process files + +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). + +The CID of the file as a readable string can be accessed via the `dataSource` as follows: + +```typescript +const cid = dataSource.stringParam() +``` + +Example handler: + +```typescript +import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' +import { TokenMetadata } from '../generated/schema' + +export function handleMetadata(content: Bytes): void { + let tokenMetadata = new TokenMetadata(dataSource.stringParam()) + const value = json.fromBytes(content).toObject() + if (value) { + const image = value.get('image') + const name = value.get('name') + const description = value.get('description') + const externalURL = value.get('external_url') + + if (name && image && description && externalURL) { + tokenMetadata.name = name.toString() + tokenMetadata.image = image.toString() + tokenMetadata.externalURL = externalURL.toString() + tokenMetadata.description = description.toString() + } + + tokenMetadata.save() + } +} +``` + +#### Spawn file data sources when required + +You can now create file data sources during execution of chain-based handlers: + +- Import the template from the auto-generated `templates` +- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave + +For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). + +For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing). + +Example: + +```typescript +import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' + +const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' +//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. 
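+// The handleTransfer handler below appends '/<tokenId>.json' to this directory CID and spawns a TokenMetadata file data source for each newly seen token.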
+ +export function handleTransfer(event: TransferEvent): void { + let token = Token.load(event.params.tokenId.toString()) + if (!token) { + token = new Token(event.params.tokenId.toString()) + token.tokenID = event.params.tokenId + + token.tokenURI = '/' + event.params.tokenId.toString() + '.json' + const tokenIpfsHash = ipfshash + token.tokenURI + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" + + token.ipfsURI = tokenIpfsHash + + TokenMetadataTemplate.create(tokenIpfsHash) + } + + token.updatedAtTimestamp = event.block.timestamp + token.owner = event.params.to.toHexString() + token.save() +} +``` + +This will create a new file data source, which will poll Graph Node's configured IPFS or Arweave endpoint, retrying if it is not found. When the file is found, the file data source handler will be executed. + +This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. + +> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file + +Congratulations, you are using file data sources! + +#### Deploying your subgraphs + +You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. + +#### Limitations + +File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: + +- Entities created by File Data Sources are immutable, and cannot be updated +- File Data Source handlers cannot access entities from other file data sources +- Entities associated with File Data Sources cannot be accessed by chain-based handlers + +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! + +Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. + +#### Best practices + +If you are linking NFT metadata to corresponding tokens, use the metadata's IPFS hash to reference a Metadata entity from the Token entity. Save the Metadata entity using the IPFS hash as an ID. + +You can use [DataSource context](/developing/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. + +If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity. + +> We are working to improve the above recommendation, so queries only return the "most recent" version + +#### Known issues + +File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI. + +Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file. 
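+
+As a sketch of the `DataSourceContext` best practice above, the following shows one way to pass extra information to a file data source handler. The `spawnMetadata` helper, the `tokenId` key, and the use of `createWithContext` on the generated template are illustrative assumptions rather than part of the example subgraph.
+
+```typescript
+import { Bytes, DataSourceContext, dataSource, log } from '@graphprotocol/graph-ts'
+import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'
+
+// Spawning side: attach extra information to the file data source's context.
+export function spawnMetadata(cid: string, tokenId: string): void {
+  let context = new DataSourceContext()
+  context.setString('tokenId', tokenId)
+  // Assumes the generated file template exposes createWithContext alongside create.
+  TokenMetadataTemplate.createWithContext(cid, context)
+}
+
+// Handler side: read the context back next to the file contents and CID.
+export function handleMetadataWithContext(content: Bytes): void {
+  let tokenId = dataSource.context().getString('tokenId')
+  let cid = dataSource.stringParam()
+  log.info('Metadata file {} belongs to token {}', [cid, tokenId])
+}
+```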
+ +#### Examples + +[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) + +#### المراجع + +[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) + +## Indexed Argument Filters / Topic Filters + +> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` + +Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. + +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. + +- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. + +### How Topic Filters Work + +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. + +- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. + +```solidity +// SPDX-License-Identifier: MIT +pragma solidity ^0.8.0; + +contract Token { + // Event declaration with indexed parameters for addresses + event Transfer(address indexed from, address indexed to, uint256 value); + + // Function to simulate transferring tokens + function transfer(address to, uint256 value) public { + // Emitting the Transfer event with from, to, and value + emit Transfer(msg.sender, to, value); + } +} +``` + +In this example: + +- The `Transfer` event is used to log transactions of tokens between addresses. +- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. +- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. + +#### Configuration in Subgraphs + +Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: + +```yaml +eventHandlers: + - event: SomeEvent(indexed uint256, indexed address, indexed uint256) + handler: handleSomeEvent + topic1: ['0xValue1', '0xValue2'] + topic2: ['0xAddress1', '0xAddress2'] + topic3: ['0xValue3'] +``` + +In this setup: + +- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. +- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic. + +#### Filter Logic + +- Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic. +- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler. 
+ +#### Example 1: Tracking Direct Transfers from Address A to Address B + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleDirectedTransfer + topic1: ['0xAddressA'] # Sender Address + topic2: ['0xAddressB'] # Receiver Address +``` + +In this configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. +- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. + +#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleTransferToOrFrom + topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address + topic2: ['0xAddressB', '0xAddressC'] # Receiver Address +``` + +In this configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. +- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. + +## Declared eth_call + +> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. + +Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. + +This feature does the following: + +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Allows faster data fetching, resulting in quicker query responses and a better user experience. +- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. + +### Key Concepts + +- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. +- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously. +- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel). + +#### Scenario without Declarative `eth_calls` + +Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. + +Traditionally, these calls might be made sequentially: + +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds + +Total time taken = 3 + 2 + 4 = 9 seconds + +#### Scenario with Declarative `eth_calls` + +With this feature, you can declare these calls to be executed in parallel: + +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds + +Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. + +Total time taken = max (3, 2, 4) = 4 seconds + +#### How it Works + +1. 
Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. + +#### Example Configuration in Subgraph Manifest + +Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. + +`Subgraph.yaml` using `event.address`: + +```yaml +eventHandlers: +event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24) +handler: handleSwap +calls: + global0X128: Pool[event.address].feeGrowthGlobal0X128() + global1X128: Pool[event.address].feeGrowthGlobal1X128() +``` + +Details for the example above: + +- `global0X128` is the declared `eth_call`. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` +- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. + +`Subgraph.yaml` using `event.params` + +```yaml +calls: + - ERC20DecimalsToken0: ERC20[event.params.token0].decimals() +``` + +### Grafting على Subgraphs موجودة + +> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). + +When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. + +A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: + +```yaml +description: ... +graft: + base: Qm... # Subgraph ID of base subgraph + block: 7345624 # Block number +``` + +When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. + +Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. + +The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: + +- يضيف أو يزيل أنواع الكيانات +- يزيل الصفات من أنواع الكيانات +- يضيف صفات nullable لأنواع الكيانات +- يحول صفات non-nullable إلى صفات nullable +- يضيف قيما إلى enums +- يضيف أو يزيل الواجهات +- يغير للكيانات التي يتم تنفيذ الواجهة لها + +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. diff --git a/website/pages/ar/developing/creating-a-subgraph/assemblyscript-mappings.mdx b/website/pages/ar/developing/creating-a-subgraph/assemblyscript-mappings.mdx new file mode 100644 index 000000000000..2518d7620204 --- /dev/null +++ b/website/pages/ar/developing/creating-a-subgraph/assemblyscript-mappings.mdx @@ -0,0 +1,113 @@ +--- +title: Writing AssemblyScript Mappings +--- + +## نظره عامة + +The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. + +## كتابة الـ Mappings + +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. + +In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: + +```javascript +import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' +import { Gravatar } from '../generated/schema' + +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let id = event.params.id + let gravatar = Gravatar.load(id) + if (gravatar == null) { + gravatar = new Gravatar(id) + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. + +The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`. + +### الـ IDs الموصى بها لإنشاء كيانات جديدة + +It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities. 
+ +- `transfer.id = event.transaction.hash` + +- `let id = event.transaction.hash.concatI32(event.logIndex.toI32())` + +- For entities that store aggregated data, for e.g, daily trade volumes, the `id` usually contains the day number. Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like + +```typescript +let dayID = event.block.timestamp.toI32() / 86400 +let id = Bytes.fromI32(dayID) +``` + +- Convert constant addresses to `Bytes`. + +`const id = Bytes.fromHexString('0xdead...beef')` + +There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`. + +### Handling of entities with identical IDs + +When creating and saving a new entity, if an entity with the same ID already exists, the properties of the new entity are always preferred during the merge process. This means that the existing entity will be updated with the values from the new entity. + +If a null value is intentionally set for a field in the new entity with the same ID, the existing entity will be updated with the null value. + +If no value is set for a field in the new entity with the same ID, the field will result in null as well. + +## توليد الكود + +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. + +This is done with + +```sh +graph codegen [--output-dir ] [] +``` + +but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: + +```sh +# Yarn +yarn codegen + +# NPM +npm run codegen +``` + +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. + +```javascript +import { + // The contract class: + Gravity, + // The events classes: + NewGravatar, + UpdatedGravatar, +} from '../generated/Gravity/Gravity' +``` + +In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with + +```javascript +'import { Gravatar } from '../generated/schema +``` + +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. + +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. 
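+
+As a brief illustration of the "identical IDs" behaviour described earlier on this page, the sketch below uses a hypothetical `Profile` entity with two optional `String` fields; it is not part of the example subgraph:
+
+```typescript
+import { Bytes } from '@graphprotocol/graph-ts'
+// Hypothetical generated class for: type Profile @entity { id: Bytes! name: String bio: String }
+import { Profile } from '../generated/schema'
+
+export function saveTwiceWithSameId(id: Bytes): void {
+  let first = new Profile(id)
+  first.name = 'first write'
+  first.bio = 'set only on the first write'
+  first.save()
+
+  // Saving a second entity object with the same id replaces the stored version:
+  // `name` takes the new value, and `bio` (left unset here) ends up null.
+  let second = new Profile(id)
+  second.name = 'second write'
+  second.save()
+}
+```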
diff --git a/website/pages/ar/developing/creating-a-subgraph/install-the-cli.mdx b/website/pages/ar/developing/creating-a-subgraph/install-the-cli.mdx new file mode 100644 index 000000000000..b18e9aa8f7fb --- /dev/null +++ b/website/pages/ar/developing/creating-a-subgraph/install-the-cli.mdx @@ -0,0 +1,119 @@ +--- +title: قم بتثبيت Graph CLI +--- + +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/network/curating/). + +## نظره عامة + +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/creating-a-subgraph/subgraph-manifest/) and compiles the [mappings](/creating-a-subgraph/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. + +## Getting Started + +### قم بتثبيت Graph CLI + +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run one of the following commands: + +#### Using [npm](https://www.npmjs.com/) + +```bash +npm install -g @graphprotocol/graph-cli@latest +``` + +#### Using [yarn](https://yarnpkg.com/) + +```bash +yarn global add @graphprotocol/graph-cli +``` + +The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. + +## إنشاء الـ Subgraph + +### من عقد موجود + +The following command creates a subgraph that indexes all events of an existing contract: + +```sh +graph init \ + --product subgraph-studio + --from-contract \ + [--network ] \ + [--abi ] \ + [] +``` + +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. + +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. + +### من مثال Subgraph + +The following command initializes a new project from an example subgraph: + +```sh +graph init --from-example=example-subgraph +``` + +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. + +### Add New `dataSources` to an Existing Subgraph + +`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. 
A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+
+Recent versions of the Graph CLI support adding new `dataSources` to an existing subgraph through the `graph add` command:
+
+```sh
+graph add <address>
[] + +Options: + + --abi Path to the contract ABI (default: download from Etherscan) + --contract-name Name of the contract (default: Contract) + --merge-entities Whether to merge entities with the same name (default: false) + --network-file Networks config file path (default: "./networks.json") +``` + +#### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. + +### Getting The ABIs + +يجب أن تتطابق ملف (ملفات) ABI مع العقد (العقود) الخاصة بك. هناك عدة طرق للحصول على ملفات ABI: + +- إذا كنت تقوم ببناء مشروعك الخاص ، فمن المحتمل أن تتمكن من الوصول إلى أحدث ABIs. +- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. + +## SpecVersion Releases + +| الاصدار | ملاحظات الإصدار | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/pages/ar/developing/creating-a-subgraph/ql-schema.mdx b/website/pages/ar/developing/creating-a-subgraph/ql-schema.mdx new file mode 100644 index 000000000000..20b4acef827a --- /dev/null +++ b/website/pages/ar/developing/creating-a-subgraph/ql-schema.mdx @@ -0,0 +1,312 @@ +--- +title: The Graph QL Schema +--- + +## نظره عامة + +The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. 
+ +> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/querying/graphql-api/) section. + +### Defining Entities + +Before defining entities, it is important to take a step back and think about how your data is structured and linked. + +- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- It may be useful to imagine entities as "objects containing data", rather than as events or functions. +- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. +- Each type that should be an entity is required to be annotated with an `@entity` directive. +- By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity. + - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. + - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. Immutable entities are much faster to write and to query so they should be used whenever possible. + +#### مثال جيد + +The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. + +```graphql +type Gravatar @entity(immutable: true) { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String + accepted: Boolean +} +``` + +#### مثال سيئ + +The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. + +```graphql +type GravatarAccepted @entity { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String +} + +type GravatarDeclined @entity { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String +} +``` + +#### الحقول الاختيارية والمطلوبة + +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error: + +``` +'Null value resolved for non-null field 'name +``` + +Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` id's will be faster to write and query as those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`. + +For some entity types the `id` for `Bytes!` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id) ` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. 
The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`. + +### أنواع المقاييس المضمنة + +#### المقاييس المدعومة من GraphQL + +The following scalars are supported in the GraphQL API: + +| النوع | الوصف | +| --- | --- | +| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | + +### Enums + +You can also create enums within a schema. Enums have the following syntax: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field: + +More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). + +### علاقات الكيانات + +An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. + +Relationships are defined on entities just like any other field except that the type specified is that of another entity. + +#### العلاقات واحد-لواحد + +Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: + +```graphql +type Transaction @entity(immutable: true) { + id: Bytes! + transactionReceipt: TransactionReceipt +} + +type TransactionReceipt @entity(immutable: true) { + id: Bytes! + transaction: Transaction +} +``` + +#### علاقات واحد-لمتعدد + +Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +### البحث العكسي + +Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. 
For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. + +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. + +#### مثال + +We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! + tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +#### علاقات متعدد_لمتعدد + +For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. + +#### مثال + +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [User!]! +} + +type User @entity { + id: Bytes! + name: String! + organizations: [Organization!]! @derivedFrom(field: "members") +} +``` + +A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [UserOrganization!]! @derivedFrom(field: "organization") +} + +type User @entity { + id: Bytes! + name: String! + organizations: [UserOrganization!] @derivedFrom(field: "user") +} + +type UserOrganization @entity { + id: Bytes! # Set to `user.id.concat(organization.id)` + user: User! + organization: Organization! +} +``` + +This approach requires that queries descend into one additional level to retrieve, for example, the organizations for users: + +```graphql +query usersWithOrganizations { + users { + organizations { + # this is a UserOrganization entity + organization { + name + } + } + } +} +``` + +This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. + +### إضافة تعليقات إلى المخطط (schema) + +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: + +```graphql +type MyFirstEntity @entity { + # unique identifier and primary key of the entity + id: Bytes! + address: Bytes! +} +``` + +## تعريف حقول البحث عن النص الكامل + +Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing them to the indexed text data. 
+ +A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. + +To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. + +```graphql +type _Schema_ + @fulltext( + name: "bandSearch" + language: en + algorithm: rank + include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }] + ) + +type Band @entity { + id: Bytes! + name: String! + description: String! + bio: String + wallet: Address + labels: [Label!]! + discography: [Album!]! + members: [Musician!]! +} +``` + +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/querying/graphql-api#queries) for a description of the fulltext search API and more example usage. + +```graphql +query { + bandSearch(text: "breaks & electro & detroit") { + id + name + description + wallet + } +} +``` + +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. + +## اللغات المدعومة + +Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary from language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". + +Supported language dictionaries: + +| Code | القاموس | +| ------ | ---------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portuguese | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | + +### خوارزميات التصنيف + +Supported algorithms for ordering results: + +| Algorithm | Description | +| ------------- | --------------------------------------------------------------- | +| rank | استخدم جودة مطابقة استعلام النص-الكامل (0-1) لترتيب النتائج. | +| proximityRank | Similar to rank but also includes the proximity of the matches. | diff --git a/website/pages/ar/developing/creating-a-subgraph/starting-your-subgraph.mdx b/website/pages/ar/developing/creating-a-subgraph/starting-your-subgraph.mdx new file mode 100644 index 000000000000..f48efba92d85 --- /dev/null +++ b/website/pages/ar/developing/creating-a-subgraph/starting-your-subgraph.mdx @@ -0,0 +1,21 @@ +--- +title: Starting Your Subgraph +--- + +## نظره عامة + +The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. + +When you create a [subgraph](/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. + +Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. + +### Start Building + +Start the process and build a subgraph that matches your needs: + +1. 
[Install the CLI](/developing/creating-a-subgraph/install-the-cli/) - Set up your infrastructure +2. [Subgraph Manifest](/developing/creating-a-subgraph/subgraph-manifest/) - Understand a subgraph's key component +3. [The Graph Ql Schema](/developing/creating-a-subgraph/ql-schema/) - Write your schema +4. [Writing AssemblyScript Mappings](/developing/creating-a-subgraph/assemblyscript-mappings/) - Write your mappings +5. [Advanced Features](/developing/creating-a-subgraph/advanced/) - Customize your subgraph with advanced features diff --git a/website/pages/ar/developing/creating-a-subgraph/subgraph-manifest.mdx b/website/pages/ar/developing/creating-a-subgraph/subgraph-manifest.mdx new file mode 100644 index 000000000000..8c36c56b624a --- /dev/null +++ b/website/pages/ar/developing/creating-a-subgraph/subgraph-manifest.mdx @@ -0,0 +1,534 @@ +--- +title: Subgraph Manifest +--- + +## نظره عامة + +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. + +The **subgraph definition** consists of the following files: + +- `subgraph.yaml`: Contains the subgraph manifest + +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL + +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +### Subgraph Capabilities + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). + +For the example subgraph listed above, `subgraph.yaml` is: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +repository: https://github.com/graphprotocol/graph-tooling +schema: + file: ./schema.graphql +indexerHints: + prune: auto +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + abi: Gravity + startBlock: 6175244 + endBlock: 7175245 + context: + foo: + type: Bool + data: true + bar: + type: String + data: 'bar' + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + - event: UpdatedGravatar(uint256,address,string,string) + handler: handleUpdatedGravatar + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCall + filter: + kind: call + file: ./src/mapping.ts +``` + +## Subgraph Entries + +> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/developing/creating-a-subgraph/ql-schema/). + +الإدخالات الهامة لتحديث manifest هي: + +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. 
See [specVersion releases](#specversion-releases) section to see more details on features & releases. + +- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. + +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. + +- `features`: a list of all used [feature](#experimental-features) names. + +- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. + +- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. + +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. + +- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. + +- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. + +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. + +- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. + +- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. + +- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. + +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. + +A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. + +## Event Handlers + +Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. + +### Defining an Event Handler + +An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. 
+ +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: Approval(address,address,uint256) + handler: handleApproval + - event: Transfer(address,address,uint256) + handler: handleTransfer + topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic. +``` + +## معالجات الاستدعاء(Call Handlers) + +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. + +Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. + +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. + +### تعريف معالج الاستدعاء + +To define a call handler in your manifest, simply add a `callHandlers` array under the data source you would like to subscribe to. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar +``` + +The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. + +### دالة الـ Mapping + +Each call handler takes a single parameter that has a type corresponding to the name of the called function. 
In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+
+```typescript
+import { CreateGravatarCall } from '../generated/Gravity/Gravity'
+import { Transaction } from '../generated/schema'
+
+export function handleCreateGravatar(call: CreateGravatarCall): void {
+  let id = call.transaction.hash
+  let transaction = new Transaction(id)
+  transaction.displayName = call.inputs._displayName
+  transaction.imageUrl = call.inputs._imageUrl
+  transaction.save()
+}
+```
+
+The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`.
+
+## Block Handlers
+
+In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a subgraph can run a function after every block or after blocks that match a pre-defined filter.
+
+### Supported Filters
+
+#### Call Filter
+
+```yaml
+filter:
+  kind: call
+```
+
+_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._
+
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.
+
+The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: dev
+    source:
+      address: '0x731a10897d267e19b34503ad902d0a29173ba4b1'
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      entities:
+        - Gravatar
+        - Transaction
+      abis:
+        - name: Gravity
+          file: ./abis/Gravity.json
+      blockHandlers:
+        - handler: handleBlock
+        - handler: handleBlockWithCallToContract
+          filter:
+            kind: call
+```
+
+#### Polling Filter
+
+> **Requires `specVersion` >= 0.0.8**
+>
+> **Note:** Polling filters are only available on dataSources of `kind: ethereum`.
+
+```yaml
+blockHandlers:
+  - handler: handleBlock
+    filter:
+      kind: polling
+      every: 10
+```
+
+The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals.
+
+#### Once Filter
+
+> **Requires `specVersion` >= 0.0.8**
+>
+> **Note:** Once filters are only available on dataSources of `kind: ethereum`.
+
+```yaml
+blockHandlers:
+  - handler: handleOnce
+    filter:
+      kind: once
+```
+
+The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
+
+```ts
+import { ethereum, Bytes } from '@graphprotocol/graph-ts'
+import { InitialData } from '../generated/schema'
+
+export function handleOnce(block: ethereum.Block): void {
+  let data = new InitialData(Bytes.fromUTF8('initial'))
+  data.data = 'Setup data here'
+  data.save()
+}
+```
+
+### Mapping Function
+
+The mapping function will receive an `ethereum.Block` as its only argument.
Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. + +```typescript +import { ethereum } from '@graphprotocol/graph-ts' + +export function handleBlock(block: ethereum.Block): void { + let id = block.hash + let entity = new Block(id) + entity.save() +} +``` + +## أحداث الـ مجهول + +If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example: + +```yaml +eventHandlers: + - event: LogNote(bytes4,address,bytes32,bytes32,uint256,bytes) + topic0: '0x644843f351d3fba4abcd60109eaff9f54bac8fb8ccf0bab941009c21df21cf31' + handler: handleGive +``` + +An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. + +## Transaction Receipts in Event Handlers + +Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. + +To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. + +```yaml +eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + receipt: true +``` + +Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead. + +## Order of Triggering Handlers + +يتم ترتيب المشغلات (triggers) لمصدر البيانات داخل الكتلة باستخدام العملية التالية: + +1. يتم ترتيب triggers الأحداث والاستدعاءات أولا من خلال فهرس الإجراء داخل الكتلة. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. يتم تشغيل مشغلات الكتلة بعد مشغلات الحدث والاستدعاء، بالترتيب المحدد في الـ manifest. + +قواعد الترتيب هذه عرضة للتغيير. + +> **Note:** When new [dynamic data source](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. + +## قوالب مصدر البيانات + +A common pattern in EVM-compatible smart contracts is the use of registry or factory contracts, where one contract creates, manages, or references an arbitrary number of other contracts that each have their own state and events. + +The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. + +### مصدر البيانات للعقد الرئيسي + +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.org) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on-chain by the factory contract. 
+ +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: Factory + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - Directory + abis: + - name: Factory + file: ./abis/factory.json + eventHandlers: + - event: NewExchange(address,address) + handler: handleNewExchange +``` + +### قوالب مصدر البيانات للعقود التي تم إنشاؤها ديناميكيا + +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a pre-defined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + # ... other source fields for the main contract ... +templates: + - name: Exchange + kind: ethereum/contract + network: mainnet + source: + abi: Exchange + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/exchange.ts + entities: + - Exchange + abis: + - name: Exchange + file: ./abis/exchange.json + eventHandlers: + - event: TokenPurchase(address,uint256,uint256) + handler: handleTokenPurchase + - event: EthPurchase(address,uint256,uint256) + handler: handleEthPurchase + - event: AddLiquidity(address,uint256,uint256) + handler: handleAddLiquidity + - event: RemoveLiquidity(address,uint256,uint256) + handler: handleRemoveLiquidity +``` + +### إنشاء قالب مصدر البيانات + +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + // Start indexing the exchange; `event.params.exchange` is the + // address of the new exchange contract + Exchange.create(event.params.exchange) +} +``` + +> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> +> إذا كانت الكتل السابقة تحتوي على بيانات ذات صلة بمصدر البيانات الجديد ، فمن الأفضل فهرسة تلك البيانات من خلال قراءة الحالة الحالية للعقد وإنشاء كيانات تمثل تلك الحالة في وقت إنشاء مصدر البيانات الجديد. + +### سياق مصدر البيانات + +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. 
That information can be passed into the instantiated data source, like so: + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + let context = new DataSourceContext() + context.setString('tradingPair', event.params.tradingPair) + Exchange.createWithContext(event.params.exchange, context) +} +``` + +Inside a mapping of the `Exchange` template, the context can then be accessed: + +```typescript +import { dataSource } from '@graphprotocol/graph-ts' + +let context = dataSource.context() +let tradingPair = context.getString('tradingPair') +``` + +There are setters and getters like `setString` and `getString` for all value types. + +## كتل البدء + +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. + +```yaml +dataSources: + - kind: ethereum/contract + name: ExampleSource + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: ExampleContract + startBlock: 6627917 + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - User + abis: + - name: ExampleContract + file: ./abis/ExampleContract.json + eventHandlers: + - event: NewEvent(address,address) + handler: handleNewEvent +``` + +> **Note:** The contract creation block can be quickly looked up on Etherscan: +> +> 1. ابحث عن العقد بإدخال عنوانه في شريط البحث. +> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 3. قم بتحميل صفحة تفاصيل الإجراء(transaction) حيث ستجد كتلة البدء لذلك العقد. + +## Indexer Hints + +The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. + +> This feature is available from `specVersion: 1.0.0` + +### Prune + +`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: + +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. + +``` + indexerHints: + prune: auto +``` + +> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. + +History as of a given block is required for: + +- [Time travel queries](/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history +- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block +- Rewinding the subgraph back to that block + +If historical data as of the block has been pruned, the above capabilities will not be available. + +> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. 
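+
+To make this concrete, the sketch below is a hypothetical time travel query (the `gravatars` entity and block number are placeholders, not taken from the official docs). A query like this can only be answered while the subgraph still retains history for the requested block:
+
+```graphql
+{
+  gravatars(block: { number: 19000000 }) {
+    id
+    displayName
+  }
+}
+```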
+ +For subgraphs leveraging [time travel queries](/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: + +To retain a specific amount of historical data: + +``` + indexerHints: + prune: 1000 # Replace 1000 with the desired number of blocks to retain +``` + +To preserve the complete history of entity states: + +``` +indexerHints: + prune: never +``` diff --git a/website/pages/ar/developing/developer-faqs.mdx b/website/pages/ar/developing/developer-faqs.mdx index 1758e9f909b6..01aa712bb83c 100644 --- a/website/pages/ar/developing/developer-faqs.mdx +++ b/website/pages/ar/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: الأسئلة الشائعة للمطورين --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -لا يمكن حذف ال Subgraph بمجرد إنشائها. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير الاسم. تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك حتى يسهل البحث عنه والتعرف عليه من خلال ال Dapps الأخرى. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير حساب GitHub المرتبط. تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. 
Although this is not recommended, as performance will be significantly slower.
 
-## 6. Is it possible to deploy one subgraph with the same name for multiple networks?
+### 5. How do I update a subgraph on mainnet?
 
-You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph)
+You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action keeps your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
 
-## 7. How are templates different from data sources?
+### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
 
-Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address.
+You will have to redeploy the subgraph, but if the subgraph ID (IPFS hash) does not change, it will not have to sync from the beginning.
+
+### 7. How do I call a contract function or access a public state variable from my subgraph mappings?
+
+Take a look at the `Access to smart contract state` section in the [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state).
+
+### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+
+Not currently, as mappings are written in AssemblyScript.
+
+One possible alternative is to store raw data in entities and perform the logic that requires JS libraries on the client.
+
+### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
+
+Within a subgraph, events are always processed in the order in which they appear in the blocks, regardless of whether that is across multiple contracts.
+
+### 10. How are templates different from data sources?
+
+Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
 
 Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates).
 
-## 8. How do I make sure I'm using the latest version of graph-node for my local deployments?
+### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
 
-يمكنك تشغيل الأمر التالي:
+Yes. In the `graph init` command itself, you can add multiple dataSources by entering contracts one after the other.
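+
+Purely as an illustration (the contract names, addresses, entities, and handlers below are placeholders, not taken from the official docs), the end result is the same whichever route you take: `subgraph.yaml` simply ends up with one entry per contract under `dataSources`:
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: ContractA
+    network: mainnet
+    source:
+      address: '0x0000000000000000000000000000000000000001'
+      abi: ContractA
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      entities:
+        - EntityA
+      abis:
+        - name: ContractA
+          file: ./abis/ContractA.json
+      eventHandlers:
+        - event: SomethingHappened(address,uint256)
+          handler: handleSomethingHappened
+      file: ./src/contract-a.ts
+  - kind: ethereum/contract
+    name: ContractB
+    network: mainnet
+    source:
+      address: '0x0000000000000000000000000000000000000002'
+      abi: ContractB
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      entities:
+        - EntityB
+      abis:
+        - name: ContractB
+          file: ./abis/ContractB.json
+      eventHandlers:
+        - event: SomethingElseHappened(address,uint256)
+          handler: handleSomethingElseHappened
+      file: ./src/contract-b.ts
+```
+
+Each data source points to its own mapping file in this sketch, but multiple data sources can also share a single mapping file.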
-```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**ملاحظة:** سيستخدم docker / docker-compose دائما أي إصدار من graph-node تم سحبه في المرة الأولى التي قمت بتشغيلها ، لذلك من المهم القيام بذلك للتأكد من أنك محدث بأحدث إصدار من graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. How do I call a contract function or access a public state variable from my subgraph mappings? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +يمكنك تشغيل الأمر التالي: -## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? إذا تم إنشاء كيان واحد فقط أثناء الحدث ولم يكن هناك أي شيء متاح بشكل أفضل ، فسيكون hash الإجراء + فهرس السجل فريدا. يمكنك إبهامها عن طريق تحويلها إلى Bytes ثم تمريرها عبر `crypto.keccak256` ولكن هذا لن يجعلها فريدة من نوعها. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا. +Yes, you can [delete](/managing/delete-a-subgraph/) and [transfer](/managing/transfer-a-subgraph/) your subgraph. -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. 
Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? نعم. يمكنك القيام بذلك عن طريق استيراد `graph-ts` كما في المثال أدناه: @@ -78,23 +99,21 @@ Yes. On `graph init` command itself you can add multiple datasources by entering ()dataSource.address ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -ليس حاليًا ، حيث تتم كتابة ال mappings في AssemblyScript. أحد الحلول البديلة الممكنة لذلك هو تخزين البيانات الأولية في الكيانات وتنفيذ المنطق الذي يتطلب مكتبات JS على ال client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? نعم! جرب الأمر التالي ، مع استبدال "Organization / subgraphName" بالمؤسسة واسم الـ subgraph الخاص بك: @@ -102,19 +121,7 @@ Yes, you should take a look at the optional start block feature to start indexin curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -يجب عليك إعادة نشر ال الفرعيةرسم بياني ، ولكن إذا لم يتغير الفرعيةرسم بياني (ID (IPFS hash ، فلن يضطر إلى المزامنة من البداية. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -لم يتم دعم Federation بعد ، على الرغم من أننا نريد دعمه في المستقبل. و في الوقت الحالي ، الذي يمكنك القيام به هو استخدام schema stitching ، إما على client أو عبر خدمة البروكسي. - -## 23. 
Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. 
+- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/ar/developing/graph-ts/api.mdx b/website/pages/ar/developing/graph-ts/api.mdx index 15fa02b9b2ba..2de72189db87 100644 --- a/website/pages/ar/developing/graph-ts/api.mdx +++ b/website/pages/ar/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -هذه الصفحة توثق APIs المضمنة التي يمكن استخدامها عند كتابة subgraph mappings. يتوفر نوعان من APIs خارج الصندوق: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## مرجع API @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. 
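+
+As an illustrative sketch only (the import paths and `Transfer` entity are placeholders, not from the original docs): when several events in one transaction may create entities of the same type, a common pattern is to combine the transaction hash with the log index. This assumes the entity's `id` is declared as a `String` in `schema.graphql`:
+
+```typescript
+import { Transfer as TransferEvent } from '../generated/Contract/Contract' // hypothetical generated path
+import { Transfer } from '../generated/schema'
+
+export function handleTransfer(event: TransferEvent): void {
+  // Transaction hash + log index stays unique even if one transaction
+  // emits several Transfer events.
+  let id = event.transaction.hash.toHex() + '-' + event.logIndex.toString()
+  let transfer = new Transfer(id)
+  transfer.save()
+}
+```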
#### تحميل الكيانات من المخزن @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Looking up entities created withing a block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### معالجة الاستدعاءات المعادة -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. 
If you rely on this, we recommend using a Graph Node connected to a Parity client.
 
 #### تشفير/فك تشفير ABI
 
diff --git a/website/pages/ar/developing/supported-networks.mdx b/website/pages/ar/developing/supported-networks.mdx
index 96e737b0d743..c2e7677ae4fb 100644
--- a/website/pages/ar/developing/supported-networks.mdx
+++ b/website/pages/ar/developing/supported-networks.mdx
@@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename)
 \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs).
 - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
-- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs.
+- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
 - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
 - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
diff --git a/website/pages/ar/developing/unit-testing-framework.mdx b/website/pages/ar/developing/unit-testing-framework.mdx
index 67407f8349be..d123dc0f994b 100644
--- a/website/pages/ar/developing/unit-testing-framework.mdx
+++ b/website/pages/ar/developing/unit-testing-framework.mdx
@@ -2,23 +2,32 @@
 title: اختبار وحدة Framework
 ---
 
-Matchstick هو اختبار وحدة framework ، تم تطويره بواسطة [ LimeChain ](https://limechain.tech/) ، والذي يسمح لمطوري الـ subgraph من اختبار منطق الـ mapping في بيئة sandboxed ونشر الـ subgraphs الخاصة بهم بثقة!
+Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs.
+
+## Benefits of Using Matchstick
+
+- It's written in Rust and optimized for high performance.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and more.
 
 ## Getting Started
 
-### Install dependencies
+### Install Dependencies
 
-In order to use the test helper methods and run the tests, you will need to install the following dependencies:
+In order to use the test helper methods and run tests, you need to install the following dependencies:
 
 ```sh
 yarn add --dev matchstick-as
 ```
 
-❗ `graph-node` depends on PostgreSQL, so if you don't already have it, you will need to install it. We highly advise using the commands below as adding it in any other way may cause unexpected errors!
+### Install PostgreSQL
+
+`graph-node` depends on PostgreSQL, so if you don't already have it, then you will need to install it.
+
+> Note: It's highly recommended to use the commands below to avoid unexpected errors.
-#### MacOS +#### Using MacOS -Postgres installation command: +Installation command: ```sh brew install postgresql @@ -30,15 +39,15 @@ Create a symlink to the latest libpq.5.lib _You may need to create this dir firs ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib ``` -#### Linux +#### Using Linux -Postgres installation command (depends on your distro): +Installation command (depends on your distro): ```sh sudo apt install postgresql ``` -### WSL (Windows Subsystem for Linux) +### Using WSL (Windows Subsystem for Linux) You can use Matchstick on WSL both using the Docker approach and the binary approach. As WSL can be a bit tricky, here's a few tips in case you encounter issues like @@ -76,7 +85,7 @@ And finally, do not use `graph test` (which uses your global installation of gra } ``` -### الاستخدام +### Using Matchstick To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). @@ -1384,6 +1393,10 @@ This means you have used `console.log` in your code, which is not supported by A The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. +## مصادر إضافية + +For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). + ## Feedback If you have any questions, feedback, feature requests or just want to reach out, the best place would be The Graph Discord where we have a dedicated channel for Matchstick, called 🔥| unit-testing. diff --git a/website/pages/ar/glossary.mdx b/website/pages/ar/glossary.mdx index a94cb5d4be55..6e4fbeab2e85 100644 --- a/website/pages/ar/glossary.mdx +++ b/website/pages/ar/glossary.mdx @@ -10,11 +10,9 @@ title: قائمة المصطلحات - **نقطة النهاية (Endpoint)**: عنوان URL يمكن استخدامه للاستعلام عن سبغراف. نقطة الاختبار لـ سبغراف استوديو هي: `https://api.studio.thegraph.com/query///` ونقطة نهاية مستكشف الغراف هي: `https://gateway.thegraph.com/api//subgraphs/id/` تُستخدم نقطة نهاية مستكشف الغراف للاستعلام عن سبغرافات على شبكة الغراف اللامركزية. -- **غراف فرعي (Subgraph)**: واجهة برمجة تطبيقات مفتوحة تستخلص البيانات من سلسلة الكتل، ومعالجتها، وتخزينها ليكون من السهل الاستعلام عنها من خلال لغة استعلام GraphQL. يمكن للمطورين بناء ونشر الغرافات الفرعية على شبكة الغراف اللامركزية. بعد ذلك، يمكن للمفهرسين البدء في فهرسة الغرافات الفرعية لتكون متاحة للاستعلام من قبل أي كان. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **الخدمة المستضافة (Hosted Service)**: هي خدمة مؤقتة تعمل كبنية تحتية لبناء واستعلام الغرافات الفرعية، حيث تقوم شبكة الغراف اللامركزية بتحسين تكاليف الخدمة وجودة الخدمة وتجربة المطور. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. 
- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -22,19 +20,19 @@ title: قائمة المصطلحات 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. -- **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. +- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **مستهلك الغراف الفرعي**: أي تطبيق أو مستخدم يستعلم عن غراف فرعي معين. +- **Data Consumer**: Any application or user that queries a subgraph. - **مطور السوبغراف**: هو المطور الذي يقوم ببناء ونشر السوبغراف على شبكة الغراف اللامركزية. @@ -46,15 +44,15 @@ title: قائمة المصطلحات 1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. +- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. 
- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. @@ -62,11 +60,11 @@ title: قائمة المصطلحات - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. @@ -76,12 +74,8 @@ title: قائمة المصطلحات - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. - -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. 
diff --git a/website/pages/ar/index.json b/website/pages/ar/index.json index f09f7bdca0b3..ef9526840c44 100644 --- a/website/pages/ar/index.json +++ b/website/pages/ar/index.json @@ -56,10 +56,6 @@ "graphExplorer": { "title": "Graph Explorer", "description": "استكشف ال Subgraphsوتفاعل مع البروتوكول" - }, - "hostedService": { - "title": "الخدمة المستضافة (Hosted Service)", - "description": "Create and explore subgraphs on the hosted service" } } }, diff --git a/website/pages/ar/managing/delete-a-subgraph.mdx b/website/pages/ar/managing/delete-a-subgraph.mdx index 68ef0a37da75..1807741026ae 100644 --- a/website/pages/ar/managing/delete-a-subgraph.mdx +++ b/website/pages/ar/managing/delete-a-subgraph.mdx @@ -9,7 +9,9 @@ Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). ## Step-by-Step 1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). + 2. Click on the three-dots to the right of the "publish" button. + 3. Click on the option to "delete this subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) diff --git a/website/pages/ar/managing/transfer-a-subgraph.mdx b/website/pages/ar/managing/transfer-a-subgraph.mdx index dc0a1d63936e..19999c39b1e3 100644 --- a/website/pages/ar/managing/transfer-a-subgraph.mdx +++ b/website/pages/ar/managing/transfer-a-subgraph.mdx @@ -2,18 +2,16 @@ title: Transfer a Subgraph --- -## Transferring ownership of a subgraph - Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. -**Please note the following:** +## Reminders - Whoever owns the NFT controls the subgraph. - If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. - You can easily move control of a subgraph to a multi-sig. - A community member can create a subgraph on behalf of a DAO. -### View your subgraph as an NFT +## View Your Subgraph as an NFT To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: @@ -27,39 +25,18 @@ Or a wallet explorer like **Rainbow.me**: https://rainbow.me/your-wallet-addres ``` -### Step-by-Step +## Step-by-Step To transfer ownership of a subgraph, do the following: -1. Use the UI built into Subgraph Studio: +1. Use the UI built into Subgraph Studio: - ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. Choose the address that you would like to transfer the subgraph to: - ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: ![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) - -## Deprecating a subgraph - -Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. - -### Step-by-Step - -To deprecate your subgraph, do the following: - -1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). -2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. -3. Your subgraph will no longer appear in searches on Graph Explorer. 
- -**Please note the following:** - -- The owner's wallet should call the `deprecateSubgraph` function. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deprecated subgraphs will show an error message. - -> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/ar/network/benefits.mdx b/website/pages/ar/network/benefits.mdx index d4a42c2e21f9..3bdd3f1e6e25 100644 --- a/website/pages/ar/network/benefits.mdx +++ b/website/pages/ar/network/benefits.mdx @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [upgrade your subgraph to The Graph's decentralized network](/cookbook/upgrading-a-subgraph). +Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/quick-start). diff --git a/website/pages/ar/network/curating.mdx b/website/pages/ar/network/curating.mdx index 09b06f9e3476..970b6fbbc405 100644 --- a/website/pages/ar/network/curating.mdx +++ b/website/pages/ar/network/curating.mdx @@ -8,9 +8,7 @@ Curators are critical to The Graph's decentralized economy. They use their knowl Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. -Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. - -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. 
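For a concrete sense of what that signal looks like, it can also be read programmatically from the network's own subgraph. The sketch below is illustrative only — the entity and field names are assumptions and should be verified against the published schema before use:

```graphql
# Illustrative sketch: entity and field names are assumed, not taken from this document.
# Lists the deployments carrying the most curation signal, a rough proxy for demand.
{
  subgraphDeployments(first: 5, orderBy: signalledTokens, orderDirection: desc) {
    id
    signalledTokens
    stakedTokens
  }
}
```

Graph Explorer surfaces the same signal information visually, as described below.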
@@ -18,7 +16,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -30,11 +28,11 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un يمكن للمنسق الإشارة إلى إصدار معين ل subgraph ، أو يمكنه اختيار أن يتم ترحيل migrate إشاراتهم تلقائيا إلى أحدث إصدار لهذا ال subgraph. كلاهما استراتيجيات سليمة ولها إيجابيات وسلبيات. -Signaling on a specific version is especially useful when one subgraph is used by multiple dApps. One dApp might need to regularly update the subgraph with new features. Another dApp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,8 +47,8 @@ However, it is recommended that curators leave their signaled GRT in place not o ## المخاطر 1. سوق الاستعلام يعتبر حديثا في The Graph وهناك خطر من أن يكون٪ APY الخاص بك أقل مما تتوقع بسبب ديناميكيات السوق الناشئة. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. 
(Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. يمكن أن يفشل ال subgraph بسبب خطأ. ال subgraph الفاشل لا يمكنه إنشاء رسوم استعلام. نتيجة لذلك ، سيتعين عليك الانتظار حتى يصلح المطور الخطأ وينشر إصدارا جديدا. - إذا كنت مشتركا في أحدث إصدار من subgraph ، فسيتم ترحيل migrate أسهمك تلقائيا إلى هذا الإصدار الجديد. هذا سيتحمل ضريبة تنسيق بنسبة 0.5٪. - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. @@ -63,9 +61,9 @@ By signalling on a subgraph, you will earn a share of all the query fees that th ### 2. كيف يمكنني تقرير ما إذا كان ال subgraph عالي الجودة لكي أقوم بالإشارة إليه؟ -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dApp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: -- يمكن للمنسقين استخدام فهمهم للشبكة لمحاولة التنبؤ كيف لل subgraph أن يولد حجم استعلام أعلى أو أقل في المستقبل +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. What’s the cost of updating a subgraph? @@ -78,50 +76,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. 
هل يمكنني بيع أسهم التنسيق الخاصة بي؟ -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## منحنى الترابط 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![سعر السهم](/img/price-per-share.png) - -نتيجة لذلك ، يرتفع السعر بثبات ، مما يعني أنه سيكون شراء السهم أكثر تكلفة مع مرور الوقت. فيما يلي مثال لما نعنيه ، راجع منحنى الترابط أدناه: - -![منحنى الترابط Bonding curve](/img/bonding-curve.png) - -ضع في اعتبارك أن لدينا منسقان يشتركان في Subgraph واحد: - -- المنسق (أ) هو أول من أشار إلى ال Subgraphs. من خلال إضافة 120000 GRT إلى المنحنى ، سيكون من الممكن صك 2000 سهم. -- Curator B’s signal is on the subgraph later at some point. 
To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- لأن كلا من المنسقين يحتفظان بنصف إجمالي اسهم التنسيق ، فإنهم سيحصلان على قدر متساوي من عوائد المنسقين. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- سيحصل المنسق المتبقي على جميع عوائد المنسق لهذ ال subgraphs. وإذا قام بحرق حصته للحصول علىGRT ، فإنه سيحصل على 120.000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. - لازلت مشوشا؟ راجع فيديو دليل التنسيق أدناه: diff --git a/website/pages/ar/network/delegating.mdx b/website/pages/ar/network/delegating.mdx index 18571df08c11..185dc09a8cac 100644 --- a/website/pages/ar/network/delegating.mdx +++ b/website/pages/ar/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## دليل المفوض -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly. The Ethereum community provides a comprehensive resource regarding wallets through the following link ([source](https://ethereum.org/en/wallets/)). 
There are three sections in this guide: @@ -24,60 +34,82 @@ There are three sections in this guide: Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### فترة إلغاء التفويض Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
Note the 0.5% delegation fee, as well as the 28-day undelegation period.
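As a rough, back-of-the-envelope illustration of that 0.5% delegation tax (this is not an official formula; the delegated amount D and the effective annual reward rate r are assumed inputs):

```latex
% Hedged sketch: D (delegated GRT) and r (effective annual reward rate) are assumptions.
\text{tax paid} = 0.005 \times D, \qquad
\text{daily rewards} \approx \frac{r \times D}{365}, \qquad
\text{days to break even} \approx \frac{0.005 \times 365}{r} = \frac{1.825}{r}
```

For example, at an assumed effective rate of 10% per year, the tax would be earned back in roughly 18 days; at 5%, in roughly 37 days.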
### اختيار مفهرس جدير بالثقة مع عائد جيد للمفوضين -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
*The top Indexer gives Delegators 90% of the rewards. The middle one gives Delegators 20%. The bottom one gives Delegators ~83%.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### حساب العائد المتوقع للمفوضين +## Calculating a Delegator's Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine their return: -- يمكن للمفوض إلقاء نظرة على قدرة المفهرسين على استخدام التوكن المفوضة المتاحة لهم. إذا لم يقم المفهرس بتخصيص جميع التوكن المتاحة ، فإنه لا يكسب أقصى ربح يمكن أن يحققه لنفسه أو للمفوضين. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### النظر في اقتطاع رسوم الاستعلام query fee cut واقتطاع رسوم الفهرسة indexing fee cut -كما هو موضح في الأقسام أعلاه ، يجب عليك اختيار مفهرس يتسم بالشفافية والصدق بشأن اقتطاع رسوم الاستعلام Query Fee Cut واقتطاع رسوم الفهرسة Indexing Fee Cuts. يجب على المفوض أيضا إلقاء نظرة على بارامتارات Cooldown time لمعرفة مقدار الوقت المتاح لديهم. بعد الانتهاء من ذلك ، من السهل إلى حد ما حساب مقدار المكافآت التي يحصل عليها المفوضون. الصيغة هي: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. 
You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![صورة التفويض 3](/img/Delegation-Reward-Formula.png) ### النظر في أسهم تفويض المفهرس -باستخدام هذه الصيغة ، يمكننا أن نرى أنه من الممكن فعليا للمفهرس الذي يعرض 20٪ فقط للمفوضين ، أن يمنح المفوضين عائدا أفضل من المفهرس الذي يعطي 90٪ للمفوضين. +Delegators should consider the proportion of the Delegation Pool they own. -![شارك الصيغة](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![شارك الصيغة](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### النظر في سعة التفويض (delegation capacity) -Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -85,16 +117,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### مثال -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. 
-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Video guide for the network UI +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/ar/network/developing.mdx b/website/pages/ar/network/developing.mdx index 638f2b5af282..6f456be01c17 100644 --- a/website/pages/ar/network/developing.mdx +++ b/website/pages/ar/network/developing.mdx @@ -2,52 +2,29 @@ title: Developing --- -Developers are the demand side of The Graph ecosystem. Developers build subgraphs and publish them to The Graph Network. Then, they query live subgraphs with GraphQL in order to power their applications. +To start coding right away, go to [Developer Quick Start](/quick-start/). -## دورة حياة الـ Subgraph +## نظره عامة -Subgraphs deployed to the network have a defined lifecycle. +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. -### Build locally +On The Graph, you can: -As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs. +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). 2. Use GraphQL to query existing subgraphs. 
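To make "Use GraphQL to query existing subgraphs" concrete, here is a minimal sketch of the kind of query a dapp might send to a subgraph's endpoint. The entity and field names are hypothetical — they depend entirely on the schema of the subgraph being queried:

```graphql
# Hypothetical schema: a subgraph that indexes token transfers.
# Entity and field names are illustrative only; consult the actual subgraph's schema.
{
  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
    id
    from
    to
    tokenId
    timestamp
  }
}
```

The `first`, `orderBy`, and `orderDirection` arguments follow The Graph's standard GraphQL query conventions; in practice, the exact entities and fields come from the schema published in Subgraph Studio or Graph Explorer.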
-> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible. +### What is GraphQL? -### Deploy to Subgraph Studio +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. +### Developer Actions -### Publish to the Network +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. -When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information. +### What are subgraphs? -### Signal to Encourage Indexing +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. - -### Querying & Application Development - -Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT. - -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. - -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. - -### Updating Subgraphs - -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. 
The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. - -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. - -### Deprecating Subgraphs - -At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators. - -### Diverse Developer Roles - -Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others. - -### Developers and Network Economics - -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +Check out the documentation on [subgraphs](/subgraphs/) to learn specifics. diff --git a/website/pages/ar/network/explorer.mdx b/website/pages/ar/network/explorer.mdx index 4c82281ebc72..2024b24bcd1c 100644 --- a/website/pages/ar/network/explorer.mdx +++ b/website/pages/ar/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![صورة المستكشف 1](/img/Subgraphs-Explorer-Landing.png) -عند النقر على Subgraphs ، يمكنك اختبار الاستعلامات وستكون قادرا على الاستفادة من تفاصيل الشبكة لاتخاذ قرارات صائبة. سيمكنك ايضا من الإشارة إلى GRT على Subgraphs الخاص بك أو subgraphs الآخرين لجعل المفهرسين على علم بأهميته وجودته. 
هذا أمر مهم جدا وذلك لأن الإشارة ل Subgraphs تساعد المفهرسين في اختيار ذلك ال Subgraph لفهرسته ، مما يعني أنه سيظهر على الشبكة لتقديم الاستعلامات. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make Indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![صورة المستكشف 2](/img/Subgraph-Details.png) -في كل صفحة مخصصة ل subgraphs ، تظهر العديد من التفاصيل. وهذا يتضمن +On each subgraph’s dedicated page, you can do the following: - أشر/الغي الإشارة على Subgraphs - اعرض المزيد من التفاصيل مثل المخططات و ال ID الحالي وبيانات التعريف الأخرى @@ -31,26 +45,32 @@ First things first, if you just finished deploying and publishing your subgraph ## المشاركون -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in-depth review of what each tab means for you. +This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexers ![صورة المستكشف 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- اقتطاع رسوم الاستعلام Query Fee Cut - هي النسبة المئوية لخصم رسوم الاستعلام والتي يحتفظ بها المفهرس عند التقسيم مع المفوضين Delegators -- اقتطاع المكافأة الفعالة Effective Reward Cut - هو اقتطاع مكافأة الفهرسة indexing reward cut المطبقة على مجموعة التفويضات. إذا كانت سالبة ، فهذا يعني أن المفهرس يتنازل عن جزء من مكافآته. إذا كانت موجبة، فهذا يعني أن المفهرس يحتفظ ببعض مكافآته -- فترة التهدئة Cooldown المتبقية - هو الوقت المتبقي حتى يتمكن المفهرس من تغيير بارامترات التفويض. يتم إعداد فترات التهدئة من قبل المفهرسين عندما يقومون بتحديث بارامترات التفويض الخاصة بهم -- مملوكة Owned - هذه هي حصة المفهرس المودعة ، والتي قد يتم شطبها بسبب السلوك الضار أو غير الصحيح -- مفوضة Delegated - هي حصة مفوضة من قبل المفوضين والتي يمكن تخصيصها بواسطة المفهرس ، لكن لا يمكن شطبها -- مخصصة Allocated - حصة يقوم المفهرسون بتخصيصها بشكل نشط نحو subgraphs التي يقومون بفهرستها -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. 
+- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- رسوم الاستعلام Query Fees - هذا هو إجمالي الرسوم التي دفعها المستخدمون للاستعلامات التي يقدمها المفهرس طوال الوقت +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - مكافآت المفهرس Indexer Rewards - هو مجموع مكافآت المفهرس التي حصل عليها المفهرس ومفوضيهم Delegators. تدفع مكافآت المفهرس ب GRT. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking on the right-hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. To learn more about how to become an Indexer, you can take a look at the [official documentation](/network/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ To learn more about how to become an Indexer, you can take a look at the [offici ### 3. المفوضون Delegators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -يمكن للمنسقين أن يكونوا من أعضاء المجتمع أو من مستخدمي البيانات أو حتى من مطوري ال subgraph والذين يشيرون إلى ال subgraphs الخاصة بهم وذلك عن طريق إيداع توكن GRT في منحنى الترابط. وبإيداع GRT ، يقوم المنسقون بصك أسهم التنسيق في ال subgraph. 
نتيجة لذلك ، يكون المنسقون مؤهلين لكسب جزء من رسوم الاستعلام التي يُنشئها ال subgraph المشار إليها. يساعد منحنى الترابط المنسقين على تنسيق مصادر البيانات الأعلى جودة. جدول المنسق في هذا القسم سيسمح لك برؤية: +In the The Curator table listed below you can see: - التاريخ الذي بدأ فيه المنسق بالتنسق - عدد ال GRT الذي تم إيداعه @@ -68,34 +92,36 @@ Curators analyze subgraphs to identify which subgraphs are of the highest qualit ![صورة المستكشف 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. المفوضون Delegators -يلعب المفوضون دورا رئيسيا في الحفاظ على الأمن واللامركزية في شبكة The Graph. يشاركون في الشبكة عن طريق تفويض (أي ، "Staking") توكن GRT إلى مفهرس واحد أو أكثر. بدون المفوضين، من غير المحتمل أن يربح المفهرسون مكافآت ورسوم مجزية. لذلك ، يسعى المفهرسون إلى جذب المفوضين من خلال منحهم جزءا من مكافآت الفهرسة ورسوم الاستعلام التي يكسبونها. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![صورة المستكشف 7](/img/Delegation-Overview.png) -جدول المفوضين سيسمح لك برؤية المفوضين النشطين في المجتمع ، بالإضافة إلى مقاييس مثل: +In the Delegators table you can see the active Delegators in the community and important metrics: - عدد المفهرسين المفوض إليهم - التفويض الأصلي للمفوض Delegator’s original delegation - المكافآت التي جمعوها والتي لم يسحبوها من البروتوكول - المكافآت التي تم سحبها من البروتوكول - كمية ال GRT التي يمتلكونها حاليا في البروتوكول -- تاريخ آخر تفويض لهم +- The date they last delegated -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Network -في قسم الشبكة ، سترى KPIs بالإضافة إلى القدرة على التبديل بين الفترات وتحليل مقاييس الشبكة بشكل مفصل. 
ستمنحك هذه التفاصيل فكرة عن كيفية أداء الشبكة بمرور الوقت. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### نظره عامة -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - إجمالي حصة الشبكة الحالية - الحصة المقسمة بين المفهرسين ومفوضيهم @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - بارامترات البروتوكول مثل مكافأة التنسيق ومعدل التضخم والمزيد - رسوم ومكافآت الفترة الحالية -بعض التفاصيل الأساسية الجديرة بالذكر: +A few key details to note: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![صورة المستكشف 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: - الفترة النشطة هي الفترة التي يقوم فيها المفهرسون حاليا بتخصيص الحصص وتحصيل رسوم الاستعلام - فترات التسوية هي تلك الفترات التي يتم فيها تسوية قنوات الحالة state channels. هذا يعني أن المفهرسين يكونون عرضة للشطب إذا فتح المستخدمون اعتراضات ضدهم. - فترات التوزيع هي تلك الفترات التي يتم فيها تسوية قنوات الحالة للفترات ويمكن للمفهرسين المطالبة بخصم رسوم الاستعلام الخاصة بهم. - - الفترات النهائية هي تلك الفترات التي ليس بها خصوم متبقية على رسوم الاستعلام للمطالبة بها من قبل المفهرسين ، وبالتالي يتم الانتهاء منها. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![صورة المستكشف 9](/img/Epoch-Stats.png) ## ملف تعريف المستخدم الخاص بك -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. 
Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### نظرة عامة على الملف الشخصي -هذا هو المكان الذي يمكنك فيه رؤية الإجراءات الحالية التي اتخذتها. وأيضا هو المكان الذي يمكنك فيه العثور على معلومات ملفك الشخصي والوصف وموقع الويب (إذا قمت بإضافته). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![صورة المستكشف 10](/img/Profile-Overview.png) ### تبويب ال Subgraphs -إذا قمت بالنقر على تبويب Subgraphs ، فسترى ال subgraphs المنشورة الخاصة بك. لن يشمل ذلك أي subgraphs تم نشرها ب CLI لأغراض الاختبار - لن تظهر ال subgraphs إلا عند نشرها على الشبكة اللامركزية. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![صورة المستكشف 11](/img/Subgraphs-Overview.png) ### تبويب الفهرسة -إذا قمت بالنقر على تبويب الفهرسة "Indexing " ، فستجد جدولا به جميع المخصصات النشطة والتاريخية ل subgraphs ، بالإضافة إلى المخططات التي يمكنك تحليلها ورؤية أدائك السابق كمفهرس. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. هذا القسم سيتضمن أيضا تفاصيل حول صافي مكافآت المفهرس ورسوم الاستعلام الصافي الخاصة بك. سترى المقاييس التالية: @@ -158,7 +189,9 @@ Now that we’ve talked about the network stats, let’s move on to your persona ### تبويب التفويض Delegating Tab -المفوضون مهمون لشبكة the Graph. يجب أن يستخدم المفوض معرفته لاختيار مفهرسا يوفر عائدا على المكافآت. هنا يمكنك العثور على تفاصيل تفويضاتك النشطة والتاريخية ، مع مقاييس المفهرسين الذين قمت بتفويضهم. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. في النصف الأول من الصفحة ، يمكنك رؤية مخطط التفويض الخاص بك ، بالإضافة إلى مخطط المكافآت فقط. إلى اليسار ، يمكنك رؤية KPIs التي تعكس مقاييس التفويض الحالية. diff --git a/website/pages/ar/network/indexing.mdx b/website/pages/ar/network/indexing.mdx index 06055f703f94..e5cb4d8ea17d 100644 --- a/website/pages/ar/network/indexing.mdx +++ b/website/pages/ar/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap تشتمل العديد من لوحات المعلومات التي أنشأها المجتمع على قيم المكافآت المعلقة ويمكن التحقق منها بسهولة يدويًا باتباع الخطوات التالية: -1. استعلم عن [mainnet الفرعيةرسم بياني ](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) للحصول على IDs لجميع المخصصات النشطة: +1. 
Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql } query indexerAllocations @@ -477,7 +477,7 @@ graph-indexer-agent start \ --index-node-ids default \ --indexer-management-port 18000 \ --metrics-port 7040 \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ --default-allocation-amount 100 \ --register true \ --inject-dai true \ @@ -512,7 +512,7 @@ graph-indexer-service start \ --postgres-username \ --postgres-password \ --postgres-database is_staging \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ | pino-pretty ``` @@ -545,7 +545,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additonal argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Queue allocation action @@ -810,7 +810,7 @@ To set the delegation parameters using Graph Explorer interface, follow these st ### عمر التخصيص allocation -After being created by an Indexer a healthy allocation goes through four states. +After being created by an Indexer a healthy allocation goes through two states. - **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. diff --git a/website/pages/ar/network/overview.mdx b/website/pages/ar/network/overview.mdx index 08469cdc547b..c6fdf2fdc81f 100644 --- a/website/pages/ar/network/overview.mdx +++ b/website/pages/ar/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## نظره عامة +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. 
With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![اقتصاد الـ Token](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. diff --git a/website/pages/ar/new-chain-integration.mdx b/website/pages/ar/new-chain-integration.mdx index 5b4925685de2..75df818160ce 100644 --- a/website/pages/ar/new-chain-integration.mdx +++ b/website/pages/ar/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: تكامل الشبكات الجديدة +title: New Chain Integration --- -عقدة الغراف يمكنه حاليًا فهرسة البيانات من أنواع الشبكات التالية: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- إيثيريوم، من خلال استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية ( EVM JSON-RPC) و [فايرهوز إيثيريوم](https://github.com/streamingfast/firehose-ethereum) -- نير، عبر [نير فايرهوز](https://github.com/streamingfast/near-firehose-indexer) -- كوسموس، عبر [كوسموس فايرهوز](https://github.com/graphprotocol/firehose-cosmos) -- أرويف، عبر [أرويف فايرهوز](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -إذا كنت مهتمًا بأي من تلك السلاسل، فإن التكامل يتطلب ضبط واختبار عقدة الغراف. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -إذا كنت مهتمًا بنوع سلسلة مختلفة، فيجب بناء تكامل جديد مع عقدة الغراف. الطريقة الموصى بها هي تطوير فايرهوز جديد للسلسلة المعنية، ثم دمج ذلك الفايرهوز مع عقدة الغراف. المزيد من المعلومات أدناه. +## Integration Strategies -**1. استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية** +### 1. EVM JSON-RPC -إذا كانت سلسلة الكتل متوافقة مع آلة الإيثريوم الافتراضية وإذا كان العميل/العقدة يوفر واجهة برمجة التطبيقات القياسية لاستدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية، ، فإنه يمكن لعقدة الغراف فهرسة هذه السلسلة الجديدة. 
لمزيد من المعلومات، يرجى الاطلاع على [اختبار استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية] (تكامل*سلسة*جديدة #اختبار*استدعاء*إجراء*عن*بُعد*باستخدام*تمثيل*كائنات*جافا*سكريبت*لآلة*التشغيل*الافتراضية_لإثريوم). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. فايرهوز** +#### اختبار استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية (EVM JSON-RPC) -بالنسبة لسلاسل الكتل الغير المبنية على آلة الإيثيريوم الافتراضية، يجب على عقدة الغراف استيعاب بيانات سلسلة الكتل عبر استدعاء الإجراءات عن بُعد من جوجل(gRPC) وتعريفات الأنواع المعروفة. يمكن القيام بذلك باستخدام [فايرهوز](فايرهوز/)، وهي تقنية جديدة تم تطويرها بواسطة [ستريمنج فاست](https://www.streamingfast.io/)، وتوفر حلاً لفهرسة سلسلة الكتل والقابلة للتوسع باستخدام نهج قائم على الملفات والتدفق المباشر. يمكنكم التواصل مع [فريق ستريمنج فاست](mailto:integrations@streamingfast.io/) إذا كنتم بحاجة إلى مساعدة في تطوير فايرهوز. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## الفرق بين استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية والفايرهوز +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`، ضمن طلب دفعة استدعاء الإجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت +- `trace_filter` *(optionally required for Graph Node to support call handlers)* -في حين أن الاثنين مناسبان للغرافات الفرعية، فإن فايرهوز مطلوب دائمًا للمطورين الراغبين في البناء باستخدام [سبستريمز](سبستريمز/)، مثل بناء [غرافات فرعية مدعومة بسبستريمز](cookbook/substreams-powered-subgraphs/). بالإضافة إلى ذلك، يسمح فايرهوز بتحسين سرعات الفهرسة مقارنةً باستدعاء الإجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت. +### 2. Firehose Integration -قد يفكر المطورون الجدد لسلاسل آلة الإيثيريوم الافتراضة أيضًا في الاستفادة من نهج فايرهوز بناءً على فوائد سبستريمز وقدرات الفهرسة المتوازية الضخمة. إن دعم كليهما يسمح للمطورين بالاختيار بين بناء سبستريمز أو غرافات فرعية للسلسلة الجديدة. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> ملاحظة: أن التكامل القائم على فايرهوز لسلاسل الآلة الإيثيريوم الافتراضية يتطلب من المفهرسين تشغيل عقدة نداء الإجراء عن بعد للأرشيف الخاص بالشبكة لفهرسة الغرافات الفرعية بشكل صحيح. يرجع ذلك إلى عدم قدرة فايرهوز على توفير حالة العقد الذكية التي يمكن الوصول إليها عادةً بطريقةنداء الإجراء عن بعد `eth_call`. (من الجدير بالذكر أن استخدام eth_calls [ليست ممارسة جيدة للمطورين](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). 
New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## اختبار استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية (EVM JSON-RPC) +#### Specific Firehose Instrumentation for EVM (`geth`) chains -لكي تتمكن عقدة الغراف من جمع البيانات من سلسلة EVM، يجب أن يوفر العقد RPC طرق EVM JSON RPC التالية: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(للكتل التاريخية، باستخدام EIP-1898 - يتطلب نقطة أرشيف): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`، ضمن طلب دفعة استدعاء الإجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت -- _`trace_filter`_ _(مطلوبة اختياريًا لعقدة الغراف لدعم معالجات الاستدعاء)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### تكوين عقدة الغراف +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**ابدأ بإعداد بيئتك المحلية** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## تكوين عقدة الغراف + +Configuring Graph Node is as easy as preparing your local environment. 
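In practice, the key change in step 2 below is the `ethereum` entry in graph-node's `docker-compose.yml`. A minimal sketch of what that entry might look like, assuming a local RPC endpoint (the network name and URL are placeholders, not values from this guide):

```yaml
# docker-compose.yml (graph-node service) — illustrative values only
environment:
  ethereum: 'NEW_NETWORK_NAME:http://host.docker.internal:8545'
```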
Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [استنسخ عقدة الغراف](https://github.com/graphprotocol/graph-node) -2. قم بتعديل [هذا السطر](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) لتضمين اسم الشبكة الجديدة والعنوان المتوافق مع استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية - > لا تقم بتعديل اسم المتغير البيئي نفسه. يجب أن يظل اسمه `ethereum` حتى لو كان اسم الشبكة مختلفًا. -3. قم بتشغيل عقدة نظام الملفات بين الكواكب (IPFS) أو استخدم العقدة التي يستخدمها الغراف: https://api.thegraph.com/ipfs/ -**اختبر التكامل من خلال نشر الغراف الفرعي محليًا.** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. قم بإنشاء مثالًا بسيطًا للغراف الفرعي. بعض الخيارات المتاحة هي كالتالي: - 1. يُعتبر [غرافيتار](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) المُعد مسبقًا مثالًا جيدًا لعقد ذكي وغراف فرعي كنقطة انطلاقة جيدة - 2. قم بإعداد غراف فرعي محلي من أي عقد ذكي موجود أو بيئة تطوير صلبة [باستخدام هاردهات وملحق الغراف](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. أنشئ غرافك الفرعي في عقدة الغراف: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. انشر غرافك الفرعي إلى عقدة الغراف: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -إذا لم تكن هناك أخطاء يجب أن يقوم عقدة الغراف بمزامنة الغراف الفرعي المنشور. قم بمنحه بعض الوقت لإتمام عملية المزامنة، ثم قم بإرسال بعض استعلامات لغة الإستعلام للغراف (GraphQL) إلى نقطة نهاية واجهة برمجة التطبيقات الموجودة في السجلات. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## تكامل سلسلة جديدة تدعم فايرهوز +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. قم بإنشاء مثالًا بسيطًا للغراف الفرعي. بعض الخيارات المتاحة هي كالتالي: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +إذا لم تكن هناك أخطاء يجب أن يقوم عقدة الغراف بمزامنة الغراف الفرعي المنشور. قم بمنحه بعض الوقت لإتمام عملية المزامنة، ثم قم بإرسال بعض استعلامات لغة الإستعلام للغراف (GraphQL) إلى نقطة نهاية واجهة برمجة التطبيقات الموجودة في السجلات. -يتيح فايرهوز أيضًا إمكانية دمج سلسلة جديدة. يُعتبر هذا حاليًا الخيار الأفضل للسلاسل الغير معتمدة على آلة الإيثريوم الافتراضية ويعتبر متطلبًا لدعم سبستريمز. 
الوثائق الإضافية تركز على كيفية عمل فايرهوز وإضافة دعم فايرهوز لسلسلة جديدة ودمجها مع عقدة الغراف. يُوصى بالوثائق التالية للمطورين الذين يقومون بذلك: +## Substreams-powered Subgraphs -1. [وثائق عامة عن فايرهوز] (firehose/) -2. [إضافة دعم فايرهوز لسلسلة جديدة](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [دمج غراف نود مع سلسلة جديدة عبر فايرهوز](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/ar/querying/graphql-api.mdx b/website/pages/ar/querying/graphql-api.mdx index 2d2efb6008c2..9699c231d93b 100644 --- a/website/pages/ar/querying/graphql-api.mdx +++ b/website/pages/ar/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## الاستعلامات +## What is GraphQL? -في مخطط الـ subgraph الخاص بك ، يمكنك تعريف أنواع وتسمى `Entities`. لكل نوع من `Entity` ، سيتم إنشاء حقل `entity` و `entities` في المستوى الأعلى من نوع `Query`. لاحظ أنه لا يلزم تضمين `query` أعلى استعلام `graphql` عند استخدام The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Examples @@ -21,7 +29,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Query all `Token` entities: @@ -36,7 +44,10 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### مثال @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. 
-In the following example, we sort the tokens by the name of their owner:
+The following example shows tokens sorted by the name of their owner:

```graphql
{
@@ -70,11 +81,12 @@ In the following example, we sort the tokens by the name of their owner:

### Pagination

-When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time.
-
-Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
+When querying a collection, it's best to:

-Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example.
+- Use the `first` parameter to paginate from the beginning of the collection.
+  - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
+- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
+- Avoid using very large `skip` values in queries, since they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute, as shown in the `first` and `id_ge` example below.

#### Example using `first`

@@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect

#### Example using `first` and `id_ge`

-If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query:
+If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:

```graphql
query manyTokens($lastID: String) {
@@ -117,11 +129,12 @@ query manyTokens($lastID: String) {
 }
```

-The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
+The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.

### Filtering

-You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter.
+- You can use the `where` parameter in your queries to filter for different properties.
+- You can filter on multiple values within the `where` parameter.

#### Example using `where`

@@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:

#### Example for block filtering

-You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block.
+You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
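A minimal sketch of such a filter, placed inside `where` (the `tokens` entity and block number are illustrative):

```graphql
{
  tokens(where: { _change_block: { number_gte: 14711000 } }) {
    id
    owner
  }
}
```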
This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).

@@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release

##### `AND` Operator

-In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
+The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.

```graphql
{
@@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee

##### `OR` Operator

-In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.

```graphql
{
@@ -278,9 +291,9 @@ _change_block(number_gte: Int)

You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.

-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.

-Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
+> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation cannot always tell that a given block hash is not on the main chain at all, or whether the result of a query by block hash, for a block that cannot yet be considered final, might be influenced by a block reorganization running concurrently with the query. These limitations do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
#### مثال

@@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021

## المخطط

-The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
+The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, is defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest.
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).

-> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
+> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.

### Entities

diff --git a/website/pages/ar/querying/managing-api-keys.mdx b/website/pages/ar/querying/managing-api-keys.mdx
index 7e94abf2a4a8..f23017bf015e 100644
--- a/website/pages/ar/querying/managing-api-keys.mdx
+++ b/website/pages/ar/querying/managing-api-keys.mdx
@@ -2,23 +2,33 @@
 title: Managing your API keys
 ---

-Regardless of whether you’re a dapp developer or a subgraph developer, you’ll need to manage your API keys. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application.
+## نظرة عامة

-The "API keys" table lists out existing API keys, which will give you the ability to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, as well as total query numbers. You can click the "three dots" menu to edit a given API key:
+API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+
+### Create and Manage API Keys
+
+Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs.
+
+The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+
+You can click the "three dots" menu to the right of a given API key to:

- Rename API key
- Regenerate API key
- Delete API key
- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).

+### API Key Details
+
You can click on an individual API key to view the Details page:

-1. 
The **Overview** section will allow you to: +1. Under the **Overview** section, you can: - تعديل اسم المفتاح الخاص بك - إعادة إنشاء مفاتيح API - عرض الاستخدام الحالي لمفتاح API مع الإحصائيات: - عدد الاستعلامات - كمية GRT التي تم صرفها -2. Under **Security**, you’ll be able to opt into security settings depending on the level of control you’d like to have over your API keys. In this section, you can: +2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - عرض وإدارة أسماء النطاقات المصرح لها باستخدام مفتاح API الخاص بك - تعيين الـ subgraphs التي يمكن الاستعلام عنها باستخدام مفتاح API الخاص بك diff --git a/website/pages/ar/querying/querying-best-practices.mdx b/website/pages/ar/querying/querying-best-practices.mdx index 1068236fc184..48e8dec63a28 100644 --- a/website/pages/ar/querying/querying-best-practices.mdx +++ b/website/pages/ar/querying/querying-best-practices.mdx @@ -2,17 +2,15 @@ title: أفضل الممارسات للاستعلام --- -يوفر The Graph طريقة لامركزية للاستعلام عن البيانات من سلاسل الكتل. +The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -يتم عرض بيانات شبكة Graph من خلال GraphQL API ، مما يسهل الاستعلام عن البيانات باستخدام لغة GraphQL. - -ستوجهك هذه الصفحة خلال القواعد الأساسية للغة GraphQL وأفضل ممارسات استعلامات GraphQL. +Learn the essential GraphQL language rules and best practices to optimize your subgraph. --- ## الاستعلام عن واجهة برمجة تطبيقات GraphQL -### بنية استعلام GraphQL +### The Anatomy of a GraphQL Query على عكس REST API ، فإن GraphQL API مبنية على مخطط يحدد الاستعلامات التي يمكن تنفيذها. @@ -52,7 +50,7 @@ query [operationName]([variableName]: [variableType]) { } ``` -على الرغم من أن قائمة القواعد التي يجب اتباعها طويلة، إلا أن هناك قواعد أساسية يجب أخذها في الاعتبار عند كتابة استعلامات GraphQL: +## Rules for Writing GraphQL Queries - يجب استخدام كل `queryName` مرة واحدة فقط لكل عملية. - يجب استخدام كل `field` مرة واحدة فقط في التحديد (لا يمكننا الاستعلام عن `id` مرتين ضمن `token`) @@ -61,9 +59,9 @@ query [operationName]([variableName]: [variableType]) { - في قائمة المتغيرات المعطاة ، يجب أن يكون كل واحد منها فريدًا. - يجب استخدام جميع المتغيرات المحددة. -إذا لم تتبع القواعد المذكورة أعلاه ، فستحدث خطأ من Graph API. +> Note: Failing to follow these rules will result in an error from The Graph API. -For a complete list of rules with code examples, please look at our [GraphQL Validations guide](/release-notes/graphql-validations-migration-guide/). +For a complete list of rules with code examples, check out [GraphQL Validations guide](/release-notes/graphql-validations-migration-guide/). ### إرسال استعلام إلى GraphQL API @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). 
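For instance, a bare-bones `fetch` call might look like this (a sketch only — the endpoint URL and the `tokens` query are placeholders to replace with your own subgraph's values):

```javascript
// Minimal GraphQL-over-HTTP request using the standard fetch API
const endpoint = 'https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_NAME>/<VERSION>'

const query = /* GraphQL */ `
  {
    tokens(first: 5) {
      id
      owner
    }
  }
`

fetch(endpoint, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query }),
})
  .then((response) => response.json())
  .then(({ data, errors }) => {
    if (errors) console.error(errors)
    else console.log(data)
  })
```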
-However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as mentioned in ["Querying from an Application"](/querying/querying-from-an-application), it's recommended to use `graph-client`, which supports the following unique features: - التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد - [تتبع الكتلة التلقائي](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - ** يمكن تخزين المتغيرات مؤقتًا ** على مستوى الخادم - ** يمكن تحليل طلبات البحث بشكل ثابت بواسطة الأدوات ** (المزيد حول هذا الموضوع في الأقسام التالية) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- عند استخدام الأدوات التي تنشئ أنواع TypeScript بناءً على الاستعلامات (_المزيد عن ذلك في القسم الأخير_)، و `newDelate` و `oldDelegate` سينتج عنهما واجهتين مضمنتان متمايزتين. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. 
A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### ما يجب فعله وما لا يجب فعله في GraphQL Fragment -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- عند تكرار الحقول من نفس النوع في استعلام ، قم بتجميعها في Fragment -- عند تكرار الحقول متشابهه ولكن غير متطابقة ، قم بإنشاء fragments متعددة ، على سبيل المثال: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## الأدوات الأساسية +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- اقتراحات الإكمال التلقائي -- validation against schema -- snippets -- انتقل إلى تعريف ال fragment وأنواع الإدخال +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. @@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- syntax highlighting -- اقتراحات الإكمال التلقائي -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. 
+For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/ar/querying/querying-from-an-application.mdx b/website/pages/ar/querying/querying-from-an-application.mdx index e415babb3be5..552464f28803 100644 --- a/website/pages/ar/querying/querying-from-an-application.mdx +++ b/website/pages/ar/querying/querying-from-an-application.mdx @@ -2,42 +2,46 @@ title: الاستعلام من التطبيق --- -Once a subgraph is deployed to Subgraph Studio or to Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this: +Learn how to query The Graph from your application. -**Subgraph Studio (اختبار endpoint)** +## Getting GraphQL Endpoint -```sh -استعلامات (HTTP) +Once a subgraph is deployed to [Subgraph Studio](https://thegraph.com/studio/) or [Graph Explorer](https://thegraph.com/explorer), you will be given the endpoint for your GraphQL API that should look something like this: + +### Subgraph Studio + +``` https://api.studio.thegraph.com/query/// ``` -**Graph Explorer** +### Graph Explorer -```sh -استعلامات (HTTP) +``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -باستخدام GraphQL endpoint ، يمكنك استخدام العديد من مكتبات GraphQL Client للاستعلام عن ال Subgraph وملء تطبيقك بالبيانات المفهرسة بواسطة ال Subgraph. - -في ما يلي بعض عملاء GraphQL الأكثر شيوعا في النظام البيئي وكيفية استخدامها: +With your GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with data indexed by the subgraph. -## GraphQL clients +## Using Popular GraphQL Clients -### Graph client +### Graph Client -The Graph is providing it own GraphQL client, `graph-client` that supports unique features such as: +The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: - التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد - [تتبع الكتلة التلقائي](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [ترقيم الصفحات التلقائي](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - نتيجة مكتوبة بالكامل -Also integrated with popular GraphQL clients such as Apollo and URQL and compatible with all environments (React, Angular, Node.js, React Native), using `graph-client` will give you the best experience for interacting with The Graph. +> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native. As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph. + +### Fetch Data with Graph Client + +Let's look at how to fetch data from a subgraph with `graph-client`: -Let's look at how to fetch data from a subgraph with `graphql-client`. 
+#### Step 1 -To get started, make sure to install The Graph Client CLI in your project: +Install The Graph Client CLI in your project: ```sh yarn add -D @graphprotocol/client-cli @@ -45,6 +49,8 @@ yarn add -D @graphprotocol/client-cli npm install --save-dev @graphprotocol/client-cli ``` +#### Step 2 + Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file): ```graphql @@ -72,7 +78,9 @@ query ExampleQuery { } ``` -Then, create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +#### Step 3 + +Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: ```yaml # .graphclientrc.yml @@ -90,13 +98,17 @@ documents: - ./src/example-query.graphql ``` -Running the following The Graph Client CLI command will generate typed and ready to use JavaScript code: +#### Step 4 + +Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: ```sh graphclient build ``` -Finally, update your `.ts` file to use the generated typed GraphQL documents: +#### Step 5 + +Update your `.ts` file to use the generated typed GraphQL documents: ```tsx import React, { useEffect } from 'react' @@ -134,33 +146,35 @@ function App() { export default App ``` -**⚠️ Important notice** +> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. -`graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you will [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). +### Apollo Client -However, if you choose to go with another client, keep in mind that **you won't be able to get to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android. -### عميل Apollo +Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: -[Apollo client](https://www.apollographql.com/docs/) is the ubiquitous GraphQL client on the front-end ecosystem. +- Advanced error handling +- Pagination +- Data prefetching +- Optimistic UI +- Local state management -Available for React, Angular, Vue, Ember, iOS, and Android, Apollo Client, although the heaviest client, brings many features to build advanced UI on top of GraphQL: +### Fetch Data with Apollo Client -- advanced error handling -- pagination -- data prefetching -- optimistic UI -- local state management +Let's look at how to fetch data from a subgraph with Apollo client: -Let's look at how to fetch data from a subgraph with Apollo client in a web project. 
+#### Step 1 -First, install `@apollo/client` and `graphql`: +Install `@apollo/client` and `graphql`: ```sh npm install @apollo/client graphql ``` -بعد ذلك يمكنك الاستعلام عن API بالكود التالي: +#### Step 2 + +Query the API with the following code: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -193,6 +207,8 @@ client }) ``` +#### Step 3 + To use variables, you can pass in a `variables` argument to the query: ```javascript @@ -224,24 +240,30 @@ client }) ``` -### URQL +### URQL Overview -Another option is [URQL](https://formidable.com/open-source/urql/) which is available within Node.js, React/Preact, Vue, and Svelte environments, with more advanced features: +[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: - Flexible cache system - Extensible design (easing adding new capabilities on top of it) - Lightweight bundle (~5x lighter than Apollo Client) - Support for file uploads and offline mode -Let's look at how to fetch data from a subgraph with URQL in a web project. +### Fetch data with URQL + +Let's look at how to fetch data from a subgraph with URQL: -First, install `urql` and `graphql`: +#### Step 1 + +Install `urql` and `graphql`: ```sh npm install urql graphql ``` -بعد ذلك يمكنك الاستعلام عن API بالكود التالي: +#### Step 2 + +Query the API with the following code: ```javascript import { createClient } from 'urql' diff --git a/website/pages/ar/querying/querying-the-graph.mdx b/website/pages/ar/querying/querying-the-graph.mdx index aeef84cbe5a9..1255e0e88a51 100644 --- a/website/pages/ar/querying/querying-the-graph.mdx +++ b/website/pages/ar/querying/querying-the-graph.mdx @@ -2,7 +2,7 @@ title: Querying The Graph --- -When a subgraph is published to The Graph Network, you can visit its subgraph details page on [Graph Explorer](https://thegraph.com/explorer) and use the "Playground" tab to explore the deployed GraphQL API for the subgraph, issuing queries and viewing the schema. +When a subgraph is published to The Graph Network, you can visit its subgraph details page on [Graph Explorer](https://thegraph.com/explorer) and use the "query" tab to explore the deployed GraphQL API for the subgraph, issuing queries and viewing the schema. > Please see the [Query API](/querying/graphql-api) for a complete reference on how to query the subgraph's entities. You can learn about GraphQL querying best practices [here](/querying/querying-best-practices) @@ -10,7 +10,9 @@ When a subgraph is published to The Graph Network, you can visit its subgraph de Each subgraph published to The Graph Network has a unique query URL in Graph Explorer for making direct queries that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. -![نافذة الاستعلام عن Subgraph](/img/query-subgraph-pane.png) +![Query Subgraph Button](/img/query-button-screenshot.png) + +![Query Subgraph URL](/img/query-url-screenshot.png) Learn more about querying from an application [here](/querying/querying-from-an-application). diff --git a/website/pages/ar/quick-start.mdx b/website/pages/ar/quick-start.mdx index f510c6ba381d..8a52663bfc8c 100644 --- a/website/pages/ar/quick-start.mdx +++ b/website/pages/ar/quick-start.mdx @@ -2,24 +2,26 @@ title: بداية سريعة --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. 
+Learn how to easily build, publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -تأكد من أن الغراف الفرعي الخاص بك سيقوم بفهرسة البيانات من [الشبكة المدعومة](/developing/supported-networks). - -تم كتابة هذا الدليل على افتراض أن لديك: +## المتطلبات الأساسية - محفظة عملات رقمية -- عنوان عقد ذكي على الشبكة التي تختارها +- A smart contract address on a [supported network](/developing/supported-networks/) +- [Node.js](https://nodejs.org/) installed +- A package manager of your choice (`npm`, `yarn` or `pnpm`) + +## How to Build a Subgraph -## 1. Create a subgraph on Subgraph Studio +### 1. Create a subgraph in Subgraph Studio -انتقل إلى [سبغراف استوديو] (https://thegraph.com/studio) وقم بربط محفظتك. +Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -## 2. Install the Graph CLI +Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +### 2. Install the Graph CLI On your local machine, run one of the following commands: @@ -35,133 +37,148 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 3. Initialize your subgraph + +> يمكنك العثور على الأوامر المتعلقة بالغراف الفرعي الخاص بك على صفحة الغراف الفرعي في (سبغراف استوديو) (https://thegraph.com/studio). + +The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. -Initialize your subgraph from an existing contract by running the initialize command: +The following command initializes your subgraph from an existing contract: ```sh -graph init --studio +graph init ``` -> يمكنك العثور على الأوامر المتعلقة بالغراف الفرعي الخاص بك على صفحة الغراف الفرعي في (سبغراف استوديو) (https://thegraph.com/studio). +If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. -عند تهيئة غرافك الفرعي، ستطلب منك أداة "واجهة سطر الأوامر" (CLI) المعلومات التالية: +When you initialize your subgraph, the CLI will ask you for the following information: -- البروتوكول: اختر البروتوكول الذي سيفهرس من فهرسة البيانات -- المعرّف الخاص بالغراف الفرعي: قم بإنشاء اسم لغرافك الغرعي. يُعتبر "سبغراف سلوج" معرّف فريد يستخدم لتمييز غرافك الفرعي. -- الدليل الذي سيتم إنشاء الغراف الفرعي فيه: اختر الدليل المحلي الذي ترغب في إنشاء الغراف الفرعي فيه -- شبكة الايثيروم(اختيارية): قد تحتاج إلى تحديد الشبكة المتوافقة مع آلة إيثيريوم الإفتراضية التي سيقوم غرافك الفرعي بفهرسة البيانات منها -- Contract address: Locate the smart contract address you’d like to query data from -- واجهة التطبيق الثنائية: إذا لم يتم ملء واجهة التطبيق الثنائية تلقائياً، فستحتاج إلى إدخاله يدوياً كملف JSON -- كتلة البداية: يُقترح إدخال كتلة البداية لتوفير الوقت أثناء قيام غرافك الفرعي بفهرسة بيانات سلاسل الكتل. يمكنك تحديد كتلة البداية من خلال العثور على الكتلة التي تم نشر عقدك فيها. 
-- Contract Name: input the name of your contract -- فهرسة أحداث العقد ككيانات: يُقترح ضبط هذا الخيار على "صحيح" (True) حيث سيتم إضافة تعيينات تلقائية إلى غرافك الفرعي لكل حدث يتم إصداره -- إضافة عقد آخر (اختياري): يمكنك إضافة عقد آخر +- **Protocol**: Choose the protocol your subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- **Directory**: Choose a directory to create your subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Contract address**: Locate the smart contract address you’d like to query data from. +- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Contract Name**: Input the name of your contract. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Add another contract** (optional): You can add another contract. يرجى مراجعة الصورة المرفقة كمثال عن ما يمكن توقعه عند تهيئة غرافك الفرعي: -أمر الغراف الفرعي(/img/subgraph-init-example.png) +![Subgraph command](/img/CLI-Example.png) + +### 4. Edit your subgraph + +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. + +When making changes to the subgraph, you will mainly work with three files: -## 4. Write your subgraph +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -الأوامر السابقة تنشئ هيكل غرافك الفرعي والذي يمكنك استخدامه كنقطة بداية لبناء غرافك الفرعي. عند إجراء تغييرات على الغراف الفرعي، ستعمل بشكل رئيسي مع ثلاثة ملفات: +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +### 5. Deploy your subgraph -للمزيد من المعلومات حول كيفية كتابة غرافك الفرعي، يُرجى الاطلاع على إنشاء غراف فرعي(/developing/creating-a-subgraph). +Remember, deploying is not the same as publishing. -## 5. Deploy to Subgraph Studio +When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. + +When you publish a subgraph, you are publishing it onchain to the decentralized network. عند كتابة غرافك الفرعي، قم بتنفيذ الأوامر التالية: +```` ```sh -$ graph codegen -$ graph build +graph codegen && graph build ``` +```` + +Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. + +![Deploy key](/img/subgraph-studio-deploy-key.jpg) + +```` +```sh + +graph auth + +graph deploy +``` +```` + +The CLI will ask for a version label. 
It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -- قم بالمصادقة وأنشر غرافك الفرعي. يمكن العثور على مفتاح النشر على صفحة الغراف الفرعي في سبغراف استيديو. +### 6. Review your subgraph +If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: + +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: + + ![Subgraph logs](/img/subgraph-logs-image.png) + +### 7. Publish your subgraph to The Graph Network + +Publishing a subgraph to the decentralized network is an onchain action that makes your subgraph available for [Curators](/network/curating/) to curate it and [Indexers](/network/indexing/) to index it. + +#### Publishing with Subgraph Studio + +To publish your subgraph, click the Publish button in the dashboard. + +![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) + +Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +Open the `graph-cli`. + +Use the following commands: + +```` ```sh -$ graph auth --studio -$ graph deploy --studio +graph codegen && graph build ``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. اختبر غرافك الفرعي - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -ستخبرك السجلات إذا كانت هناك أي أخطاء في غرافك الفرعي. ستبدو سجلات الغراف الفرعي الفعّال على النحو التالي: - -![Subgraph logs](/img/subgraph-logs-image.png) - -إذا فشل غرافك الفرعي، فيمكنك الاستعلام عن صحة الغراف الفرعي باستخدام ملعب غرافي GraphiQL Playground. لاحظ أنه يمكنك الاستفادة من الاستعلام أدناه وإدخال معرف النشر الخاص بك لغرافك الفرعي. في هذه الحالة، `Qm...` هو معرف النشر (يمكن العثور عليه في صفحة الغراف الفرعي ضمن **التفاصيل**). سيخبرك الاستعلام أدناه عند فشل الغراف الفرعي حتى تتمكن من إصلاحه بناءً عليه: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +Then, + +```sh +graph publish ``` +```` + +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). -## 7. Publish your subgraph to The Graph’s Decentralized Network +#### Adding signal to your subgraph -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. 
+ - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. -حدد الشبكة التي ترغب في نشر غرافك الفرعي عليها. يُوصى بنشر الغرافات الفرعية على شبكة أربترم ون للاستفادة من [سرعة معاملات أسرع وتكاليف غاز أقل](/arbitrum/arbitrum-faq). +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +To learn more about curation, read [Curating](/network/curating/). -لتوفير تكاليف الغاز، يمكنك تنسيق غرافك الفرعي في نفس العملية التي نشرته عن طريق اختيار هذا الزر عند نشر غرافك الفرعي على شبكة الغراف اللامركزية: +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: -![Subgraph publish](/img/publish-and-signal-tx.png) +![Subgraph publish](/img/studio-publish-modal.png) -## 8. Query your subgraph +### 8. Query your subgraph -الآن يمكنك الاستعلام عن غرافك الفرعي عن طريق إرسال استعلامات لغة GraphQL إلى عنوان استعلامات غرافك الفرعي URL والذي يمكنك أن تجده عن طريق النقر على زر الاستعلام. +You now have access to 100,000 free queries per month with your subgraph on The Graph Network! -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). 
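Before wiring the Query URL into an application, you can sanity-check the endpoint with a schema-agnostic query. The sketch below relies on the `_meta` field that Graph Node exposes for every subgraph, so it works regardless of the entities you defined:

```graphql
# Latest indexed block and overall indexing health for this subgraph
{
  _meta {
    block {
      number
    }
    hasIndexingErrors
  }
}
```

If `hasIndexingErrors` comes back `true`, check the logs in Subgraph Studio as described in step 6.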
diff --git a/website/pages/ar/sps/introduction.mdx b/website/pages/ar/sps/introduction.mdx index 3e50521589af..12e3f81c6d53 100644 --- a/website/pages/ar/sps/introduction.mdx +++ b/website/pages/ar/sps/introduction.mdx @@ -14,6 +14,6 @@ It is really a matter of where you put your logic, in the subgraph or the Substr Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: -- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/solana) -- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/evm) -- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/injective) +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/ar/sps/triggers-example.mdx b/website/pages/ar/sps/triggers-example.mdx index d8d61566295e..943e9898ed14 100644 --- a/website/pages/ar/sps/triggers-example.mdx +++ b/website/pages/ar/sps/triggers-example.mdx @@ -2,7 +2,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' --- -## Prerequisites +## المتطلبات الأساسية Before starting, make sure to: @@ -11,6 +11,8 @@ Before starting, make sure to: ## Step 1: Initialize Your Project + + 1. Open your Dev Container and run the following command to initialize your project: ```bash @@ -18,6 +20,7 @@ Before starting, make sure to: ``` 2. Select the "minimal" project option. + 3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: ```yaml @@ -87,17 +90,7 @@ type MyTransfer @entity { This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. -## Step 4: Generate Protobuf Files - -To generate Protobuf objects in AssemblyScript, run the following command: - -```bash -npm run protogen -``` - -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. - -## Step 5: Handle Substreams Data in `mappings.ts` +## Step 4: Handle Substreams Data in `mappings.ts` With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: @@ -120,7 +113,7 @@ export function handleTriggers(bytes: Uint8Array): void { entity.designation = event.transfer!.accounts!.destination if (event.transfer!.accounts!.signer!.single != null) { - entity.signers = [event.transfer!.accounts!.signer!.single.signer] + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] } else if (event.transfer!.accounts!.signer!.multisig != null) { entity.signers = event.transfer!.accounts!.signer!.multisig!.signers } @@ -130,6 +123,16 @@ export function handleTriggers(bytes: Uint8Array): void { } ``` +## Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. 
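Once the subgraph is deployed and synced, you can confirm that transfers are being indexed. Assuming Graph Node's default query generation, the `MyTransfer` entity from the schema above is exposed through a plural `myTransfers` field, so a query along these lines should return data:

```graphql
# A few of the indexed Orca transfers, with all fields from the MyTransfer entity
{
  myTransfers(first: 5) {
    id
    amount
    source
    designation
    signers
  }
}
```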
+ ## Conclusion You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case. diff --git a/website/pages/ar/subgraphs.mdx b/website/pages/ar/subgraphs.mdx index 27b452211477..d770a7cb9c57 100644 --- a/website/pages/ar/subgraphs.mdx +++ b/website/pages/ar/subgraphs.mdx @@ -24,7 +24,13 @@ The **subgraph definition** consists of the following files: - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each of subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). + +## دورة حياة الـ Subgraph + +Here is a general overview of a subgraph’s lifecycle: + +![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development @@ -34,8 +40,47 @@ To learn more about each of subgraph component, check out [creating a subgraph]( 4. [Publish a subgraph](/publishing/publishing-a-subgraph/) 5. [Signal on a subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) -## Subgraph Lifecycle +### Build locally -Here is a general overview of a subgraph’s lifecycle: +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. -![Subgraph Lifecycle](/img/subgraph-lifecycle.png) +### Deploy to Subgraph Studio + +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: + +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. + +### Publish to the Network + +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. + +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. + +### Add Curation Signal for Indexing + +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. + +#### What is signal? + +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. 
+ +### Querying & Application Development + +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). + +Learn more about [querying subgraphs](/querying/querying-the-graph/). + +### Updating Subgraphs + +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. + +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. + +### Deleting & Transferring Subgraphs + +If you no longer need a published subgraph, you can [delete](/managing/delete-a-subgraph/) or [transfer](/managing/transfer-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/ar/substreams.mdx b/website/pages/ar/substreams.mdx index cc4cb7918c45..7b73a6aedb7d 100644 --- a/website/pages/ar/substreams.mdx +++ b/website/pages/ar/substreams.mdx @@ -4,25 +4,27 @@ title: سبستريمز ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps 1. **You write a Rust program, which defines the transformations that you want to apply to the blockchain data.** For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash). -```rust -fn get_my_block(blk: Block) -> Result { - let header = blk.header.as_ref().unwrap(); + ```rust + fn get_my_block(blk: Block) -> Result { + let header = blk.header.as_ref().unwrap(); - Ok(MyBlock { - number: blk.number, - hash: Hex::encode(&blk.hash), - parent_hash: Hex::encode(&header.parent_hash), - }) -} -``` + Ok(MyBlock { + number: blk.number, + hash: Hex::encode(&blk.hash), + parent_hash: Hex::encode(&header.parent_hash), + }) + } + ``` 2. 
**You wrap up your Rust program into a WASM module just by running a single CLI command.** @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/ar/sunrise.mdx b/website/pages/ar/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/ar/sunrise.mdx +++ b/website/pages/ar/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. 
PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). 
- -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. 
Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? - -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. 
The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. 
-### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? - -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. 
If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. 
-- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/ar/tap.mdx b/website/pages/ar/tap.mdx index 0a41faab9c11..0cef5f4209fa 100644 --- a/website/pages/ar/tap.mdx +++ b/website/pages/ar/tap.mdx @@ -4,7 +4,7 @@ title: TAP Migration Guide Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. -## Overview +## نظره عامة [TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: @@ -45,15 +45,15 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed ### Contracts -| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| Contract | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | | ------------------- | -------------------------------------------- | -------------------------------------------- | -| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | -| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | -| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | +| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | +| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | ### Gateway -| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| Component | Edge and Node Mainnet (Aribtrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | | ---------- | --------------------------------------------- | --------------------------------------------- | | Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | @@ -190,4 +190,4 @@ You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs ### Launchpad -Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found 
[here](https://github.com/graphops/launchpad-charts/tree/main/charts/graph-network-indexer) diff --git a/website/pages/ar/tokenomics.mdx b/website/pages/ar/tokenomics.mdx index ef7aee0871ec..049df2b7f086 100644 --- a/website/pages/ar/tokenomics.mdx +++ b/website/pages/ar/tokenomics.mdx @@ -1,25 +1,25 @@ --- title: اقتصاد التوكن (Tokenomics) لشبكة الغراف -description: تعتمد شبكة The Graphعلى نظام إقتصادي قوي للتشجيع على المشاركة. إليك كيف يعمل GRT ، التوكن الأساسي للعمل في The Graph. +description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. --- -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +## نظره عامة -- عنوان توكن GRT على Arbitrum One: [ 0x9623063377AD1B27544C965cCd7342f7EA7e88C7 ](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. -The Graph is a decentralized protocol that enables easy access to blockchain data. +## Specifics -إنه مشابه لنموذج B2B2C ، إلا أنه مدعوم بشبكة لا مركزية من المشاركين. يعمل المشاركون في الشبكة معًا لتوفير البيانات للمستخدمين النهائيين مقابل مكافآت GRT. GRT هو أداة العمل الذي ينسق بين موفري البيانات والمستهلكين. تعمل GRT كأداة مساعدة للتنسيق بين موفري البيانات والمستهلكين داخل الشبكة وتحفيز المشاركين في البروتوكول على تنظيم البيانات بشكل فعال. +The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. -By using The Graph, users can easily access data from the blockchain, paying only for the specific information they need. The Graph is used by many [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem today. +The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/billing/). -يقوم الغراف بفهرسة بيانات blockchain بنفس طريقة فهرسة Google للويب. في الواقع ، ربما كنت تستخدم الغراف بالفعل دون أن تدرك ذلك. إذا كنت قد شاهدت الواجهة الأمامية لـ dapp الذي يحصل على بياناته من subgraph! ، فقد استعلمت عن البيانات من ال subgraph! +- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -The Graph plays a crucial role in making blockchain data more accessible and enabling a marketplace for its exchange. +- عنوان توكن GRT على Arbitrum One: [ 0x9623063377AD1B27544C965cCd7342f7EA7e88C7 ](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## أدوار المشاركين على الشبكة +## The Roles of Network Participants -هناك أربعة أدوار أساسية في الشبكة: +There are four primary network participants: 1. المفوضين (Delegators) - يقومو بتفويض GRT للمفهرسين & تأمين الشبكة @@ -29,82 +29,74 @@ The Graph plays a crucial role in making blockchain data more accessible and ena 4. 
المفهرسون (Indexers) - العمود الفقري لبيانات blockchain -الصيادون والمحكمون (Fishermen و Arbitrators) يلعبون أيضاً دورا حاسما في نجاح الشبكة من خلال مساهماتهم الأخرى، ويدعمون عمل الأدوار الأساسية للمشاركين الآخرين. لمزيد من المعلومات حول أدوار الشبكة، يُرجى [قراءة هذه المقالة](https://thegraph.com/blog/the-graph-grt-token-economics/). +Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). -![رسم بياني لاقتصاد التوكن (Tokenomics diagram)](/img/updated-tokenomics-image.png) +![Tokenomics diagram](/img/updated-tokenomics-image.png) -## المفوِّضين (يربحون GRT بشكل سلبي) +## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1500 GRT in rewards annually. +For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. -هناك ضريبة تفويض بنسبة 0.5٪ يتم حرقها عندما يقوم المفوض بتفويض GRT على الشبكة. إذا قرر أحد المفوضين سحب GRT المفوضة ، فيجب عليه الانتظار لفترة فك الارتباط والتي تستغرق 28 حقبة. كل حقبة تتكون من 6646 كتلة ، مما يعني أن 28 حقبة تستغرق حوالي 26 يومًا. +There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. -إذا كنت تقرأ هذا ، فيمكنك أن تصبح مفوضًا الآن من خلال التوجه إلى [ صفحة المشاركين في الشبكة ](https://thegraph.com/explorer/participants/indexers) ، و تفويض GRT إلى مفهرس من اختيارك. +If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. -## المنسِّقون (كسب GRT) +## Curators (Earn GRT) -Curators identify high-quality subgraphs, and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. 
While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. -As of April 11th, 2024, subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. -## المطورين +## Developers -يقوم المطورون ببناء الsubgraphs والاستعلام عنها لاسترداد بيانات blockchain. نظرًا لأن الsubgraph مفتوحة المصدر ، يمكن للمطورين الاستعلام عن الsubgraph الموجودة لتحميل بيانات blockchain في dapps الخاصة بهم. يدفع المطورون ثمن الاستعلامات التي يقومون بها ب GRT ، والتي يتم توزيعها على المشاركين في الشبكة. +Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. ### إنشاء subgraph -يمكن للمطورين [ إنشاء subgraph ](/developing/creating-a-subgraph/) لفهرسة البيانات على blockchain. الsubgraph هي تعليمات للمفهرسين حول البيانات التي يجب تقديمها للمستهلكين. +Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -بمجرد أن يقوم المطورون ببناء الsubgraph واختباره ، يمكنهم [ نشر الsubgraph ](/publishing/publishing-a-subgraph/) على الشبكة اللامركزية لـ The Graph. +Once developers have built and tested their subgraph, they can [publish their subgraph](/publishing/publishing-a-subgraph/) on The Graph's decentralized network. ### الاستعلام عن subgraph موجود Once a subgraph is [published](/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. -يتم [ الاستعلام عن الSubgraph باستخدام GraphQL ](/querying/querying-the-graph/) ، ويتم دفع رسوم الاستعلام باستخدام GRT في [ Subgraph Studio ](https://thegraph.com/studio/). يتم توزيع رسوم الاستعلام على المشاركين في الشبكة بناءً على مساهماتهم في البروتوكول. - -يتم حرق 1٪ من رسوم الاستعلام المدفوعة للشبكة. - -## المفهرسون (كسب GRT) - -المفهرسين هم العمود الفقري لThe Graph. يعملون على أجهزة وبرامج مستقلة تشغل الشبكة اللامركزية لـ The Graph. يقدم المفهرسين البيانات للمستهلكين بناءً على تعليمات من الsubgraphs. - -يمكن للمفهرسين ربح مكافآت GRT بطريقتين: +Subgraphs are [queried using GraphQL](/querying/querying-the-graph/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. -1. Query fees: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1% of the query fees paid to the network are burned. -2. مكافآت الفهرسة: يتم توزيع 3% من الإصدار السنوي على المفهرسين بناءً على عدد الsubgraphs التي يقومون بفهرستها. 
هذه المكافآت تشجع المفهرسين على فهرسة الsubgraphs ، أحيانًا قبل البدء بفرض الرسوم على الاستعلامات ،يقوم المفهرسون بتجميع وتقديم أدلة فهرسة (POIs) للتحقق من دقة فهرسة البيانات التي قاموا بفهرستها. +## Indexers (Earn GRT) -كل subgraph يخصص له جزء من إجمالي إصدار التوكن للشبكة بناءً على مقدار إشارة تنسيق الsubgraph. هذا المقدار يتم منحه للمفهرسين وفقا لحصصهم المخصصة على الـ subgraph. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. -من أجل تشغيل عقدة الفهرسة ، يجب أن يشارك المفهرسون برهن 100،000 GRT أو أكثر مع الشبكة. يتم تشجيع المفهرسين برهن GRT تتناسب مع عدد الاستعلامات التي يقدمونها. +Indexers can earn GRT rewards in two ways: -يمكن للمفهرسين زيادة تخصيصاتهم من GRT على الsubgraph عن طريق قبول تفويض GRT من المفوضين ، ويمكنهم قبول ما يصل إلى 16 ضعف من رهانهم أو"حصتهم" الأولي. إذا أصبح المفهرس "مفوضا بشكل زائد" (أي أكثر من 16 ضعف من حصته الأولية) ، فلن يتمكن من استخدام GRT الإضافي من المفوضين حتى يزيد المفهرس حصته في الشبكة. +1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -يمكن أن يختلف مقدار المكافآت التي يتلقاها المفهرس بناءً على الحصة الأولية والتفويض المقبول وجودة الخدمة والعديد من العوامل الأخرى. الرسم البياني التالي هي بيانات عامة لمفهرس نشط في شبكة TheGraph اللامركزية. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -### The Indexer stake & reward of allnodes-com.eth +Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. -![حصص الفهرسة والمكافآت](/img/indexing-stake-and-income.png) +In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -هذه البيانات من فبراير 2021 إلى سبتمبر 2022. +Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. -> يرجى ملاحظة أن هذا سيتحسن عند اكتمال عملية الترحيل إلى [Arbitrum](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551)، مما يجعل تكاليف الغاز أقل بشكل كبير ويجعلها أقل عبئا للمشاركة في الشبكة. +The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. -## معروض التوكن: الحرق & الإصدار +## Token Supply: Burning & Issuance -يبلغ المعروض الأولي للتوكن 10 مليار GRT ، مع هدف إصدار جديد بنسبة 3٪ سنويًا لمكافأة المفهرسين الذين يخصصون حصصهم على الsubgraphs. 
هذا يعني أن إجمالي المعروض من توكن GRT سيزيد بنسبة 3٪ كل عام حيث يتم إصدار توكن جديد للمفهرسين تكريما لمساهمتهم في الشبكة. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. -![مجموع عملة القراف المحروقة](/img/total-burned-grt.jpeg) +![Total burned GRT](/img/total-burned-grt.jpeg) -بالإضافة إلى أنشطة الحرق الدورية المذكورة، يتوفر في توكن GRT آلية القطع لمعاقبة المفهرسين المسؤولين عن سلوك ضار أو غير مسؤول. وفي حالة إعطائهم عقوبة القطع، يتم حرق 50% من مكافآتهم الخاصة بالفهرسة في فترة زمنية محددة (بينما يذهب النصف الآخر للصياد"fisherman")، ويتم خفض حصتهم الشخصية بنسبة 2.5%، ويتم حرق نصف هذا المبلغ. ويساعد ذلك على ضمان أن المفهرسين لديهم حافز قوي للعمل بما يخدم مصالح الشبكة والمساهمة في أمنها واستقرارها. +In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. -## تحسين البروتوكول +## Improving the Protocol -تتطور شبكة Graph باستمرار ويتم إجراء تحسينات على التصميم الاقتصادي للبروتوكول باستمرار لتوفير أفضل تجربة لجميع المشاركين في الشبكة. يشرف مجلس The Graph على تغييرات البروتوكول ويتم تشجيع أعضاء المجتمع على المشاركة. شارك في تحسينات البروتوكول في [ منتدى The Graph ](https://forum.thegraph.com/). +The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). diff --git a/website/pages/cs/about.mdx b/website/pages/cs/about.mdx index 5e95320d27d4..e29bcf5fe650 100644 --- a/website/pages/cs/about.mdx +++ b/website/pages/cs/about.mdx @@ -2,46 +2,66 @@ title: O Grafu --- -Tato stránka vysvětlí, co je The Graph a jak můžete začít. - ## Co je Graf? -Grafu je decentralizovaný protokol pro indexování a dotazování dat blockchainu. Graf umožňuje dotazovat se na data, která je obtížné dotazovat přímo. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. 
+ +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. + +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Projekty se složitými chytrými smlouvami, jako je [Uniswap](https://uniswap.org/), a iniciativy NFT, jako je [Bored Ape Yacht Club](https://boredapeyachtclub.com/), ukládají data do blockchainu Etherea, takže je opravdu obtížné číst cokoli jiného než základní data přímo z blockchainu. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. 
-To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -Můžete si také vytvořit vlastní server, zpracovávat na něm transakce, ukládat je do databáze a nad tím vším vytvořit koncový bod API pro dotazování na data. Tato možnost je však [náročná na zdroje](/network/benefits/), vyžaduje údržbu, představuje jediný bod selhání a porušuje důležité bezpečnostní vlastnosti potřebné pro decentralizaci. +### How The Graph Functions -**Indexování blockchainových dat je opravdu, ale opravdu těžké.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## Jak funguje graf +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -Grafu se učí, co a jak indexovat data Ethereu, m na základě popisů podgrafů, známých jako manifest podgrafu. Popis podgrafu definuje chytré smlouvy, které jsou pro podgraf zajímavé, události v těchto smlouvách, kterým je třeba věnovat pozornost, a způsob mapování dat událostí na data, která Grafu uloží do své databáze. +- When creating a subgraph, you need to write a subgraph manifest. -Jakmile napíšete `manifest podgrafu`, použijete Graph CLI k uložení definice do IPFS a řeknete indexeru, aby začal indexovat data pro tento podgraf. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -Tento diagram podrobněji popisuje tok dat po nasazení podgraf manifestu, který se zabývá transakcemi Ethereum: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![Grafu vysvětlující, jak Graf používá Uzel grafu k doručování dotazů konzumentům dat](/img/graph-dataflow.png) Průběh se řídí těmito kroky: -1. Dapp přidává data do Ethereum prostřednictvím transakce na chytrém kontraktu. -2. 
Chytrý smlouva vysílá při zpracování transakce jednu nebo více událostí. -3. Uzel grafu neustále vyhledává nové bloky Ethereum a data pro váš podgraf, která mohou obsahovat. -4. Uzel grafu v těchto blocích vyhledá události Etherea pro váš podgraf a spustí vámi zadané mapovací obsluhy. Mapování je modul WASM, který vytváří nebo aktualizuje datové entity, které Uzel grafu ukládá v reakci na události Ethereum. -5. Aplikace dapp se dotazuje grafického uzlu na data indexovaná z blockchainu pomocí [GraphQL endpoint](https://graphql.org/learn/). Uzel Grafu zase překládá dotazy GraphQL na dotazy pro své podkladové datové úložiště, aby tato data načetl, přičemž využívá indexovací schopnosti úložiště. Dapp tato data zobrazuje v bohatém UI pro koncové uživatele, kteří je používají k vydávání nových transakcí na platformě Ethereum. Cyklus se opakuje. +1. Dapp přidává data do Ethereum prostřednictvím transakce na chytrém kontraktu. +2. Chytrý smlouva vysílá při zpracování transakce jednu nebo více událostí. +3. Uzel grafu neustále vyhledává nové bloky Ethereum a data pro váš podgraf, která mohou obsahovat. +4. Uzel grafu v těchto blocích vyhledá události Etherea pro váš podgraf a spustí vámi zadané mapovací obsluhy. Mapování je modul WASM, který vytváří nebo aktualizuje datové entity, které Uzel grafu ukládá v reakci na události Ethereum. +5. Aplikace dapp se dotazuje grafického uzlu na data indexovaná z blockchainu pomocí [GraphQL endpoint](https://graphql.org/learn/). Uzel Grafu zase překládá dotazy GraphQL na dotazy pro své podkladové datové úložiště, aby tato data načetl, přičemž využívá indexovací schopnosti úložiště. Dapp tato data zobrazuje v bohatém UI pro koncové uživatele, kteří je používají k vydávání nových transakcí na platformě Ethereum. Cyklus se opakuje. ## Další kroky -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Než začnete psát vlastní podgraf, můžete se podívat do [Graph Explorer](https://thegraph.com/explorer) a prozkoumat některé z již nasazených podgrafů. Stránka každého podgrafu obsahuje hřiště, které vám umožní dotazovat se na data daného podgrafu pomocí GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/cs/arbitrum/arbitrum-faq.mdx b/website/pages/cs/arbitrum/arbitrum-faq.mdx index 4f9d8f545b6a..486e371b527d 100644 --- a/website/pages/cs/arbitrum/arbitrum-faq.mdx +++ b/website/pages/cs/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Pokud chcete přejít na často ptal dotazy k účtování Arbitrum, klikněte na [here](#billing-on-arbitrum-faqs). -## Proč The Graph implementuje řešení L2? +## Why did The Graph implement an L2 Solution? -Škálováním The Graph na L2, sítě účastníci mohou očekávat: +By scaling The Graph on L2, network participants can now benefit from: - Až 26x úspora na poplatcích za plyn @@ -14,7 +14,7 @@ Pokud chcete přejít na často ptal dotazy k účtování Arbitrum, klikněte n - Zabezpečení zděděné po Ethereum -Škálování chytrých smluv protokolu na L2 umožňuje účastníkům sítě interakci častěji při snížených nákladech na plyn. 
Například, indexéry by mohly otevírat a zavírat alokace pro indexování většího počtu podgrafů s větší frekvencí, vývojáři mohli snadněji zavádět a aktualizovat podgrafy s větší lehkostí, Delegátor by mohli častěji delegovat GRT, Kurátoři by mohli přidávat nebo odebírat signály do většího počtu podgrafů–akcí dříve považovány za příliš nákladné dělat často kvůli nákladům. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Komunita Graf se v loňském roce rozhodla pokračovat v Arbitrum po výsledku diskuze [GIP-0031] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -41,27 +41,21 @@ Pro využití výhod používání a Graf na L2 použijte rozevírací přepína ## Jako vývojář podgrafů, Spotřebitel dat, indexer, kurátor, nebo delegátor, co mám nyní udělat? -Není třeba přijímat žádná okamžitá opatření, nicméně vyzýváme účastníky sítě, aby začali přecházet na Arbitrum a využívali výhod L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Týmy hlavních vývojářů pracují na vytvoření nástrojů pro přenos L2, které usnadní přesun delegování, kurátorství a podgrafů do služby Arbitrum. Účastníci sítě mohou očekávat, že nástroje pro přenos L2 budou k dispozici do léta 2023. +All indexing rewards are now entirely on Arbitrum. -Od 10. dubna 2023 se na Arbitrum razí 5 % všech indexačních odměn. S rostoucí účastí v síti a se souhlasem Rady, odměny za indexování se postupně přesunou z Etherea na Arbitrum a nakonec zcela na Arbitrum. - -## Co mám udělat, pokud se chci zapojit do sítě L2? - -Pomozte prosím [otestovat síť](https://testnet.thegraph.com/explorer) na L2 a nahlaste své zkušenosti na [Discord](https://discord.gg/graphprotocol). - -## Existují nějaká rizika spojená s rozšiřováním sítě na L2? +## Were there any risks associated with scaling the network to L2? Všechny chytré smlouvy byly důkladně [auditovány](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Vše bylo důkladně otestováno, a je připraven pohotovostní plán, který zajistí bezpečný a bezproblémový přechod. Podrobnosti naleznete [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Budou stávající subgrafy na Ethereum fungovat i nadále? +## Are existing subgraphs on Ethereum working? -Ano, smlouvy Graf síť budou fungovat paralelně na platformě Ethereum i Arbitrum, dokud se později plně nepřesunou na Arbitrum. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Bude mít GRT na Arbitrum nasazen nový chytrý kontrakt? +## Does GRT have a new smart contract deployed on Arbitrum? Ano, GRT má další [smart contract na Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). 
Mainnetový [kontrakt GRT](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) na Ethereum však zůstane v provozu. diff --git a/website/pages/cs/billing.mdx b/website/pages/cs/billing.mdx index 8eb2b1e0bd2e..c308319f4286 100644 --- a/website/pages/cs/billing.mdx +++ b/website/pages/cs/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Klikněte na tlačítko "Připojit peněženku" v pravém horním rohu stránky. Budete přesměrováni na stránku pro výběr peněženky. Vyberte svou peněženku a klikněte na "Připojit". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). 
- - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ Více informací o získání ETH na Binance se dozvíte [zde](https://www.binan ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/cs/chain-integration-overview.mdx b/website/pages/cs/chain-integration-overview.mdx index a54cf6823bf1..673d312e81e1 100644 --- a/website/pages/cs/chain-integration-overview.mdx +++ b/website/pages/cs/chain-integration-overview.mdx @@ -6,12 +6,12 @@ Pro blockchainové týmy, které usilují o [integraci s protokolem The Graph](h ## Fáze 1. Technická integrace -- Týmy pracují na integraci Uzel grafu a Firehose pro řetězce nezaložené na EvM. [Zde je návod](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Týmy zahájí proces integrace protokolu vytvořením vlákna na fóru [zde](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (podkategorie Nové zdroje dat v části Správa a GIP). Použití výchozí šablony Fóra je povinné. ## Fáze 2. Ověřování integrace -- Týmy spolupracují s hlavními vývojáři, Graph Foundation a provozovateli GUIs a síťových bran, jako je [Subgraph Studio](https://thegraph.com/studio/), aby byl zajištěn hladký proces integrace. To zahrnuje poskytnutí nezbytné backendové infrastruktury, jako jsou koncové body JSON RPC nebo Firehose integračního řetězce. 
Týmy, které se chtějí vyhnout vlastnímu hostování takové infrastruktury, mohou k tomu využít komunitu provozovatelů uzlů (Indexers) Grafu, s čímž jim může pomoci nadace. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graf Indexers testují integraci na testovací síti Grafu. - Vývojáři jádra a indexátoři sledují stabilitu, výkon a determinismus dat. @@ -38,7 +38,7 @@ Tento proces souvisí se službou Datová služba podgrafů a vztahuje se pouze To by mělo vliv pouze na podporu protokolu pro indexování odměn na podgrafech s podsílou. Novou implementaci Firehose by bylo třeba testovat v testnetu podle metodiky popsané pro fázi 2 v tomto GIP. Podobně, za předpokladu, že implementace bude výkonná a spolehlivá, by bylo nutné provést PR na [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) (`Substreams data sources` Subgraph Feature) a také nový GIP pro podporu protokolu pro indexování odměn. PR a GIP může vytvořit kdokoli; nadace by pomohla se schválením Radou. -### 3. Kolik času zabere tento proces? +### 3. How much time will the process of reaching full protocol support take? Očekává se, že doba do uvedení do mainnetu bude trvat několik týdnů a bude se lišit v závislosti na době vývoje integrace, na tom, zda bude zapotřebí další výzkum, testování a opravy chyb, a jako vždy na načasování procesu řízení, který vyžaduje zpětnou vazbu od komunity. @@ -46,4 +46,4 @@ Podpora protokolu pro odměny za indexování závisí na šířce pásma zúča ### 4. Jak budou řešeny priority? -Podobně jako u bodu č. 3 bude záležet na celkové připravenosti a šířce pásma zúčastněných stran. Například nový řetězec se zcela novou implementací Firehose může trvat déle než integrace, které již byly testovány v praxi nebo jsou v procesu správy dále. To platí zejména pro řetězce, které byly dříve podporovány na [hostované službě](https://thegraph.com/hosted-service) nebo které se spoléhají na již otestované stacky. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/cs/cookbook/arweave.mdx b/website/pages/cs/cookbook/arweave.mdx index adb99aaf5251..40c88f99ccca 100644 --- a/website/pages/cs/cookbook/arweave.mdx +++ b/website/pages/cs/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Definice schématu popisuje strukturu výsledné databáze podgrafu a vztahy mez Obslužné programy pro zpracování událostí jsou napsány v jazyce [AssemblyScript](https://www.assemblyscript.org/). -Indexování Arweave zavádí do [AssemblyScript API](/developing/graph-ts/api/) datové typy specifické pro Arweave. +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). 
```tsx class Block { @@ -155,7 +155,7 @@ Zápis mapování podgrafu Arweave je velmi podobný psaní mapování podgrafu Jakmile je podgraf vytvořen na ovládacím panelu Podgraf Studio, můžete jej nasadit pomocí příkazu `graph deploy` CLI. ```bash -graph deploy --studio --access-token +graph deploy --access-token ``` ## Dotazování podgrafu Arweave diff --git a/website/pages/cs/cookbook/avoid-eth-calls.mdx b/website/pages/cs/cookbook/avoid-eth-calls.mdx index 0b7af55002b4..2a469ec0844e 100644 --- a/website/pages/cs/cookbook/avoid-eth-calls.mdx +++ b/website/pages/cs/cookbook/avoid-eth-calls.mdx @@ -99,4 +99,18 @@ Poznámka: Deklarované eth_calls lze provádět pouze v podgraf s verzí specVe ## Závěr -Výkon indexování můžeme výrazně zlepšit minimalizací nebo odstraněním `eth_calls` v našich podgraf. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/cs/cookbook/cosmos.mdx b/website/pages/cs/cookbook/cosmos.mdx index 5e67e4c3ff92..b604b8dabda0 100644 --- a/website/pages/cs/cookbook/cosmos.mdx +++ b/website/pages/cs/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Definice schématu popisuje strukturu výsledné databáze podgrafů a vztahy me Obslužné programy pro zpracování událostí jsou napsány v jazyce [AssemblyScript](https://www.assemblyscript.org/). -Indexování Cosmos zavádí datové typy specifické pro Cosmos do [AssemblyScript API](/developing/graph-ts/api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -203,7 +203,7 @@ Po vytvoření podgrafu můžete podgraf nasadit pomocí příkazu `graph deploy Navštivte Studio podgrafů a vytvořte nový podgraf. ```bash -graph deploy --studio subgraph-name +graph deploy subgraph-name ``` **Místní uzel grafu (na základě výchozí config):** @@ -226,32 +226,32 @@ Koncový bod GraphQL pro podgrafy Cosmos je určen definicí schématu se stáva #### Co je Cosmos Hub? -The [Cosmos Hub blockchain](https://hub.cosmos.network/) is the first blockchain in the [Cosmos](https://cosmos.network/) ecosystem. You can visit the [official documentation](https://docs.cosmos.network/) for more information. +[Cosmos Hub blockchain](https://hub.cosmos.network/) je první blockchain v ekosystému [Cosmos](https://cosmos.network/). Další informace naleznete v [oficiální dokumentaci](https://docs.cosmos.network/). #### Sítě -Cosmos Hub mainnet is `cosmoshub-4`. Cosmos Hub current testnet is `theta-testnet-001`.
Other Cosmos Hub networks, i.e. `cosmoshub-3`, are halted, therefore no data is provided for them. +Hlavní síť Cosmos Hub je `cosmoshub-4`. Současná testovací síť Cosmos Hub je `theta-testnet-001`.
Ostatní sítě Cosmos Hub, jako je `cosmoshub-3`, jsou zastavené, a proto pro ně nejsou poskytována žádná data. ### Osmosis -> Osmosis support in Graph Node and on Subgraph Studio is in beta: please contact the graph team with any questions about building Osmosis subgraphs! +> Podpora Osmosis v uzel grafua v Podgraph Studio je ve fázi beta: s případnými dotazy ohledně vytváření podgrafů Osmosis se obraťte na grafový tým! #### Co je osmosis? -[Osmosis](https://osmosis.zone/) is a decentralized, cross-chain automated market maker (AMM) protocol built on top of the Cosmos SDK. It allows users to create custom liquidity pools and trade IBC-enabled tokens. You can visit the [official documentation](https://docs.osmosis.zone/) for more information. +[Osmosis](https://osmosis.zone/) je decentralizovaný, cross-chain automatizovaný tvůrce trhu (AMM) protokol postavený na Cosmos SDK. Umožňuje uživatelům vytvářet vlastní fondy likvidity a obchodovat s tokeny povolenými IBC. Pro více informací můžete navštívit [oficiální dokumentaci](https://docs.osmosis.zone/). #### Sítě -Osmosis mainnet is `osmosis-1`. Osmosis current testnet is `osmo-test-4`. +Osmosis mainnet je `osmosis-1`. Aktuální testnet Osmosis je `osmo-test-4`. ## Příklady podgrafů -Here are some example subgraphs for reference: +Zde je několik příkladů podgrafů: -[Block Filtering Example](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-block-filtering) +[Příklad blokového filtrování](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-block-filtering) -[Validator Rewards Example](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-validator-rewards) +[Příklad odměn validátoru](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-validator-rewards) -[Validator Delegations Example](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-validator-delegations) +[Příklad delegování validátoru](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-validator-delegations) -[Osmosis Token Swaps Example](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-osmosis-token-swaps) +[Příklad výměny tokenů Osmosis](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-osmosis-token-swaps) diff --git a/website/pages/cs/cookbook/derivedfrom.mdx b/website/pages/cs/cookbook/derivedfrom.mdx index e95a2cbe3069..b5662250e154 100644 --- a/website/pages/cs/cookbook/derivedfrom.mdx +++ b/website/pages/cs/cookbook/derivedfrom.mdx @@ -1,28 +1,28 @@ --- -title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom +title: Podgraf Doporučený postup 2 - Zlepšení indexování a rychlosti dotazů pomocí @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Pole ve vašem schématu mohou skutečně zpomalit výkon podgrafu, pokud jejich počet přesáhne tisíce položek. Pokud je to možné, měla by se při použití polí používat direktiva `@derivedFrom`, která zabraňuje vzniku velkých polí, zjednodušuje obslužné programy a snižuje velikost jednotlivých entit, čímž výrazně zvyšuje rychlost indexování a výkon dotazů. 
-## How to Use the `@derivedFrom` Directive +## Jak používat směrnici `@derivedFrom` -You just need to add a `@derivedFrom` directive after your array in your schema. Like this: +Stačí ve schématu za pole přidat směrnici `@derivedFrom`. Takto: ```graphql comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` vytváří efektivní vztahy typu one-to-many, které umožňují dynamické přiřazení entity k více souvisejícím entitám na základě pole v související entitě. Tento přístup odstraňuje nutnost ukládat duplicitní data na obou stranách vztahu, čímž se podgraf stává efektivnějším. -### Example Use Case for `@derivedFrom` +### Příklad případu použití pro `@derivedFrom` -An example of a dynamically growing array is a blogging platform where a “Post” can have many “Comments”. +Příkladem dynamicky rostoucího pole je blogovací platforma, kde "příspěvek“ může mít mnoho "komentářů“. -Let’s start with our two entities, `Post` and `Comment` +Začněme s našimi dvěma entitami, `příspěvek` a `Komentář` -Without optimization, you could implement it like this with an array: +Bez optimalizace byste to mohli implementovat takto pomocí pole: ```graphql type Post @entity { @@ -38,9 +38,9 @@ type Comment @entity { } ``` -Arrays like these will effectively store extra Comments data on the Post side of the relationship. +Taková pole budou efektivně ukládat další data komentářů na straně Post vztahu. -Here’s what an optimized version looks like using `@derivedFrom`: +Zde vidíte, jak vypadá optimalizovaná verze s použitím `@derivedFrom`: ```graphql type Post @entity { @@ -57,18 +57,32 @@ type Comment @entity { } ``` -Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. +Pouhým přidáním direktivy `@derivedFrom` bude toto schéma ukládat "Komentáře“ pouze na straně "Komentáře“ vztahu a nikoli na straně "Příspěvek“ vztahu. Pole se ukládají napříč jednotlivými řádky, což umožňuje jejich výrazné rozšíření. To může vést k obzvláště velkým velikostem, pokud je jejich růst neomezený. -This will not only make our subgraph more efficient, but it will also unlock three features: +Tím se nejen zefektivní náš podgraf, ale také se odemknou tři funkce: -1. We can query the `Post` and see all of its comments. +1. Můžeme se zeptat na `Post` a zobrazit všechny jeho komentáře. -2. We can do a reverse lookup and query any `Comment` and see which post it comes from. +2. Můžeme provést zpětné vyhledávání a dotazovat se na jakýkoli `Komentář` a zjistit, ze kterého příspěvku pochází. -3. We can use [Derived Field Loaders](/developing/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. Pomocí [Derived Field Loaders](/developing/graph-ts/api/#looking-up-derived-entities) můžeme odemknout možnost přímého přístupu a manipulace s daty z virtuálních vztahů v našich mapováních podgrafů. 
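To make the three capabilities above concrete, here is a minimal query sketch against the `Post`/`Comment` schema defined earlier in this guide. It is only an illustration: it assumes the fields shown in that schema (`id`, the derived `comments` field on `Post`, and the `post` field on `Comment`), and any other field names would need to match your actual schema.

```graphql
{
  # Fetch posts together with their derived comments (capability 1)
  posts(first: 5) {
    id
    comments {
      id
    }
  }
  # Reverse lookup: from a comment back to the post it belongs to (capability 2)
  comments(first: 5) {
    id
    post {
      id
    }
  }
}
```

Because `comments` is declared with `@derivedFrom`, the relationship is resolved at query time from the `Comment.post` field rather than from a stored array on `Post`.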
## Závěr -Adopting the `@derivedFrom` directive in subgraphs effectively handles dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. -To learn more detailed strategies to avoid large arrays, read this blog from Kevin Jones: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/cs/cookbook/enums.mdx b/website/pages/cs/cookbook/enums.mdx index a10970c1539f..71f3f784a0eb 100644 --- a/website/pages/cs/cookbook/enums.mdx +++ b/website/pages/cs/cookbook/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## Další zdroje For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). diff --git a/website/pages/cs/cookbook/grafting-hotfix.mdx b/website/pages/cs/cookbook/grafting-hotfix.mdx index 4be0a0b07790..e93c527fa90c 100644 --- a/website/pages/cs/cookbook/grafting-hotfix.mdx +++ b/website/pages/cs/cookbook/grafting-hotfix.mdx @@ -1,12 +1,12 @@ --- -Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment --- ## TLDR Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. -### Overview +### Přehled This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. @@ -154,7 +154,7 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec - **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. - **Testing**: Always test grafting in a development environment before deploying to production. -## Conclusion +## Závěr Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: @@ -164,7 +164,7 @@ Grafting is an effective strategy for deploying hotfixes in subgraph development However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
-## Additional Resources +## Další zdroje - **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. @@ -173,14 +173,14 @@ By incorporating grafting into your subgraph development workflow, you can enhan ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/cs/cookbook/grafting.mdx b/website/pages/cs/cookbook/grafting.mdx index b68cbe3707c2..637a85c5774e 100644 --- a/website/pages/cs/cookbook/grafting.mdx +++ b/website/pages/cs/cookbook/grafting.mdx @@ -22,15 +22,15 @@ Další informace naleznete na: - [Roubování](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -V tomto tutoriálu se budeme zabývat základním případem použití. Nahradíme stávající smlouvu identickou smlouvou (s novou adresou, ale stejným kódem). Poté naroubujeme stávající podgraf na "základní" podgraf, který sleduje nový kontrakt. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Důležité upozornění k roubování při aktualizaci na síť -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Upozornění**: Doporučujeme nepoužívat roubování pro podgrafy publikované v síti grafů ### Proč je to důležité? -Štěpování je výkonná funkce, která umožňuje "naroubovat" jeden podgraf na druhý, čímž efektivně přenese historická data ze stávajícího podgrafu do nové verze. Ačkoli se jedná o účinný způsob, jak zachovat data a ušetřit čas při indexování, roubování může přinést složitosti a potenciální problémy při migraci z hostovaného prostředí do decentralizované sítě. Podgraf není možné naroubovat ze sítě The Graph Network zpět do hostované služby nebo do aplikace Subgraph Studio. +Štěpování je výkonná funkce, která umožňuje "naroubovat" jeden podgraf na druhý, čímž efektivně přenese historická data ze stávajícího podgrafu do nové verze. Podgraf není možné naroubovat ze Sítě grafů zpět do Podgraf Studio. 
### Osvědčené postupy @@ -42,7 +42,7 @@ Dodržováním těchto pokynů minimalizujete rizika a zajistíte hladší průb ## Vytvoření existujícího podgrafu -Building subgraphs is an essential part of The Graph, described more in depth [here](/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Vytváření podgrafů je důležitou součástí Grafu, která je podrobněji popsána [zde](/quick-start/). Aby bylo možné sestavit a nasadit existující podgraf použitý v tomto tutoriálu, je k dispozici následující repozitář: - [Příklad repo subgrafu](https://github.com/Shiyasmohd/grafting-tutorial) @@ -80,7 +80,7 @@ dataSources: ``` - Zdroj dat `Lock` je adresa abi a smlouvy, kterou získáme při kompilaci a nasazení smlouvy -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - Sekce `mapování` definuje spouštěče, které vás zajímají, a funkce, které by měly být spuštěny v reakci na tyto spouštěče. V tomto případě nasloucháme na událost `Výstup` a po jejím vyslání voláme funkci `obsluhovatVýstup`. ## Definice manifestu roubování @@ -96,14 +96,14 @@ graft: block: 5956000 # block number ``` -- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). +- `funkce:` je seznam všech použitých [jmen funkcí](/developing/creating-a-subgraph/#experimental-features). - `graft:` je mapa subgrafu `base` a bloku, na který se má roubovat. `block` je číslo bloku, od kterého začít indexovat. Graph zkopíruje data základního subgrafu až k zadanému bloku včetně, a poté pokračuje v indexaci nového subgrafu od tohoto bloku dále. Hodnoty `base` a `block` lze nalézt nasazením dvou podgrafů: jednoho pro základní indexování a druhého s roubováním ## Nasazení základního podgrafu -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` +1. Přejděte do [Podgraf Studio](https://thegraph.com/studio/) a vytvořte podgraf v testovací síti Sepolia s názvem `graft-example` 2. Následujte pokyny v části `AUTH & DEPLOY` na stránce vašeho subgrafu v adresáři `graft-example` ve vašem repozitáři 3. Po dokončení ověřte, zda se podgraf správně indexuje. Pokud spustíte následující příkaz v The Graph Playground @@ -144,8 +144,8 @@ Jakmile ověříte, že se podgraf správně indexuje, můžete jej rychle aktua Náhradní podgraf.yaml bude mít novou adresu smlouvy. K tomu může dojít při aktualizaci dapp, novém nasazení kontraktu atd. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. +1. Přejděte do [Podgraf Studio](https://thegraph.com/studio/) a vytvořte podgraf v testovací síti Sepolia s názvem `graft-replacement` +2. Vytvořte nový manifest. 
Soubor `subgraph.yaml` pro `graph-replacement` obsahuje jinou adresu kontraktu a nové informace o tom, jak by měl být podgraf nasazen. Tyto informace zahrnují `block` [poslední emitovanou událost](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) od starého kontraktu a `base` starého podgrafu. ID `base` podgrafu je `Deployment ID` vašeho původního `graph-example` subgrafu. To můžete najít v Podgraf Studiu.
3. Postupujte podle pokynů v části `AUTH & DEPLOY` na stránce podgrafu ve složce `graft-replacement` z repozitáře
4. Po dokončení ověřte, zda se podgraf správně indexuje. Pokud spustíte následující příkaz v The Graph Playground

@@ -185,18 +185,18 @@ Měla by vrátit následující:
 }
 ```

-You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph.
+Vidíte, že podgraf `graft-replacement` indexuje ze starších dat `graph-example` a novějších dat z nové adresy smlouvy. Původní smlouva emitovala dvě události `Withdrawal`, [Událost 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) a [Událost 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). Nová smlouva emitovala jednu událost `Withdrawal` poté, [Událost 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). Dvě dříve indexované transakce (Událost 1 a 2) a nová transakce (Událost 3) byly spojeny dohromady v podgrafu `graft-replacement`.

-Congrats! You have successfully grafted a subgraph onto another subgraph.
+Gratulujeme! Úspěšně jste naroubovali podgraf na jiný podgraf.

## Další zdroje

-Pokud chcete získat více zkušeností s roubováním, zde je několik příkladů oblíbených smluv:
+If you want more experience with grafting, here are a few examples for popular contracts:

- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml),

-To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results
+Chcete-li se stát ještě větším odborníkem na graf, zvažte možnost seznámit se s dalšími způsoby zpracování změn v podkladových zdrojích dat. 
Alternativy jako [Šablony zdroje dat](/developing/creating-a-subgraph/#data-source-templates) mohou dosáhnout podobných výsledků > Poznámka: Mnoho materiálů z tohoto článku bylo převzato z dříve publikovaného [článku Arweave](/cookbook/arweave/) diff --git a/website/pages/cs/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx b/website/pages/cs/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx index 6864eef796ff..12a504471cb7 100644 --- a/website/pages/cs/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx +++ b/website/pages/cs/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx @@ -1,48 +1,48 @@ --- -title: How to Secure API Keys Using Next.js Server Components +title: Jak zabezpečit klíče API pomocí komponent serveru Next.js --- ## Přehled -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +K řádnému zabezpečení našeho klíče API před odhalením ve frontendu naší aplikace můžeme použít [komponenty serveru Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components). Pro další zvýšení zabezpečení našeho klíče API můžeme také [omezit náš klíč API na určité podgrafy nebo domény v Podgraf Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +V této kuchařce probereme, jak vytvořit serverovou komponentu Next.js, která se dotazuje na podgraf a zároveň skrývá klíč API před frontend. -### Caveats +### Upozornění -- Next.js server components do not protect API keys from being drained using denial of service attacks. -- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. -- Next.js server components introduce centralization risks as the server can go down. +- Součásti serveru Next.js nechrání klíče API před odčerpáním pomocí útoků typu odepření služby. +- Brány Graf síť mají zavedené strategie detekce a zmírňování odepření služby, avšak použití serverových komponent může tyto ochrany oslabit. +- Server komponenty Next.js přinášejí rizika centralizace, protože může dojít k výpadku serveru. -### Why It's Needed +### Proč je to důležité -In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. +Ve standardní aplikaci React mohou být klíče API obsažené v kódu frontendu vystaveny na straně klienta, což představuje bezpečnostní riziko. Soubory `.env` se sice běžně používají, ale plně klíče nechrání, protože kód Reactu se spouští na straně klienta a vystavuje klíč API v hlavičkách. Serverové komponenty Next.js tento problém řeší tím, že citlivé operace zpracovávají na straně serveru. 
-### Using client-side rendering to query a subgraph +### Použití vykreslování na straně klienta k dotazování podgrafu ![Client-side rendering](/img/api-key-client-side-rendering.png) ### Požadavky -- An API key from [Subgraph Studio](https://thegraph.com/studio) -- Basic knowledge of Next.js and React. -- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). +- Klíč API od [Subgraph Studio](https://thegraph.com/studio) +- Základní znalosti Next.js a React. +- Existující projekt Next.js, který používá [App Router](https://nextjs.org/docs/app). -## Step-by-Step Cookbook +## Kuchařka krok za krokem -### Step 1: Set Up Environment Variables +### Krok 1: Nastavení proměnných prostředí -1. In our Next.js project root, create a `.env.local` file. -2. Add our API key: `API_KEY=`. +1. V kořeni našeho projektu Next.js vytvořte soubor `.env.local`. +2. Přidejte náš klíč API: `API_KEY=`. -### Step 2: Create a Server Component +### Krok 2: Vytvoření součásti serveru -1. In our `components` directory, create a new file, `ServerComponent.js`. -2. Use the provided example code to set up the server component. +1. V adresáři `components` vytvořte nový soubor `ServerComponent.js`. +2. K nastavení komponenty serveru použijte přiložený ukázkový kód. -### Step 3: Implement Server-Side API Request +### Krok 3: Implementace požadavku API na straně serveru -In `ServerComponent.js`, add the following code: +Do souboru `ServerComponent.js` přidejte následující kód: ```javascript const API_KEY = process.env.API_KEY @@ -95,10 +95,10 @@ export default async function ServerComponent() { } ``` -### Step 4: Use the Server Component +### Krok 4: Použití komponenty serveru -1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. -2. Render the component: +1. V našem souboru stránky (např. `pages/index.js`) importujte `ServerComponent`. +2. Vykreslení komponenty: ```javascript import ServerComponent from './components/ServerComponent' @@ -112,12 +112,12 @@ export default function Home() { } ``` -### Step 5: Run and Test Our Dapp +### Krok 5: Spusťte a otestujte náš Dapp -Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. +Spusťte naši aplikaci Next.js pomocí `npm run dev`. Ověřte, že serverová komponenta načítá data bez vystavení klíče API. ![Server-side rendering](/img/api-key-server-side-rendering.png) ### Závěr -By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/cookbook/upgrading-a-subgraph/#securing-your-api-key) to increase your API key security even further. +Použitím serverových komponent Next.js jsme efektivně skryli klíč API před klientskou stranou, čímž jsme zvýšili bezpečnost naší aplikace. Tato metoda zajišťuje, že citlivé operace jsou zpracovávány na straně serveru, mimo potenciální zranitelnosti na straně klienta. Nakonec nezapomeňte prozkoumat [další opatření pro zabezpečení klíče API](/cookbook/upgrading-a-subgraph/#securing-your-api-key), abyste ještě více zvýšili zabezpečení svého klíče API. 
diff --git a/website/pages/cs/cookbook/immutable-entities-bytes-as-ids.mdx b/website/pages/cs/cookbook/immutable-entities-bytes-as-ids.mdx index 378e73ac83b8..620906f8cf65 100644 --- a/website/pages/cs/cookbook/immutable-entities-bytes-as-ids.mdx +++ b/website/pages/cs/cookbook/immutable-entities-bytes-as-ids.mdx @@ -1,14 +1,14 @@ --- -title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs +title: Osvědčený postup 3 - Zlepšení indexování a výkonu dotazů pomocí neměnných entit a bytů jako ID --- ## TLDR -Using Immutable Entities and Bytes for IDs in our `schema.graphql` file [significantly improves ](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/) indexing speed and query performance. +Použití neměnných entit a bytů pro ID v našem souboru `schema.graphql` [výrazně zlepšuje ](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/) rychlost indexování a výkonnost dotazů. -## Immutable Entities +## Nezměnitelné entity -To make an entity immutable, we simply add `(immutable: true)` to an entity. +Aby byla entita neměnná, jednoduše k ní přidáme `(immutable: true)`. ```graphql type Transfer @entity(immutable: true) { @@ -19,21 +19,21 @@ type Transfer @entity(immutable: true) { } ``` -By making the `Transfer` entity immutable, graph-node is able to process the entity more efficiently, improving indexing speeds and query responsiveness. +Tím, že je entita `Transfer` neměnná, je grafový uzel schopen ji zpracovávat efektivněji, což zvyšuje rychlost indexování a odezvu dotazů. -Immutable Entities structures will not change in the future. An ideal entity to become an Immutable Entity would be an entity that is directly logging on-chain event data, such as a `Transfer` event being logged as a `Transfer` entity. +Struktury neměnných entit se v budoucnu nezmění. Ideální entitou, která by se měla stát nezměnitelnou entitou, by byla entita, která přímo zaznamenává data událostí v řetězci, například událost `Převod` by byla zaznamenána jako entita `Převod`. -### Under the hood +### Pod kapotou -Mutable entities have a 'block range' indicating their validity. Updating these entities requires the graph node to adjust the block range of previous versions, increasing database workload. Queries also need filtering to find only live entities. Immutable entities are faster because they are all live and since they won't change, no checks or updates are required while writing, and no filtering is required during queries. +Mutabilní entity mají "rozsah bloku", který udává jejich platnost. Aktualizace těchto entit vyžaduje, aby uzel grafu upravil rozsah bloků předchozích verzí, což zvyšuje zatížení databáze. Dotazy je také třeba filtrovat, aby byly nalezeny pouze živé entity. Neměnné entity jsou rychlejší, protože jsou všechny živé, a protože se nebudou měnit, nejsou při zápisu nutné žádné kontroly ani aktualizace a při dotazech není nutné žádné filtrování. -### When not to use Immutable Entities +### Kdy nepoužívat nezměnitelné entity -If you have a field like `status` that needs to be modified over time, then you should not make the entity immutable. Otherwise, you should use immutable entities whenever possible. +Pokud máte pole, jako je `status`, které je třeba v průběhu času měnit, neměli byste entitu učinit neměnnou. Jinak byste měli používat neměnné entity, kdykoli je to možné. -## Bytes as IDs +## Bajty jako IDs -Every entity requires an ID. 
In the previous example, we can see that the ID is already of the Bytes type. +Každá entita vyžaduje ID. V předchozím příkladu vidíme, že ID je již typu Bytes. ```graphql type Transfer @entity(immutable: true) { @@ -44,19 +44,19 @@ type Transfer @entity(immutable: true) { } ``` -While other types for IDs are possible, such as String and Int8, it is recommended to use the Bytes type for all IDs due to character strings taking twice as much space as Byte strings to store binary data, and comparisons of UTF-8 character strings must take the locale into account which is much more expensive than the bytewise comparison used to compare Byte strings. +I když jsou možné i jiné typy ID, například String a Int8, doporučuje se pro všechna ID používat typ Bytes, protože pro uložení binárních dat zabírají znakové řetězce dvakrát více místa než řetězce Byte a při porovnávání znakových řetězců UTF-8 se musí brát v úvahu locale, což je mnohem dražší než bytewise porovnávání používané pro porovnávání řetězců Byte. -### Reasons to Not Use Bytes as IDs +### Důvody, proč nepoužívat bajty jako IDs -1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. -3. Indexing and querying performance improvements are not desired. +1. Pokud musí být IDs entit čitelné pro člověka, například automaticky doplňované číselné IDs nebo čitelné řetězce, neměly by být použity bajty pro IDs. +2. Při integraci dat podgrafu s jiným datovým modelem, který nepoužívá bajty jako IDs, by se bajty jako IDs neměly používat. +3. Zlepšení výkonu indexování a dotazování není žádoucí. -### Concatenating With Bytes as IDs +### Konkatenace s byty jako IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +V mnoha podgrafech se běžně používá spojování řetězců ke spojení dvou vlastností události do jediného ID, například pomocí `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Protože se však tímto způsobem vrací řetězec, značně to zhoršuje indexování podgrafů a výkonnost dotazování. -Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. +Místo toho bychom měli použít metodu `concatI32()` pro spojování vlastností událostí. Výsledkem této strategie je ID `Bytes`, které je mnohem výkonnější. ```typescript export function handleTransfer(event: TransferEvent): void { @@ -73,11 +73,11 @@ export function handleTransfer(event: TransferEvent): void { } ``` -### Sorting With Bytes as IDs +### Třídění s bajty jako ID -Sorting using Bytes as IDs is not optimal as seen in this example query and response. +Třídění pomocí bajtů jako IDs není optimální, jak je vidět v tomto příkladu dotazu a odpovědi. -Query: +Dotaz: ```graphql { @@ -90,7 +90,7 @@ Query: } ``` -Query response: +Odpověď na dotaz: ```json { @@ -119,9 +119,9 @@ Query response: } ``` -The IDs are returned as hex. +ID jsou vrácena v hex. -To improve sorting, we should create another field on the entity that is a BigInt. +Abychom zlepšili třídění, měli bychom v entitě vytvořit další pole, které bude BigInt. 
```graphql type Transfer @entity { @@ -133,9 +133,9 @@ type Transfer @entity { } ``` -This will allow for sorting to be optimized sequentially. +To umožní postupnou optimalizaci třídění. -Query: +Dotaz: ```graphql { @@ -146,7 +146,7 @@ Query: } ``` -Query Response: +Odpověď na dotaz: ```json { @@ -171,6 +171,20 @@ Query Response: ## Závěr -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Bylo prokázáno, že použití neměnných entit i bytů jako ID výrazně zvyšuje efektivitu podgrafů. Testy konkrétně ukázaly až 28% nárůst výkonu dotazů a až 48% zrychlení indexace. -Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). +Více informací o používání nezměnitelných entit a bytů jako ID najdete v tomto příspěvku na blogu Davida Lutterkorta, softwarového inženýra ve společnosti Edge & Node: [Dvě jednoduchá vylepšení výkonu podgrafu](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/cs/cookbook/near.mdx b/website/pages/cs/cookbook/near.mdx index 3044e8d66d81..ac95d0149954 100644 --- a/website/pages/cs/cookbook/near.mdx +++ b/website/pages/cs/cookbook/near.mdx @@ -6,7 +6,7 @@ Tato příručka je úvodem do vytváření subgrafů indexujících chytré kon ## Co je NEAR? -[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. +[NEAR](https://near.org/) je platforma pro chytré smlouvy, která slouží k vytváření decentralizovaných aplikací. Další informace najdete v [oficiální dokumentaci](https://docs.near.org/concepts/basics/protocol). ## Co jsou podgrafy NEAR? @@ -17,7 +17,7 @@ Podgrafy jsou založeny na událostech, což znamená, že naslouchají událost - Obsluhy bloků: jsou spouštěny při každém novém bloku. - Obsluhy příjmu: spouštějí se pokaždé, když je zpráva provedena na zadaném účtu. -[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): +[Z dokumentace NEAR](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): > Příjemka je jediným objektem, který lze v systému použít. Když na platformě NEAR hovoříme o "zpracování transakce", znamená to v určitém okamžiku "použití účtenky". @@ -37,7 +37,7 @@ Definice podgrafů má tři aspekty: **schema.graphql:** soubor se schématem, který definuje, jaká data jsou uložena pro váš podgraf, a jak je možné je dotazovat pomocí GraphQL. Požadavky na podgrafy NEAR jsou pokryty [existující dokumentací](/developing/creating-a-subgraph#the-graphql-schema). 
-**Mapování v jazyce AssemblyScript:** [Kód jazyka AssemblyScript](/developing/graph-ts/api), který převádí data událostí na entity definované ve vašem schématu. Podpora NEAR zavádí datové typy specifické pro NEAR a nové funkce pro parsování JSON. +**Mapování AssemblyScript:** [Kód AssemblyScript](/developing/graph-ts/api), který převádí data událostí na entity definované ve vašem schématu. Podpora NEAR zavádí datové typy specifické pro NEAR a nové funkce pro parsování JSON. Při vývoji podgrafů existují dva klíčové příkazy: @@ -71,8 +71,8 @@ dataSources: ``` - Podgrafy NEAR představují nový `druh` zdroje dat (`near`) -- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` -- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. +- `Síť` by měla odpovídat síti v hostitelském uzlu Graf. V Podgraf Studio je hlavní síť NEAR `near-mainnet` a testovací síť NEAR je `near-testnet` +- Zdroje dat NEAR zavádějí volitelné pole `source.account`, které je čitelným ID odpovídajícím [účtu NEAR](https://docs.near.org/concepts/protocol/account-model). Může to být účet nebo podúčet. - NEAR datové zdroje představují alternativní volitelné pole `source.accounts`, které obsahuje volitelné přípony a předpony. Musí být specifikována alespoň jedna z předpony nebo přípony, které odpovídají jakémukoli účtu začínajícímu nebo končícímu uvedenými hodnotami. Příklad níže by odpovídal: `[app|good].*[morning.near|morning.testnet]`. Pokud je potřeba pouze seznam předpon nebo přípon, druhé pole lze vynechat. ```yaml @@ -88,7 +88,7 @@ accounts: Zdroje dat NEAR podporují dva typy zpracovatelů: - `blockHandlers`: spustí se na každém novém bloku NEAR. Není vyžadován žádný `source.account`. -- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). +- `receiptHandlers`: spustí se na každé příjemce, kde je `účet zdroje dat` příjemcem. Všimněte si, že se zpracovávají pouze přesné shody ([podúčty](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) musí být přidány jako nezávislé zdroje dat). ### Definice schématu @@ -98,7 +98,7 @@ Definice schématu popisuje strukturu výsledné databáze podgrafů a vztahy me Obslužné programy pro zpracování událostí jsou napsány v jazyce [AssemblyScript](https://www.assemblyscript.org/). -Indexování NEAR zavádí do rozhraní [AssemblyScript API](/developing/graph-ts/api) datové typy specifické pro NEAR. +Indexování NEAR zavádí do [API AssemblyScript](/developing/graph-ts/api) datové typy specifické pro NEAR. ```typescript @@ -165,9 +165,9 @@ Tyto typy jsou předány do block & obsluha účtenek: - Obsluhy bloků obdrží `Block` - Obsluhy příjmu obdrží `ReceiptWithOutcome` -Jinak je zbytek [AssemblyScript API](/developing/graph-ts/api) dostupný vývojářům podgrafů NEAR během provádění mapování. +V opačném případě mají vývojáři podgrafů NEAR během provádění mapování k dispozici zbytek [AssemblyScript API](/developing/graph-ts/api). -To zahrnuje novou funkci parsování JSON - záznamy na NEAR jsou často vysílány ve formě zřetězených JSON. 
Nová funkce `json.fromString(...)` je k dispozici jako součást [JSON API](/developing/graph-ts/api#json-api), které umožňuje vývojářům snadno zpracovávat tyto záznamy. +To zahrnuje novou funkci pro parsování JSON - log na NEAR jsou často emitovány jako serializované JSONs. Nová funkce `json.fromString(...)` je k dispozici jako součást [JSON API](/developing/graph-ts/api#json-api), která umožňuje vývojářům snadno zpracovávat tyto log. ## Nasazení podgrafu NEAR @@ -194,8 +194,8 @@ Konfigurace uzlů závisí na tom, kde je podgraf nasazen. ### Podgraf Studio ```sh -graph auth --studio -graph deploy --studio +graph auth +graph deploy ``` ### Místní uzel grafu (na základě výchozí konfigurace) @@ -232,7 +232,7 @@ Koncový bod GraphQL pro podgrafy NEAR je určen definicí schématu se stávaj ## Příklady podgrafů -Here are some example subgraphs for reference: +Zde je několik příkladů podgrafů: [NEAR bloky](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) diff --git a/website/pages/cs/cookbook/pruning.mdx b/website/pages/cs/cookbook/pruning.mdx index 7533d0070737..32ffb4e5450a 100644 --- a/website/pages/cs/cookbook/pruning.mdx +++ b/website/pages/cs/cookbook/pruning.mdx @@ -1,22 +1,22 @@ --- -title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning +title: Doporučený postup 1 - Zlepšení rychlosti dotazu pomocí ořezávání podgrafů --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) odstraní archivní entity z databáze podgrafu až do daného bloku a odstranění nepoužívaných entit z databáze podgrafu zlepší výkonnost dotazu podgrafu, často výrazně. Použití `indexerHints` je snadný způsob, jak podgraf ořezat. -## How to Prune a Subgraph With `indexerHints` +## Jak prořezat podgraf pomocí `indexerHints` -Add a section called `indexerHints` in the manifest. +Přidejte do manifestu sekci `indexerHints`. -`indexerHints` has three `prune` options: +`indexerHints` má tři možnosti `prune`: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. -- `prune: `: Sets a custom limit on the number of historical blocks to retain. -- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/querying/graphql-api/#time-travel-queries) are desired. +- `prune: auto`: Udržuje minimální potřebnou historii nastavenou indexátorem, čímž optimalizuje výkon dotazu. Toto je obecně doporučené nastavení a je výchozí pro všechny podgrafy vytvořené pomocí `graph-cli` >= 0.66.0. +- `prune: `: Nastaví vlastní omezení počtu historických bloků, které se mají zachovat. +- `prune: never`: Je výchozí, pokud není k dispozici sekce `indexerHints`. `prune: never` by mělo být vybráno, pokud jsou požadovány [Dotazy na cestování časem](/querying/graphql-api/#time-travel-queries). 
-We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +Aktualizací souboru `subgraph.yaml` můžeme do podgrafů přidat `indexerHints`: ```yaml specVersion: 1.0.0 @@ -30,12 +30,26 @@ dataSources: network: mainnet ``` -## Important Considerations +## Důležité úvahy -- If [Time Travel Queries](/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries. Instead, prune using `indexerHints: prune: ` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data. +- Pokud jsou kromě ořezávání požadovány i [dotazy na cestování v čase](/querying/graphql-api/#time-travel-queries), musí být ořezávání provedeno přesně, aby byla zachována funkčnost dotazů na cestování v čase. Z tohoto důvodu se obecně nedoporučuje používat `indexerHints: prune: auto` s Time Travel Queries. Místo toho proveďte ořezávání pomocí `indexerHints: prune: ` pro přesné ořezání na výšku bloku, která zachovává historická data požadovaná dotazy Time Travel, nebo použijte `prune: never` pro zachování všech dat. -- It is not possible to [graft](/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: ` that will accurately retain a set number of blocks (e.g., enough for six months). +- Není možné [roubovat](/cookbook/grafting/) na výšku bloku, který byl prořezán. Pokud se roubování provádí běžně a je požadováno prořezání, doporučuje se použít `indexerHints: prune: ` který přesně zachová stanovený počet bloků (např. dostatečný počet na šest měsíců). ## Závěr -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Ořezávání pomocí `indexerHints` je osvědčeným postupem pro vývoj podgrafů, který nabízí významné zlepšení výkonu dotazů. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/cs/cookbook/subgraph-debug-forking.mdx b/website/pages/cs/cookbook/subgraph-debug-forking.mdx index e0e3e2a69641..84df907a9602 100644 --- a/website/pages/cs/cookbook/subgraph-debug-forking.mdx +++ b/website/pages/cs/cookbook/subgraph-debug-forking.mdx @@ -2,7 +2,7 @@ title: Rychlé a snadné ladění podgrafů pomocí vidliček --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. 
This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! +Stejně jako u mnoha systémů zpracovávajících velké množství dat může indexerům grafu (Graph Nodes) trvat poměrně dlouho, než synchronizují váš podgraf s cílovým blockchainem. Nesoulad mezi rychlými změnami za účelem ladění a dlouhými čekacími dobami potřebnými pro indexaci je extrémně kontraproduktivní a jsme si toho dobře vědomi. To je důvod, proč představujeme **rozvětvování podgrafů**, vyvinutý společností [LimeChain](https://limechain.tech/), a v tomto článku Ukážu vám, jak lze tuto funkci použít k podstatnému zrychlení ladění podgrafů! ## Ok, co to je? @@ -12,9 +12,9 @@ V kontextu ladění vám ** vidličkování podgrafů** umožňuje ladit neúsp ## Co?! Jak? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +Když nasadíte podgraf do vzdáleného uzlu Graf pro indexování a ten selže v bloku _X_, dobrou zprávou je, že uzel Graf bude stále obsluhovat dotazy GraphQL pomocí svého úložiště, které je synchronizováno s blokem _X_. To je skvělé! To znamená, že můžeme využít tohoto "aktuálního" úložiště k opravě chyb vznikajících při indexování bloku _X_. -In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +Stručně řečeno, _rozvětvíme neúspěšný podgraf_ ze vzdáleného uzlu grafu, u kterého je zaručeno, že podgraf bude indexován až do bloku _X_, abychom lokálně nasazenému podgrafu laděnému v bloku _X_ poskytli aktuální pohled na stav indexování. ## Ukažte mi prosím nějaký kód! @@ -44,12 +44,12 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, jak nešťastné, když jsem nasadil můj perfektně vypadající podgraf do [Podgraf Studio](https://thegraph.com/studio/), selhalo to s chybou _"Gravatar nenalezen!"_. Obvyklý způsob, jak se pokusit o opravu, je: 1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší (zatímco já vím, že ne). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Znovu nasaďte podgraf do [Subgraph Studio](https://thegraph.com/studio/) (nebo jiného vzdáleného uzlu Graf). 3. Počkejte na synchronizaci. 4. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá! @@ -59,7 +59,7 @@ Pomocí **vidličkování podgrafů** můžeme tento krok v podstatě eliminovat 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Nasazení do místního uzlu Graf, **_forking selhávajícího podgrafu_** a **_zahájení od problematického bloku_**. 3. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá! 
Nyní můžete mít 2 otázky: @@ -80,7 +80,7 @@ Nezapomeňte také nastavit pole `dataSources.source.startBlock` v manifestu pod Takže to dělám takhle: -1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. Spustím místní uzel Graf ([zde je návod, jak to udělat](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) s volbou `fork-base` nastavenou na: `https://api.thegraph.com/subgraphs/id/`, protože budu forkovat podgraf, ten chybný, který jsem nasadil dříve, z [Podgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. Po pečlivém prozkoumání si všímám, že existuje nesoulad v reprezentacích `id`, které se používají při indexaci `Gravatar` v mých dvou obslužných funkcích. Zatímco `handleNewGravatar` ho převede na hex (`event.params.id.toHex()`), `handleUpdatedGravatar` používá int32 (`event.params.id.toI32()`), což způsobuje, že `handleUpdatedGravatar` selže s chybou "Gravatar nenalezen!". Udělám, aby obě převedly `id` na hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. Po provedení změn jsem nasadil svůj podgraf do místního uzlu Graf, **_rozvětveníl selhávající podgraf_** a nastavil `dataSources.source.startBlock` na `6190343` v `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` -4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +4. Zkontroluji protokoly vytvořené místním graf uzlem a hurá, zdá se, že vše funguje. +5. Nasadím svůj nyní již bezchybný podgraf do vzdáleného uzlu Graf a žiji šťastně až do smrti! (bez brambor) diff --git a/website/pages/cs/cookbook/subgraph-uncrashable.mdx b/website/pages/cs/cookbook/subgraph-uncrashable.mdx index 1c2b3d9e4dad..13c979d18853 100644 --- a/website/pages/cs/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/cs/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Generátor kódu bezpečného podgrafu - Framework také obsahuje způsob (prostřednictvím konfiguračního souboru), jak vytvořit vlastní, ale bezpečné funkce setteru pro skupiny proměnných entit. Tímto způsobem není možné, aby uživatel načetl/použil zastaralou entitu grafu, a také není možné zapomenout uložit nebo nastavit proměnnou, kterou funkce vyžaduje. -- Varovné protokoly se zaznamenávají jako protokoly označující místa, kde došlo k porušení logiky podgrafu, aby bylo možné problém opravit a zajistit přesnost dat. Tyto protokoly lze zobrazit v hostované službě The Graph v části 'Logs' sekce. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Podgraf Uncrashable lze spustit jako volitelný příznak pomocí příkazu Graph CLI codegen. 
diff --git a/website/pages/cs/cookbook/timeseries.mdx b/website/pages/cs/cookbook/timeseries.mdx index 88ee70005a6e..48be01215291 100644 --- a/website/pages/cs/cookbook/timeseries.mdx +++ b/website/pages/cs/cookbook/timeseries.mdx @@ -6,7 +6,7 @@ title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggr Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. -## Overview +## Přehled Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. @@ -27,7 +27,7 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri - Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. - Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. -### Important Considerations +### Důležité úvahy - Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. - Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. @@ -44,7 +44,7 @@ A timeseries entity represents raw data points collected over time. It is define - `id`: Must be of type `Int8!` and is auto-incremented. - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. -Example: +Příklad: ```graphql type Data @entity(timeseries: true) { @@ -61,7 +61,7 @@ An aggregation entity computes aggregated values from a timeseries source. It is - Annotation Arguments: - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). -Example: +Příklad: ```graphql type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { @@ -77,7 +77,7 @@ In this example, Stats aggregates the price field from Data over hourly and dail Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. -Example: +Příklad: ```graphql { @@ -101,7 +101,7 @@ Example: Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. -Example: +Příklad: ### Timeseries Entity @@ -169,7 +169,7 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar - Sorting: Results are automatically sorted by timestamp and id in descending order. - Current Data: An optional current argument can include the current, partially filled interval. -### Conclusion +### Závěr Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: @@ -181,14 +181,14 @@ By adopting this pattern, developers can build more efficient and scalable subgr ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/cs/cookbook/transfer-to-the-graph.mdx b/website/pages/cs/cookbook/transfer-to-the-graph.mdx index 287cd7d81b4b..e0b9a1fbfe53 100644 --- a/website/pages/cs/cookbook/transfer-to-the-graph.mdx +++ b/website/pages/cs/cookbook/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -31,7 +31,7 @@ You must have [Node.js](https://nodejs.org/) and a package manager of your choic On your local machine, run the following command: -Using [npm](https://www.npmjs.com/): +Použitím [npm](https://www.npmjs.com/): ```sh npm install -g @graphprotocol/graph-cli@latest @@ -48,7 +48,7 @@ graph init --product subgraph-studio In The Graph CLI, use the auth command seen in Subgraph Studio: ```sh -graph auth --studio +graph auth ``` ## 2. Deploy Your Subgraph to Studio @@ -58,7 +58,7 @@ If you have your source code, you can easily deploy it to Studio. If you don't h In The Graph CLI, run the following command: ```sh -graph deploy --studio --ipfs-hash +graph deploy --ipfs-hash ``` @@ -74,7 +74,7 @@ graph deploy --studio --ipfs-hash You can start [querying](/querying/querying-the-graph/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. -#### Example +#### Příklad [CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### Další zdroje - To quickly create and publish a new subgraph, check out the [Quick Start](/quick-start/). - To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). 
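+
+> As a hedged illustration of the querying step above, the script below POSTs a GraphQL query to a subgraph's query URL. The gateway URL shape and `<SUBGRAPH_ID>` are placeholders (use the exact query URL from the subgraph's Explorer page), and `_meta` is queried only because it is available on every subgraph.
+
+```typescript
+// query-subgraph.ts: a minimal sketch, not a drop-in script.
+const API_KEY = process.env.GRAPH_API_KEY
+const QUERY_URL = `https://gateway.thegraph.com/api/${API_KEY}/subgraphs/id/<SUBGRAPH_ID>`
+
+async function main(): Promise<void> {
+  const response = await fetch(QUERY_URL, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({
+      query: '{ _meta { block { number } hasIndexingErrors } }',
+    }),
+  })
+  const result = await response.json()
+  console.log(JSON.stringify(result, null, 2))
+}
+
+main().catch(console.error)
+```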
diff --git a/website/pages/cs/deploying/deploy-using-subgraph-studio.mdx b/website/pages/cs/deploying/deploy-using-subgraph-studio.mdx index 502169b4ccfa..160322951c6f 100644 --- a/website/pages/cs/deploying/deploy-using-subgraph-studio.mdx +++ b/website/pages/cs/deploying/deploy-using-subgraph-studio.mdx @@ -12,13 +12,13 @@ In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: - View a list of subgraphs you've created - Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- Vytváření a správa klíčů API pro konkrétní podgrafy - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph through the Studio UI -- Deploy your subgraph using the The Graph CLI +- Create your subgraph +- Deploy your subgraph using The Graph CLI - Test your subgraph in the playground environment - Integrate your subgraph in staging using the development query URL -- Publish your subgraph with the Studio UI +- Publish your subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -27,21 +27,19 @@ Before deploying, you must install The Graph CLI. You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use The Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -**Install with yarn:** +### Install with yarn ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +### Install with npm ```bash npm install -g @graphprotocol/graph-cli ``` -## Create Your Subgraph - -Before deploying your subgraph you need to create an account in [Subgraph Studio](https://thegraph.com/studio/). +## Začněte 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. @@ -51,28 +49,28 @@ Before deploying your subgraph you need to create an account in [Subgraph Studio > Important: You need an API key to query subgraphs -### How to Create a Subgraph in Subgraph Studio +### Jak vytvořit podgraf v Podgraf Studio -> For additional written detail, review the [Quick-Start](/quick-start/). +> For additional written detail, review the [Quick Start](/quick-start/). -### Subgraph Compatibility with The Graph Network +### Kompatibilita podgrafů se sítí grafů -In order to be supported by Indexers on The Graph Network, subgraphs must: +Aby mohly být podgrafy podporovány indexátory v síti grafů, musí: - Index a [supported network](/developing/supported-networks) -- Must not use any of the following features: +- Nesmí používat žádnou z následujících funkcí: - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting + - Nefatální + - Roubování ## Initialize Your Subgraph Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash -graph init --studio +graph init ``` You can find the `` value on your subgraph details page in Subgraph Studio, see image below: @@ -81,26 +79,26 @@ You can find the `` value on your subgraph details page in Subgra After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. 
-## Graph Auth +## Autorizace grafu -Before you can deploy your subgraph to Subgraph Studio, you need to login into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. Then, use the following command to authenticate from the CLI: ```bash -graph auth --studio +graph auth ``` ## Deploying a Subgraph Once you are ready, you can deploy your subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. Use the following CLI command to deploy your subgraph: ```bash -graph deploy --studio +graph deploy ``` After running this command, the CLI will ask for a version label. @@ -126,11 +124,11 @@ If you want to update your subgraph, you can do the following: - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). - This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an on-chain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. > Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/network/curating/). -## Automatic Archiving of Subgraph Versions +## Automatická archivace verzí podgrafů Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. diff --git a/website/pages/cs/deploying/multiple-networks.mdx b/website/pages/cs/deploying/multiple-networks.mdx index dc2b8e533430..0c53c9686fb4 100644 --- a/website/pages/cs/deploying/multiple-networks.mdx +++ b/website/pages/cs/deploying/multiple-networks.mdx @@ -4,9 +4,9 @@ title: Deploying a Subgraph to Multiple Networks This page explains how to deploy a subgraph to multiple networks. 
To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). -## Deploying the subgraph to multiple networks +## Nasazení podgrafu do více sítí -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +V některých případech budete chtít nasadit stejný podgraf do více sítí, aniž byste museli duplikovat celý jeho kód. Hlavním problémem, který s tím souvisí, je skutečnost, že smluvní adresy v těchto sítích jsou různé. ### Using `graph-cli` @@ -69,7 +69,7 @@ dataSources: kind: ethereum/events ``` -This is what your networks config file should look like: +Takto by měl vypadat konfigurační soubor sítě: ```json { @@ -86,7 +86,7 @@ This is what your networks config file should look like: } ``` -Now we can run one of the following commands: +Nyní můžeme spustit jeden z následujících příkazů: ```sh # Using default networks.json file @@ -123,7 +123,7 @@ yarn deploy --network sepolia yarn deploy --network sepolia --network-file path/to/config ``` -### Using subgraph.yaml template +### Použití šablony subgraph.yaml One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). @@ -136,7 +136,7 @@ To illustrate this approach, let's assume a subgraph should be deployed to mainn } ``` -and +a ```json { @@ -195,7 +195,7 @@ A working example of this can be found [here](https://github.com/graphprotocol/e This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Zásady archivace subgrafů Subgraph Studio A subgraph version in Studio is archived if and only if it meets the following criteria: @@ -205,11 +205,11 @@ A subgraph version in Studio is archived if and only if it meets the following c In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Každý podgraf ovlivněný touto zásadou má možnost vrátit danou verzi zpět. -## Checking subgraph health +## Kontrola stavu podgrafů -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +Pokud se podgraf úspěšně synchronizuje, je to dobré znamení, že bude dobře fungovat navždy. Nové spouštěče v síti však mohou způsobit, že se podgraf dostane do neověřeného chybového stavu, nebo může začít zaostávat kvůli problémům s výkonem či operátory uzlů. Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. 
On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: diff --git a/website/pages/cs/developing/creating-a-subgraph/advanced.mdx b/website/pages/cs/developing/creating-a-subgraph/advanced.mdx new file mode 100644 index 000000000000..7d62e06bd1cd --- /dev/null +++ b/website/pages/cs/developing/creating-a-subgraph/advanced.mdx @@ -0,0 +1,555 @@ +--- +title: Advance Subgraph Features +--- + +## Přehled + +Add and implement advanced subgraph features to enhanced your subgraph's built. + +Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: + +| Feature | Name | +| ---------------------------------------------------- | ---------------- | +| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | +| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | + +For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - fullTextSearch + - nonFatalErrors +dataSources: ... +``` + +> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. + +## Timeseries and Aggregations + +Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, etc. + +This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the Timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. + +### Example Schema + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} + +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +### Defining Timeseries and Aggregations + +Timeseries entities are defined with `@entity(timeseries: true)` in schema.graphql. Every timeseries entity must have a unique ID of the int8 type, a timestamp of the Timestamp type, and include data that will be used for calculation by aggregation entities. These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the Aggregation entities. + +Aggregation entities are defined with `@aggregation` in schema.graphql. Every aggregation entity defines the source from which it will gather data (which must be a Timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. + +#### Available Aggregation Intervals + +- `hour`: sets the timeseries period every hour, on the hour. 
+- `day`: sets the timeseries period every day, starting and ending at 00:00. + +#### Available Aggregation Functions + +- `sum`: Total of all values. +- `count`: Number of values. +- `min`: Minimum value. +- `max`: Maximum value. +- `first`: First value in the period. +- `last`: Last value in the period. + +#### Example Aggregations Query + +```graphql +{ + stats(interval: "hour", where: { timestamp_gt: 1704085200 }) { + id + timestamp + sum + } +} +``` + +Note: + +To use Timeseries and Aggregations, a subgraph must have a spec version ≥1.1.0. Note that this feature might undergo significant changes that could affect backward compatibility. + +[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. + +## Nefatální + +Chyby indexování v již synchronizovaných podgrafech ve výchozím nastavení způsobí selhání podgrafy a zastavení synchronizace. Podgrafy lze alternativně nakonfigurovat tak, aby pokračovaly v synchronizaci i při přítomnosti chyb, a to ignorováním změn provedených obslužnou rutinou, která chybu vyvolala. To dává autorům podgrafů čas na opravu jejich podgrafů, zatímco dotazy jsou nadále obsluhovány proti poslednímu bloku, ačkoli výsledky mohou být nekonzistentní kvůli chybě, která chybu způsobila. Všimněte si, že některé chyby jsou stále fatální. Aby chyba nebyla fatální, musí být známo, že je deterministická. + +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. + +Povolení nefatálních chyb vyžaduje nastavení následujícího příznaku funkce v manifestu podgraf: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - nonFatalErrors + ... +``` + +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: + +```graphql +foos(first: 100, subgraphError: allow) { + id +} + +_meta { + hasIndexingErrors +} +``` + +If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: + +```graphql +"data": { + "foos": [ + { + "id": "0xdead" + } + ], + "_meta": { + "hasIndexingErrors": true + } +}, +"errors": [ + { + "message": "indexing_error" + } +] +``` + +## IPFS/Arweave File Data Sources + +Zdroje dat souborů jsou novou funkcí podgrafu pro přístup k datům mimo řetězec během indexování robustním a rozšiřitelným způsobem. Zdroje souborových dat podporují načítání souborů ze systému IPFS a z Arweave. + +> To také vytváří základ pro deterministické indexování dat mimo řetězec a potenciální zavedení libovolných dat ze zdrojů HTTP. + +### Přehled + +Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found. + +This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources. 
+ +> This replaces the existing `ipfs.cat` API + +### Průvodce upgradem + +#### Update `graph-ts` and `graph-cli` + +File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1 + +#### Přidání nového typu entity, který bude aktualizován při nalezení souborů + +Zdroje dat souborů nemohou přistupovat k entitám založeným na řetězci ani je aktualizovat, ale musí aktualizovat entity specifické pro soubor. + +To může znamenat rozdělení polí ze stávajících entit do samostatných entit, které budou vzájemně propojeny. + +Původní kombinovaný entita: + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + externalURL: String! + ipfsURI: String! + image: String! + name: String! + description: String! + type: String! + updatedAtTimestamp: BigInt + owner: User! +} +``` + +Nové, rozdělená entit: + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + ipfsURI: TokenMetadata + updatedAtTimestamp: BigInt + owner: String! +} + +type TokenMetadata @entity { + id: ID! + image: String! + externalURL: String! + name: String! + description: String! +} +``` + +Pokud je vztah mezi nadřazenou entitou a entitou výsledného zdroje dat souboru 1:1, je nejjednodušším vzorem propojení nadřazené entity s entitou výsledného souboru pomocí CID IPFS jako vyhledávacího prvku. Pokud máte potíže s modelováním nových entit založených na souborech, ozvěte se na Discord! + +> You can use [nested filters](/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities. + +#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave` + +Jedná se o zdroj dat, který bude vytvořen při identifikaci souboru zájmu. + +```yaml +templates: + - name: TokenMetadata + kind: file/ipfs + mapping: + apiVersion: 0.0.7 + language: wasm/assemblyscript + file: ./src/mapping.ts + handler: handleMetadata + entities: + - TokenMetadata + abis: + - name: Token + file: ./abis/Token.json +``` + +> Currently `abis` are required, though it is not possible to call contracts from within file data sources + +The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details. + +#### Vytvoření nové obslužné pro zpracování souborů + +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). 
+ +The CID of the file as a readable string can be accessed via the `dataSource` as follows: + +```typescript +const cid = dataSource.stringParam() +``` + +Příklad + +```typescript +import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' +import { TokenMetadata } from '../generated/schema' + +export function handleMetadata(content: Bytes): void { + let tokenMetadata = new TokenMetadata(dataSource.stringParam()) + const value = json.fromBytes(content).toObject() + if (value) { + const image = value.get('image') + const name = value.get('name') + const description = value.get('description') + const externalURL = value.get('external_url') + + if (name && image && description && externalURL) { + tokenMetadata.name = name.toString() + tokenMetadata.image = image.toString() + tokenMetadata.externalURL = externalURL.toString() + tokenMetadata.description = description.toString() + } + + tokenMetadata.save() + } +} +``` + +#### Spawn zdrojů dat souborů v případě potřeby + +Nyní můžete vytvářet zdroje dat souborů během provádění obslužných založených na řetězci: + +- Import the template from the auto-generated `templates` +- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave + +For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). + +For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing). + +Příklad: + +```typescript +import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' + +const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' +//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. + +export function handleTransfer(event: TransferEvent): void { + let token = Token.load(event.params.tokenId.toString()) + if (!token) { + token = new Token(event.params.tokenId.toString()) + token.tokenID = event.params.tokenId + + token.tokenURI = '/' + event.params.tokenId.toString() + '.json' + const tokenIpfsHash = ipfshash + token.tokenURI + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" + + token.ipfsURI = tokenIpfsHash + + TokenMetadataTemplate.create(tokenIpfsHash) + } + + token.updatedAtTimestamp = event.block.timestamp + token.owner = event.params.to.toHexString() + token.save() +} +``` + +Tím se vytvoří nový zdroj dat souborů, který bude dotazovat nakonfigurovaný koncový bod IPFS nebo Arweave grafického uzlu a v případě nenalezení se pokusí o opakování. Když je soubor nalezen, spustí se obslužná zdroje dat souboru. + +This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. 
+ +> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file + +Gratulujeme, používáte souborové zdroje dat! + +#### Nasazení podgrafů + +You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. + +#### Omezení + +Zpracovatelé a entity zdrojů dat souborů jsou izolovány od ostatních entit podgrafů, což zajišťuje, že jsou při provádění deterministické a nedochází ke kontaminaci zdrojů dat založených na řetězci. Přesněji řečeno: + +- Entity vytvořené souborovými zdroji dat jsou neměnné a nelze je aktualizovat +- Obsluhy zdrojů dat souborů nemohou přistupovat k entita z jiných zdrojů dat souborů +- K entita přidruženým k datovým zdrojům souborů nelze přistupovat pomocí zpracovatelů založených na řetězci + +> Ačkoli by toto omezení nemělo být pro většinu případů použití problematické, pro některé může představovat složitost. Pokud máte problémy s modelováním dat založených na souborech v podgrafu, kontaktujte nás prosím prostřednictvím služby Discord! + +Kromě toho není možné vytvářet zdroje dat ze zdroje dat souborů, ať už se jedná o zdroj dat v řetězci nebo jiný zdroj dat souborů. Toto omezení může být v budoucnu zrušeno. + +#### Osvědčené postupy + +Pokud propojovat metadata NFT s odpovídajícími tokeny, použijte hash IPFS metadat k odkazu na entita Metadata z entity Token. Uložte entitu Metadata s použitím hashe IPFS jako ID. + +You can use [DataSource context](/developing/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. + +If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity. + +> Pracujeme na zlepšení výše uvedeného doporučení, aby dotazy vracely pouze "nejnovější" verzi + +#### Známé problémy + +File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI. + +Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file. + +#### Příklady + +[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) + +#### Odkazy: + +[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) + +## Indexed Argument Filters / Topic Filters + +> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` + +Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. + +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. + +- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. + +### How Topic Filters Work + +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. 
This allows the subgraph to listen selectively for events that match these indexed arguments. + +- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. + +```solidity +// SPDX-License-Identifier: MIT +pragma solidity ^0.8.0; + +contract Token { + // Event declaration with indexed parameters for addresses + event Transfer(address indexed from, address indexed to, uint256 value); + + // Function to simulate transferring tokens + function transfer(address to, uint256 value) public { + // Emitting the Transfer event with from, to, and value + emit Transfer(msg.sender, to, value); + } +} +``` + +In this example: + +- The `Transfer` event is used to log transactions of tokens between addresses. +- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. +- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. + +#### Configuration in Subgraphs + +Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: + +```yaml +eventHandlers: + - event: SomeEvent(indexed uint256, indexed address, indexed uint256) + handler: handleSomeEvent + topic1: ['0xValue1', '0xValue2'] + topic2: ['0xAddress1', '0xAddress2'] + topic3: ['0xValue3'] +``` + +In this setup: + +- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. +- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic. + +#### Filter Logic + +- Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic. +- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler. + +#### Example 1: Tracking Direct Transfers from Address A to Address B + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleDirectedTransfer + topic1: ['0xAddressA'] # Sender Address + topic2: ['0xAddressB'] # Receiver Address +``` + +In this configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. +- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. + +#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleTransferToOrFrom + topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address + topic2: ['0xAddressB', '0xAddressC'] # Receiver Address +``` + +In this configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. +- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. 
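+
+The filtering itself is declared entirely in the manifest, so the mapping side remains an ordinary event handler. Purely as an illustration, here is a minimal sketch of a handler for Example 1; the `DirectedTransfer` entity and the ABI import path are assumptions for this sketch, not part of the examples above:
+
+```typescript
+import { Transfer as TransferEvent } from '../generated/Token/Token'
+import { DirectedTransfer } from '../generated/schema'
+
+export function handleDirectedTransfer(event: TransferEvent): void {
+  // Only Transfer events whose indexed `from`/`to` arguments match the
+  // configured topic1/topic2 values ever reach this handler.
+  let id = event.transaction.hash.concatI32(event.logIndex.toI32())
+  let entity = new DirectedTransfer(id)
+  entity.from = event.params.from
+  entity.to = event.params.to
+  entity.value = event.params.value
+  entity.save()
+}
+```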
+
+## Declared eth_call
+
+> Note: This is an experimental feature that is not yet available in a stable Graph Node release. You can only use it in Subgraph Studio or on your self-hosted node.
+
+Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+
+This feature does the following:
+
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Allows faster data fetching, resulting in quicker query responses and a better user experience.
+- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
+
+### Key Concepts
+
+- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially.
+- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously.
+- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel).
+
+#### Scenario without Declarative `eth_calls`
+
+Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
+
+Traditionally, these calls might be made sequentially:
+
+1. Call 1 (Transactions): Takes 3 seconds
+2. Call 2 (Balance): Takes 2 seconds
+3. Call 3 (Token Holdings): Takes 4 seconds
+
+Total time taken = 3 + 2 + 4 = 9 seconds
+
+#### Scenario with Declarative `eth_calls`
+
+With this feature, you can declare these calls to be executed in parallel:
+
+1. Call 1 (Transactions): Takes 3 seconds
+2. Call 2 (Balance): Takes 2 seconds
+3. Call 3 (Token Holdings): Takes 4 seconds
+
+Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call.
+
+Total time taken = max(3, 2, 4) = 4 seconds
+
+#### How it Works
+
+1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
+2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
+3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.
+
+#### Example Configuration in Subgraph Manifest
+
+Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.
+
+`subgraph.yaml` using `event.address`:
+
+```yaml
+eventHandlers:
+  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
+    handler: handleSwap
+    calls:
+      global0X128: Pool[event.address].feeGrowthGlobal0X128()
+      global1X128: Pool[event.address].feeGrowthGlobal1X128()
+```
+
+Details for the example above:
+
+- `global0X128` is the declared `eth_call`.
+- The text (`global0X128`) is the label for this `eth_call`, which is used when logging errors.
+- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`.
+- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.
+
+`subgraph.yaml` using `event.params`:
+
+```yaml
+calls:
+  ERC20DecimalsToken0: ERC20[event.params.token0].decimals()
+```
+
+### Roubování na existující podgrafy
+
+> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).
+
+When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+
+A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+
+```yaml
+description: ...
+graft:
+  base: Qm... # Subgraph ID of base subgraph
+  block: 7345624 # Block number
+```
+
+When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+
+Protože se při roubování základní data spíše kopírují než indexují, je mnohem rychlejší dostat podgraf do požadovaného bloku než při indexování od nuly, i když počáteční kopírování dat může u velmi velkých podgrafů trvat i několik hodin. Během inicializace roubovaného podgrafu bude Graph Node zaznamenávat informace o typech entit, které již byly zkopírovány.
+
+Roubovaný podgraf může používat schéma GraphQL, které není totožné se schématem základního podgrafu, ale je s ním pouze kompatibilní. Musí to být platné schéma podgrafu jako takové, ale může se od schématu základního podgrafu odchýlit následujícími způsoby:
+
+- Přidává nebo odebírá typy entit
+- Odstraňuje atributy z typů entit
+- Přidává nulovatelné atributy k typům entit
+- Mění nenulovatelné atributy na nulovatelné atributy
+- Přidává hodnoty do enums
+- Přidává nebo odebírá rozhraní
+- Mění, pro které typy entit je rozhraní implementováno
+
+> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest.
diff --git a/website/pages/cs/developing/creating-a-subgraph/assemblyscript-mappings.mdx b/website/pages/cs/developing/creating-a-subgraph/assemblyscript-mappings.mdx
new file mode 100644
index 000000000000..fad0d6ebaa1a
--- /dev/null
+++ b/website/pages/cs/developing/creating-a-subgraph/assemblyscript-mappings.mdx
@@ -0,0 +1,113 @@
+---
+title: Writing AssemblyScript Mappings
+---
+
+## Přehled
+
+The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). 
AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax.
+
+## Psaní mapování
+
+For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
+
+In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+
+```javascript
+import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
+import { Gravatar } from '../generated/schema'
+
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id)
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let id = event.params.id
+  let gravatar = Gravatar.load(id)
+  if (gravatar == null) {
+    gravatar = new Gravatar(id)
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id)`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id`.
+
+The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`.
+
+### Doporučená ID pro vytváření nových entit
+
+It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities.
+
+- `transfer.id = event.transaction.hash`
+
+- `let id = event.transaction.hash.concatI32(event.logIndex.toI32())`
+
+- For entities that store aggregated data, e.g., daily trade volumes, the `id` usually contains the day number. Here, using `Bytes` as the `id` is beneficial. Determining the `id` would look like:
+
+```typescript
+let dayID = event.block.timestamp.toI32() / 86400
+let id = Bytes.fromI32(dayID)
+```
+
+- Convert constant addresses to `Bytes`.
+
+`const id = Bytes.fromHexString('0xdead...beef')`
+
+There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`.
+
+### Zpracování entit se stejnými ID
+
+Pokud při vytváření a ukládání nové entity již existuje entita se stejným ID, jsou při slučování vždy upřednostněny vlastnosti nové entity. To znamená, že existující entita bude aktualizována hodnotami z nové entity.
+
+Pokud je pro pole v nové entitě se stejným ID záměrně nastavena nulová hodnota, bude stávající entita aktualizována s nulovou hodnotou.
+
+Pokud není pro pole v nové entitě se stejným ID nastavena žádná hodnota, bude pole rovněž nulové.
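+
+To make the merge behavior described above concrete, here is a minimal, hypothetical sketch of what happens when a new entity is saved under an ID that already exists in the store. The handler name and the unset `owner` field are illustrative assumptions, not part of the example subgraph:
+
+```typescript
+import { Bytes } from '@graphprotocol/graph-ts'
+import { Gravatar } from '../generated/schema'
+
+// Assume an entity with this `id` was saved earlier with both `owner`
+// and `displayName` populated.
+export function overwriteGravatar(id: Bytes): void {
+  let replacement = new Gravatar(id) // same ID as the stored entity
+  replacement.displayName = 'New name' // explicitly set: replaces the old value
+  // `owner` is never set on `replacement`, so after save() the stored
+  // entity's `owner` field will be null as well.
+  replacement.save()
+}
+```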
+
+## Generování kódu
+
+Aby byla práce s inteligentními smlouvami, událostmi a entitami snadná a typově bezpečná, může Graf CLI generovat typy AssemblyScript ze schématu GraphQL podgrafu a ABI smluv obsažených ve zdrojích dat.
+
+To se provádí pomocí:
+
+```sh
+graph codegen [--output-dir <OUTPUT_DIR>] [<SUBGRAPH_MANIFEST>]
+```
+
+but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+
+```sh
+# Yarn
+yarn codegen
+
+# NPM
+npm run codegen
+```
+
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with:
+
+```javascript
+import {
+  // The contract class:
+  Gravity,
+  // The events classes:
+  NewGravatar,
+  UpdatedGravatar,
+} from '../generated/Gravity/Gravity'
+```
+
+In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with:
+
+```javascript
+import { Gravatar } from '../generated/schema'
+```
+
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
diff --git a/website/pages/cs/developing/creating-a-subgraph/install-the-cli.mdx b/website/pages/cs/developing/creating-a-subgraph/install-the-cli.mdx
new file mode 100644
index 000000000000..41abfbdccf16
--- /dev/null
+++ b/website/pages/cs/developing/creating-a-subgraph/install-the-cli.mdx
@@ -0,0 +1,119 @@
+---
+title: Instalace Graf CLI
+---
+
+> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/network/curating/).
+
+## Přehled
+
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/creating-a-subgraph/subgraph-manifest/) and compiles the [mappings](/creating-a-subgraph/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
+
+## Začínáme
+
+### Instalace Graf CLI
+
+The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. 
Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.
+
+V místním počítači spusťte jeden z následujících příkazů:
+
+#### Using [npm](https://www.npmjs.com/)
+
+```bash
+npm install -g @graphprotocol/graph-cli@latest
+```
+
+#### Using [yarn](https://yarnpkg.com/)
+
+```bash
+yarn global add @graphprotocol/graph-cli
+```
+
+The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started.
+
+## Vytvoření podgrafu
+
+### Ze stávající smlouvy
+
+The following command creates a subgraph that indexes all events of an existing contract:
+
+```sh
+graph init \
+  --product subgraph-studio \
+  --from-contract <CONTRACT_ADDRESS> \
+  [--network <ETHEREUM_NETWORK>] \
+  [--abi <FILE>] \
+  <SUBGRAPH_SLUG> [<DIRECTORY>]
+```
+
+- The command tries to retrieve the contract ABI from Etherscan.
+
+  - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI.
+
+- If any of the optional arguments are missing, it guides you through an interactive form.
+
+- The `<SUBGRAPH_SLUG>` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page.
+
+### Z příkladu podgrafu
+
+The following command initializes a new project from an example subgraph:
+
+```sh
+graph init --from-example=example-subgraph
+```
+
+- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+
+- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
+
+### Add New `dataSources` to an Existing Subgraph
+
+`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+
+Recent versions of the Graph CLI support adding new `dataSources` to an existing subgraph through the `graph add` command:
+
+```sh
+graph add <address> [<subgraph-manifest default: "./subgraph.yaml">]
+
+Options:
+
+  --abi <path>              Path to the contract ABI (default: download from Etherscan)
+  --contract-name           Name of the contract (default: Contract)
+  --merge-entities          Whether to merge entities with the same name (default: false)
+  --network-file <path>     Networks config file path (default: "./networks.json")
+```
+
+#### Specifics
+
+The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and create a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts.
+
+- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts:
+
+  - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`.
+
+  - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`.
+
+- The contract `address` will be written to the `networks.json` for the relevant network.
+
+> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`.
+
+### Getting The ABIs
+
+Soubor(y) ABI se musí shodovat s vaší smlouvou. Soubory ABI lze získat několika způsoby:
+
+- Pokud vytváříte vlastní projekt, budete mít pravděpodobně přístup k nejaktuálnějším ABI.
+- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
+- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail.
+
+## SpecVersion Releases
+
+| Verze | Poznámky vydání |
+| :-: | --- |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/pages/cs/developing/creating-a-subgraph/ql-schema.mdx b/website/pages/cs/developing/creating-a-subgraph/ql-schema.mdx
new file mode 100644
index 000000000000..befce4a22bf8
--- /dev/null
+++ b/website/pages/cs/developing/creating-a-subgraph/ql-schema.mdx
@@ -0,0 +1,312 @@
+---
+title: The Graph QL Schema
+---
+
+## Přehled
+
+The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. 
+ +> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/querying/graphql-api/) section. + +### Defining Entities + +Before defining entities, it is important to take a step back and think about how your data is structured and linked. + +- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- It may be useful to imagine entities as "objects containing data", rather than as events or functions. +- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. +- Each type that should be an entity is required to be annotated with an `@entity` directive. +- By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity. + - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. + - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. Immutable entities are much faster to write and to query so they should be used whenever possible. + +#### Dobrý příklad + +The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. + +```graphql +type Gravatar @entity(immutable: true) { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String + accepted: Boolean +} +``` + +#### Špatný příklad + +The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. + +```graphql +type GravatarAccepted @entity { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String +} + +type GravatarDeclined @entity { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String +} +``` + +#### Nepovinná a povinná pole + +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error: + +``` +Vyřešení nulové hodnoty pro pole 'name', které není nulové +``` + +Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` id's will be faster to write and query as those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`. + +For some entity types the `id` for `Bytes!` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id) ` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. 
The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`. + +### Vestavěné typy skalárů + +#### Podporované skaláry GraphQL + +The following scalars are supported in the GraphQL API: + +| Typ | Popis | +| --- | --- | +| `Bytes` | Pole bajtů reprezentované jako hexadecimální řetězec. Běžně se používá pro hashe a adresy Ethereum. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | + +### Enums + +Výčty můžete vytvářet také v rámci schématu. Syntaxe enumů je následující: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field: + +More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). + +### Vztahy entit + +Entita může mít vztah k jedné nebo více jiným entitám ve vašem schématu. Tyto vztahy lze procházet v dotazech. Vztahy v Graf jsou jednosměrné. Obousměrné vztahy je možné simulovat definováním jednosměrného vztahu na obou "koncích" vztahu. + +Vztahy se definují u entit stejně jako u jiných polí s tím rozdílem, že zadaný typ je typ jiné entity. + +#### Vztahy jeden na jednoho + +Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: + +```graphql +type Transaction @entity(immutable: true) { + id: Bytes! + transactionReceipt: TransactionReceipt +} + +type TransactionReceipt @entity(immutable: true) { + id: Bytes! + transaction: Transaction +} +``` + +#### Vztahy jeden k mnoha + +Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +### Zpětné vyhledávání + +Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. 
For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. + +U vztahů typu "jeden k mnoha" by měl být vztah vždy uložen na straně "jeden" a strana "mnoho" by měla být vždy odvozena. Uložení vztahu tímto způsobem namísto uložení pole entit na straně "mnoho" povede k výrazně lepšímu výkonu jak při indexování, tak při dotazování na podgraf. Obecně platí, že ukládání polí entit je třeba se vyhnout, pokud je to praktické. + +#### Příklad + +We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! + tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +#### Vztahy mnoho k mnoha + +Pro vztahy mnoho-více, jako jsou uživatelé, z nichž každý může patřit do libovolného počtu organizací, je nejjednodušší, ale obecně ne nejvýkonnější, modelovat vztah jako pole v každé z obou zúčastněných entit. Pokud je vztah symetrický, je třeba uložit pouze jednu stranu vztahu a druhou stranu lze odvodit. + +#### Příklad + +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [User!]! +} + +type User @entity { + id: Bytes! + name: String! + organizations: [Organization!]! @derivedFrom(field: "members") +} +``` + +A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [UserOrganization!]! @derivedFrom(field: "organization") +} + +type User @entity { + id: Bytes! + name: String! + organizations: [UserOrganization!] @derivedFrom(field: "user") +} + +type UserOrganization @entity { + id: Bytes! # Set to `user.id.concat(organization.id)` + user: User! + organization: Organization! +} +``` + +Tento přístup vyžaduje, aby dotazy sestupovaly do další úrovně, aby bylo možné získat například organizace pro uživatele: + +```graphql +query usersWithOrganizations { + users { + organizations { + # this is a UserOrganization entity + organization { + name + } + } + } +} +``` + +Tento propracovanější způsob ukládání vztahů mnoho-více vede k menšímu množství dat uložených pro podgraf, a tedy k podgrafu, který je často výrazně rychlejší při indexování a dotazování. + +### Přidání komentářů do schématu + +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: + +```graphql +type MyFirstEntity @entity { + # unique identifier and primary key of the entity + id: Bytes! + address: Bytes! +} +``` + +## Definování polí fulltextového vyhledávání + +Fulltextové vyhledávací dotazy filtrují a řadí entity na základě textového vyhledávacího vstupu. Fulltextové dotazy jsou schopny vracet shody podobných slov tím, že zpracovávají vstupní text dotazu do kmenů před jejich porovnáním s indexovanými textovými daty. 
+ +Definice fulltextového dotazu obsahuje název dotazu, jazykový slovník použitý ke zpracování textových polí, algoritmus řazení použitý k seřazení výsledků a pole zahrnutá do vyhledávání. Každý fulltextový dotaz může zahrnovat více polí, ale všechna zahrnutá pole musí být z jednoho typu entity. + +To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. + +```graphql +type _Schema_ + @fulltext( + name: "bandSearch" + language: en + algorithm: rank + include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }] + ) + +type Band @entity { + id: Bytes! + name: String! + description: String! + bio: String + wallet: Address + labels: [Label!]! + discography: [Album!]! + members: [Musician!]! +} +``` + +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/querying/graphql-api#queries) for a description of the fulltext search API and more example usage. + +```graphql +query { + bandSearch(text: "breaks & electro & detroit") { + id + name + description + wallet + } +} +``` + +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. + +## Podporované jazyky + +Výběr jiného jazyka bude mít na rozhraní API fulltextového vyhledávání rozhodující, i když někdy nenápadný vliv. Pole zahrnutá do pole fulltextového dotazu jsou zkoumána v kontextu zvoleného jazyka, takže lexémy vytvořené analýzou a vyhledávacími dotazy se v jednotlivých jazycích liší. Například: při použití podporovaného tureckého slovníku je "token" odvozeno od "toke", zatímco anglický slovník jej samozřejmě odvozuje od "token". + +Podporované jazykové slovníky: + +| Code | Slovník | +| ---------- | ---------- | +| jednoduchý | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portuguese | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | + +### Algoritmy řazení + +Podporované algoritmy pro řazení výsledků: + +| Algorithm | Description | +| ------------- | ------------------------------------------------------------------------ | +| rank | Pro seřazení výsledků použijte kvalitu shody (0-1) fulltextového dotazu. | +| proximityRank | Similar to rank but also includes the proximity of the matches. | diff --git a/website/pages/cs/developing/creating-a-subgraph/starting-your-subgraph.mdx b/website/pages/cs/developing/creating-a-subgraph/starting-your-subgraph.mdx new file mode 100644 index 000000000000..68fba6498608 --- /dev/null +++ b/website/pages/cs/developing/creating-a-subgraph/starting-your-subgraph.mdx @@ -0,0 +1,21 @@ +--- +title: Starting Your Subgraph +--- + +## Přehled + +The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. + +When you create a [subgraph](/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. + +Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. + +### Start Building + +Start the process and build a subgraph that matches your needs: + +1. 
[Install the CLI](/developing/creating-a-subgraph/install-the-cli/) - Set up your infrastructure +2. [Subgraph Manifest](/developing/creating-a-subgraph/subgraph-manifest/) - Understand a subgraph's key component +3. [The Graph Ql Schema](/developing/creating-a-subgraph/ql-schema/) - Write your schema +4. [Writing AssemblyScript Mappings](/developing/creating-a-subgraph/assemblyscript-mappings/) - Write your mappings +5. [Advanced Features](/developing/creating-a-subgraph/advanced/) - Customize your subgraph with advanced features diff --git a/website/pages/cs/developing/creating-a-subgraph/subgraph-manifest.mdx b/website/pages/cs/developing/creating-a-subgraph/subgraph-manifest.mdx new file mode 100644 index 000000000000..84ff104974cf --- /dev/null +++ b/website/pages/cs/developing/creating-a-subgraph/subgraph-manifest.mdx @@ -0,0 +1,534 @@ +--- +title: Subgraph Manifest +--- + +## Přehled + +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. + +The **subgraph definition** consists of the following files: + +- `subgraph.yaml`: Contains the subgraph manifest + +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL + +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +### Subgraph Capabilities + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). + +For the example subgraph listed above, `subgraph.yaml` is: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +repository: https://github.com/graphprotocol/graph-tooling +schema: + file: ./schema.graphql +indexerHints: + prune: auto +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + abi: Gravity + startBlock: 6175244 + endBlock: 7175245 + context: + foo: + type: Bool + data: true + bar: + type: String + data: 'bar' + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + - event: UpdatedGravatar(uint256,address,string,string) + handler: handleUpdatedGravatar + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCall + filter: + kind: call + file: ./src/mapping.ts +``` + +## Subgraph Entries + +> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/developing/creating-a-subgraph/ql-schema/). + +Důležité položky, které je třeba v manifestu aktualizovat, jsou: + +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. 
See [specVersion releases](#specversion-releases) section to see more details on features & releases. + +- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. + +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. + +- `features`: a list of all used [feature](#experimental-features) names. + +- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. + +- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. + +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. + +- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. + +- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. + +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. + +- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. + +- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. + +- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. + +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. + +A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. + +## Event Handlers + +Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. + +### Defining an Event Handler + +An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. 
+ +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: Approval(address,address,uint256) + handler: handleApproval + - event: Transfer(address,address,uint256) + handler: handleTransfer + topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic. +``` + +## Zpracovatelé hovorů + +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. + +Obsluhy volání se spustí pouze v jednom ze dvou případů: když je zadaná funkce volána jiným účtem než samotnou smlouvou nebo když je v Solidity označena jako externí a volána jako součást jiné funkce ve stejné smlouvě. + +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. + +### Definice obsluhy volání + +To define a call handler in your manifest, simply add a `callHandlers` array under the data source you would like to subscribe to. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar +``` + +The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. + +### Funkce mapování + +Each call handler takes a single parameter that has a type corresponding to the name of the called function. 
In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: + +```typescript +import { CreateGravatarCall } from '../generated/Gravity/Gravity' +import { Transaction } from '../generated/schema' + +export function handleCreateGravatar(call: CreateGravatarCall): void { + let id = call.transaction.hash + let transaction = new Transaction(id) + transaction.displayName = call.inputs._displayName + transaction.imageUrl = call.inputs._imageUrl + transaction.save() +} +``` + +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. + +## Obsluha bloků + +Kromě přihlášení k událostem smlouvy nebo volání funkcí může podgraf chtít aktualizovat svá data, když jsou do řetězce přidány nové bloky. Za tímto účelem může podgraf spustit funkci po každém bloku nebo po blocích, které odpovídají předem definovanému filtru. + +### Podporované filtry + +#### Filtr volání + +```yaml +filter: + kind: call +``` + +_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ + +> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. + +Protože pro obsluhu bloku neexistuje žádný filtr, zajistí, že obsluha bude volána každý blok. Zdroj dat může obsahovat pouze jednu blokovou obsluhu pro každý typ filtru. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCallToContract + filter: + kind: call +``` + +#### Filtr dotazování + +> **Requires `specVersion` >= 0.0.8** +> +> **Note:** Polling filters are only available on dataSources of `kind: ethereum`. + +```yaml +blockHandlers: + - handler: handleBlock + filter: + kind: polling + every: 10 +``` + +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. + +#### Jednou Filtr + +> **Requires `specVersion` >= 0.0.8** +> +> **Note:** Once filters are only available on dataSources of `kind: ethereum`. + +```yaml +blockHandlers: + - handler: handleOnce + filter: + kind: once +``` + +Definovaný obslužná rutina s filtrem once bude zavolána pouze jednou před spuštěním všech ostatních rutin. Tato konfigurace umožňuje, aby podgraf používal obslužný program jako inicializační obslužný, který provádí specifické úlohy na začátku indexování. + +```ts +export function handleOnce(block: ethereum.Block): void { + let data = new InitialData(Bytes.fromUTF8('initial')) + data.data = 'Setup data here' + data.save() +} +``` + +### Funkce mapování + +The mapping function will receive an `ethereum.Block` as its only argument. 
Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. + +```typescript +import { ethereum } from '@graphprotocol/graph-ts' + +export function handleBlock(block: ethereum.Block): void { + let id = block.hash + let entity = new Block(id) + entity.save() +} +``` + +## Anonymní události + +Pokud potřebujete v Solidity zpracovávat anonymní události, lze toho dosáhnout zadáním tématu 0 události, jak je uvedeno v příkladu: + +```yaml +eventHandlers: + - event: LogNote(bytes4,address,bytes32,bytes32,uint256,bytes) + topic0: '0x644843f351d3fba4abcd60109eaff9f54bac8fb8ccf0bab941009c21df21cf31' + handler: handleGive +``` + +An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. + +## Potvrzení transakcí v obslužných rutinách událostí + +Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. + +To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. + +```yaml +eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + receipt: true +``` + +Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead. + +## Pořadí spouštěcích Handlers + +Spouštěče pro zdroj dat v rámci bloku jsou seřazeny podle následujícího postupu: + +1. Spouštěče událostí a volání jsou nejprve seřazeny podle indexu transakce v rámci bloku. +2. Spouštěče událostí a volání v rámci jedné transakce jsou seřazeny podle konvence: nejprve spouštěče událostí a poté spouštěče volání, přičemž každý typ dodržuje pořadí, v jakém jsou definovány v manifestu. +3. Spouštěče bloků jsou spuštěny po spouštěčích událostí a volání, v pořadí, v jakém jsou definovány v manifestu. + +Tato pravidla objednávání se mohou změnit. + +> **Note:** When new [dynamic data source](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. + +## Šablony zdrojů dat + +Běžným vzorem v inteligentních smlouvách kompatibilních s EVM je používání registrů nebo továrních smluv, kdy jedna smlouva vytváří, spravuje nebo odkazuje na libovolný počet dalších smluv, z nichž každá má svůj vlastní stav a události. + +The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. + +### Zdroj dat pro hlavní smlouvu + +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.org) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on-chain by the factory contract. 
+ +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: Factory + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - Directory + abis: + - name: Factory + file: ./abis/factory.json + eventHandlers: + - event: NewExchange(address,address) + handler: handleNewExchange +``` + +### Šablony zdrojů dat pro dynamicky vytvářené smlouvy + +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a pre-defined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + # ... other source fields for the main contract ... +templates: + - name: Exchange + kind: ethereum/contract + network: mainnet + source: + abi: Exchange + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/exchange.ts + entities: + - Exchange + abis: + - name: Exchange + file: ./abis/exchange.json + eventHandlers: + - event: TokenPurchase(address,uint256,uint256) + handler: handleTokenPurchase + - event: EthPurchase(address,uint256,uint256) + handler: handleEthPurchase + - event: AddLiquidity(address,uint256,uint256) + handler: handleAddLiquidity + - event: RemoveLiquidity(address,uint256,uint256) + handler: handleRemoveLiquidity +``` + +### Instancování šablony zdroje dat + +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + // Start indexing the exchange; `event.params.exchange` is the + // address of the new exchange contract + Exchange.create(event.params.exchange) +} +``` + +> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> +> Pokud předchozí bloky obsahují data relevantní pro nový zdroj dat, je nejlepší tato data indexovat načtením aktuálního stavu smlouvy a vytvořením entit reprezentujících tento stav v době vytvoření nového zdroje dat. + +### Kontext zdroje dat + +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. 
That information can be passed into the instantiated data source, like so:
+
+```typescript
+import { Exchange } from '../generated/templates'
+
+export function handleNewExchange(event: NewExchange): void {
+  let context = new DataSourceContext()
+  context.setString('tradingPair', event.params.tradingPair)
+  Exchange.createWithContext(event.params.exchange, context)
+}
+```
+
+Inside a mapping of the `Exchange` template, the context can then be accessed:
+
+```typescript
+import { dataSource } from '@graphprotocol/graph-ts'
+
+let context = dataSource.context()
+let tradingPair = context.getString('tradingPair')
+```
+
+There are setters and getters like `setString` and `getString` for all value types.
+
+## Start Blocks
+
+The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: ExampleSource
+    network: mainnet
+    source:
+      address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95'
+      abi: ExampleContract
+      startBlock: 6627917
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      file: ./src/mappings/factory.ts
+      entities:
+        - User
+      abis:
+        - name: ExampleContract
+          file: ./abis/ExampleContract.json
+      eventHandlers:
+        - event: NewEvent(address,address)
+          handler: handleNewEvent
+```
+
+> **Note:** The contract creation block can be quickly looked up on Etherscan:
+>
+> 1. Search for the contract by entering its address in the search bar.
+> 2. Click on the creation transaction hash in the `Contract Creator` section.
+> 3. Load the transaction details page, where you will find the start block for that contract.
+
+## Indexer Hints
+
+The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
+
+> This feature is available from `specVersion: 1.0.0`
+
+### Prune
+
+`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include:
+
+1. `"never"`: No pruning of historical data; retains the entire history.
+2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance.
+3. A specific number: Sets a custom limit on the number of historical blocks to retain.
+
+```
+  indexerHints:
+    prune: auto
+```
+
+> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities.
+
+History as of a given block is required for:
+
+- [Time travel queries](/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history
+- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block
+- Rewinding the subgraph back to that block
+
+If historical data as of the block has been pruned, the above capabilities will not be available.
+ +> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. + +For subgraphs leveraging [time travel queries](/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: + +Uchování určitého množství historických dat: + +``` + indexerHints: + prune: 1000 # Replace 1000 with the desired number of blocks to retain +``` + +Zachování kompletní historie entitních států: + +``` +indexerHints: + prune: never +``` diff --git a/website/pages/cs/developing/developer-faqs.mdx b/website/pages/cs/developing/developer-faqs.mdx index 828debf0f7a4..429f18be662c 100644 --- a/website/pages/cs/developing/developer-faqs.mdx +++ b/website/pages/cs/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: FAQs pro vývojáře --- -## 1. Co je to podgraf? +This page summarizes some of the most common questions for developers building on The Graph. -Podgraf je vlastní API postavené na datech blockchainu. Podgrafy jsou dotazovány pomocí dotazovacího jazyka GraphQL a jsou nasazeny do uzlu Graf pomocí Graf CLI. Po nasazení a zveřejnění v decentralizované síti Graf zpracovávají indexery podgrafy a zpřístupňují je k dotazování konzumentům podgrafů. +## Subgraph Related -## 2. Mohu svůj podgraf smazat? +### 1. Co je to podgraf? -Jednou vytvořené podgrafy není možné odstranit. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Mohu změnit název podgrafu? +### 2. What is the first step to create a subgraph? -Ne. Jakmile je podgraf vytvořen, nelze jeho název změnit. Před vytvořením podgrafu si to důkladně promyslete, aby byl snadno vyhledatelný a identifikovatelný ostatními dapps. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Mohu změnit účet GitHub přidružený k mému podgrafu? +### 3. Can I still create a subgraph if my smart contracts don't have events? -Ne. Jakmile je podgraf vytvořen, nelze přidružený účet GitHub změnit. Než vytvoříte podgraf, důkladně si to promyslete. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Mohu vytvořit podgraf i v případě, že moje chytré smlouvy nemají události? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -Důrazně doporučujeme, abyste své chytré kontrakty strukturovali tak, aby měly události spojené s daty, na která se chcete dotazovat. Obsluhy událostí v podgrafu jsou spouštěny událostmi smlouva a jsou zdaleka nejrychlejším způsobem, jak získat užitečná data. +### 4. Mohu změnit účet GitHub přidružený k mému podgrafu? 
-Pokud smlouva, se kterými pracujete, neobsahují události, můžete ke spuštění indexování použít obsluhy volání a bloků. To se však nedoporučuje, protože výkon bude výrazně nižší. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Je možné nasadit jeden podgraf se stejným názvem pro více sítí? +### 5. How do I update a subgraph on mainnet? -Pro více sítí budete potřebovat samostatné názvy. I když nemůžete mít různé podgrafy pod stejným názvem, existují pohodlné způsoby, jak mít jednu kódovou základnu pro více sítí. Více informací o tom najdete v naší dokumentaci: [přemístění podgrafu](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. Jak se liší šablony od zdrojů dat? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Šablony umožňují vytvářet zdroje dat za běhu, zatímco se podgraf indexuje. Může se stát, že vaše smlouva bude vytvářet nové smlouvy, jak s ní budou lidé interagovat, a protože znáte tvar těchto smluv (ABI, události atd.) předem, můžete definovat, jak je chcete indexovat v šabloně, a když se vytvoří, váš podgraf vytvoří dynamický zdroj dat dodáním adresy smlouvy. +Podgraf musíte znovu nasadit, ale pokud se ID podgrafu (hash IPFS) nezmění, nebude se muset synchronizovat od začátku. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +V rámci podgrafu se události zpracovávají vždy v pořadí, v jakém se objevují v blocích, bez ohledu na to, zda se jedná o více smluv. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Podívejte se do části "Instancování šablony zdroje dat" na: [Šablony datových zdrojů](/developing/creating-a-subgraph#data-source-templates). -## 8. Jak se ujistím, že pro místní nasazení používám nejnovější verzi graph-node? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Můžete spustit následující příkaz: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. 
-
-```sh
-docker pull graphprotocol/graph-node:latest
-```
+You can also use the `graph add` command to add a new dataSource.

-**POZNÁMKA:** docker / docker-compose vždy použije tu verzi graf uzlu, která byla stažena při prvním spuštění, takže je důležité to udělat, abyste se ujistili, že máte nejnovější verzi graf uzlu.
+### 12. In what order are the event, block, and call handlers triggered for a data source?

-## 9. Jak mohu z mapování podgrafů zavolat smluvní funkci nebo přistupovat k veřejné stavové proměnné?
+Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first, then call handlers, each type respecting the order in which they are defined in the manifest. Block handlers are run after event and call handlers, in the order in which they are defined in the manifest. These ordering rules are subject to change.

-Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state).
+When new dynamic data sources are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers have been processed, and will repeat in the same sequence whenever triggered.

-## 10. Je možné vytvořit podgraf pomocí `graph init` z `graph-cli` se dvěma smlouvami? Nebo mám po spuštění `graph init` ručně přidat další datový zdroj v `subgraph.yaml`?
+### 13. How do I make sure I'm using the latest version of graph-node for my local deployments?

-Ano. V samotném příkazu `graph init` můžete přidat více datových zdrojů zadáním smluv za sebou. Pro přidání nového datového zdroje můžete také použít příkaz `graph add`.
+You can run the following command:

-## 11. Chci přispět nebo přidat problém na GitHub. Kde najdu repozitáře s otevřeným zdrojovým kódem?
+```sh
+docker pull graphprotocol/graph-node:latest
+```

-- [graph-node](https://github.com/graphprotocol/graph-node)
-- [graph-tooling](https://github.com/graphprotocol/graph-tooling)
-- [graph-docs](https://github.com/graphprotocol/docs)
-- [graph-client](https://github.com/graphprotocol/graph-client)
+> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node.

-## 12. Jaký je doporučený způsob vytváření "automaticky generovaných" ids pro entity při zpracování událostí?
+### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events?

 If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index combination would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256`, but this won't make it any more unique. (A minimal sketch of this ID pattern is included below, after question 16.)

-## 13. Je možné při poslechu více smluv zvolit pořadí smlouvy, ve kterém se mají události poslouchat?
+### 15. Can I delete my subgraph?

-V rámci podgrafu se události zpracovávají vždy v pořadí, v jakém se objevují v blocích, bez ohledu na to, zda se jedná o více smluv.
+Yes, you can [delete](/managing/delete-a-subgraph/) and [transfer](/managing/transfer-a-subgraph/) your subgraph.

-## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers?
+## Network Related
+
+### 16. What networks are supported by The Graph?
+
+You can find the list of supported networks [here](/developing/supported-networks).
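+
+As an illustration of the autogenerated ID pattern from question 14, here is a minimal sketch. The `Transfer` entity and `TransferEvent` class stand in for types generated from a subgraph's schema and ABI; they are hypothetical placeholders, not part of this FAQ:
+
+```typescript
+import { Transfer as TransferEvent } from '../generated/Contract/Contract' // hypothetical generated event class
+import { Transfer } from '../generated/schema' // hypothetical generated entity
+
+export function handleTransfer(event: TransferEvent): void {
+  // Transaction hash + log index is unique for every event emitted on chain
+  let id = event.transaction.hash.toHex() + '-' + event.logIndex.toString()
+  let entity = new Transfer(id)
+  entity.save()
+}
+```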
+ +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Ano, můžete to provést importováním `graph-ts` podle níže uvedeného příkladu: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Mohu do mapování podgrafů importovat ethers.js nebo jiné JS knihovny? - -V současné době ne, protože mapování jsou zapsána v AssemblyScript. Jedním z možných alternativních řešení je ukládat surová data do entit a logiku, která vyžaduje knihovny JS, provádět na klientovi. +## Indexing & Querying Related -## 17. Je možné určit, od kterého bloku se má indexování spustit? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Existují nějaké tipy, jak zvýšit výkon indexování? Synchronizace mého podgrafu trvá velmi dlouho +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -Ano, měli byste se podívat na volitelnou funkci start bloku, která umožňuje zahájit indexování od bloku, ve kterém byla smlouva nasazena: [Start bloky](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Existuje způsob, jak se přímo zeptat podgrafu a zjistit poslední číslo bloku, který indexoval? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Ano! Vyzkoušejte následující příkaz, přičemž "organization/subgraphName" nahraďte názvem organizace, pod kterou je publikován, a názvem vašeho podgrafu: @@ -102,44 +121,27 @@ Ano! Vyzkoušejte následující příkaz, přičemž "organization/subgraphName curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. Jaké sítě podporuje Graf? - -Seznam podporovaných sítí najdete [zde](/developing/supported-networks). - -## 21. Je možné duplikovat podgraf do jiného účtu nebo koncového bodu, aniž by bylo nutné provést nové nasazení? - -Podgraf musíte znovu nasadit, ale pokud se ID podgrafu (hash IPFS) nezmění, nebude se muset synchronizovat od začátku. - -## 22. Je možné použít Apollo Federation nad graph-node? +### 22. Is there a limit to how many objects The Graph can return per query? -Federace zatím není podporována, i když ji chceme v budoucnu podporovat. V současné době můžete použít sešívání schémat, a to buď na klientovi, nebo prostřednictvím služby proxy. - -## 23. 
Je nějak omezeno, kolik objektů může Graf vrátit na jeden dotaz? - -Ve výchozím nastavení jsou odpovědi na dotazy omezeny na 100 položek na kolekci. Pokud chcete získat více, můžete jít až na 1000 položek na kolekci a nad tuto hranici můžete stránkovat pomocí: +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. Pokud můj frontend dapp používá pro dotazování The Graph, musím svůj dotazovací klíč zapsat přímo do frontend? Co když budeme za uživatele platit poplatky za dotazování - způsobí zlomyslní uživatelé, že naše poplatky za dotazování budou velmi vysoké? - -V současné době je doporučeným přístupem pro dapp přidání klíče do frontendu a jeho zpřístupnění koncovým uživatelům. Přitom můžete tento klíč omezit na název hostitele, například _yourdapp.io_ a podgraf. Bránu v současné době provozuje Edge & Node. Součástí odpovědnosti brány je monitorování zneužití a blokování provozu od škodlivých klientů. - -## 25. Kde najdu svůj aktuální podgraf v hostované službě? - -Přejděte do hostované služby, abyste našli podgrafy, které jste vy nebo jiní uživatelé nasadili do hostované služby. Najdete ji [zde](https://thegraph.com/hosted-service). - -## 26. Začne hostovaná služba účtovat poplatky za dotazy? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Graf nikdy nebude účtovat poplatky za hostovanou službu. Graf je decentralizovaný protokol a zpoplatnění centralizované služby není v souladu s hodnotami Graf. Hostovaná služba byla vždy dočasným krokem, který měl pomoci dostat se k decentralizované síti. Vývojáři budou mít dostatek času přejít na decentralizovanou síť, jak jim to bude vyhovovat. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. Jak mohu aktualizovat podgraf v síti mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. V jakém pořadí se spouštějí obsluhy událostí, bloků a volání pro zdroj dat? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Obsluhy událostí a volání jsou nejprve seřazeny podle indexu transakce v rámci bloku. Obsluhy událostí a volání v rámci téže transakce jsou seřazeny podle konvence: nejprve obsluhy událostí, pak obsluhy volání, přičemž každý typ dodržuje pořadí, v jakém jsou definovány v manifestu. Obsluhy bloků se spouštějí po obsluhách událostí a volání v pořadí, v jakém jsou definovány v manifestu. I tato pravidla řazení se mohou měnit. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? 
-Při vytváření nových dynamických zdrojů dat se obslužné rutiny definované pro dynamické zdroje dat začnou zpracovávat až po zpracování všech existujících obslužných rutin zdrojů dat a budou se opakovat ve stejném pořadí, kdykoli budou spuštěny. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/cs/developing/graph-ts/api.mdx b/website/pages/cs/developing/graph-ts/api.mdx index d812f220a91b..d8c0004d37ef 100644 --- a/website/pages/cs/developing/graph-ts/api.mdx +++ b/website/pages/cs/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Poznámka: pokud jste vytvořili subgraf před verzí `graph-cli`/`graph-ts` `0.22.0`, používáte starší verzi jazyka AssemblyScript, doporučujeme se podívat do [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -Tato stránka dokumentuje, jaké vestavěné API lze použít při psaní mapování podgrafů. Dva druhy API jsou k dispozici hned po vybalení: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- kód generovaný ze souborů podgrafů pomocí `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -Jako závislosti je možné přidat i další knihovny, pokud jsou kompatibilní s [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Vzhledem k tomu, že mapování je psáno v tomto jazyce, je [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) dobrým zdrojem informací o funkcích jazyka a standardních knihoven. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## Reference API @@ -31,7 +33,7 @@ Knihovna `@graphprotocol/graph-ts` poskytuje následující API: | :-: | --- | | 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | | 0.0.8 | Přidá ověření existence polí ve schéma při ukládání entity. | -| 0.0.7 | Přidání tříd `TransactionReceipt` a `Log` do typů Ethereum\<0/Přidání pole `receipt` do objektu Ethereum událost | +| 0.0.7 | Přidání tříd `TransactionReceipt` a `Log` do typů Ethereum
Přidání pole `receipt` do objektu Ethereum událost | | 0.0.6 | Přidáno pole `nonce` do objektu Ethereum Transaction
Přidáno `baseFeePerGas` do objektu Ethereum bloku | | 0.0.5 | AssemblyScript povýšen na verzi 0.19.10 (obsahuje rozbíjející změny, viz [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` přejmenováno na `ethereum.transaction.gasLimit` | | 0.0.4 | Přidání pole `functionSignature` do objektu Ethereum SmartContractCall | @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { Pokud se při zpracování řetězce vyskytne událost `Transfer`, je předána obsluze události `handleTransfer` pomocí vygenerovaného typu `Transfer` (zde alias `TransferEvent`, aby nedošlo ke konfliktu názvů s typem entity). Tento typ umožňuje přístup k datům, jako je nadřazená transakce události a její parametr. -Každá entita musí mít jedinečné ID, aby nedocházelo ke kolizím s jinými entitami. Je poměrně běžné, že parametry událostí obsahují jedinečný identifikátor, který lze použít. Poznámka: Použití hashe transakce jako ID předpokládá, že žádné jiné události ve stejné transakci nevytvářejí entity s tímto hashem jako ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Načítání entity z úložiště @@ -268,15 +272,18 @@ if (transfer == null) { // Použijte entitu Transfer jako dříve ``` -Protože entita ještě nemusí v ukládat existovat, metoda `load` vrátí hodnotu typu `Transfer | null`. Proto může být nutné před použitím hodnoty zkontrolovat, zda se nejedná o případ `null`. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Poznámka:** Načtení entit je nutné pouze v případě, že změny provedené v mapování závisí na předchozích datech entity. Dva způsoby aktualizace existujících entit naleznete v následující části. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Vyhledávání entit vytvořených v rámci bloku Od verzí `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 a `@graphprotocol/graph-cli` v0.49.0 je metoda `loadInBlock` dostupná pro všechny typy entit. -API úložiště usnadňuje načítání entit, které byly vytvořeny nebo aktualizovány v aktuálním bloku. Typickou situací je, že jeden obslužný program vytvoří transakci z nějaké události v řetězci a pozdější obslužný program chce k této transakci přistupovat, pokud existuje. V případě, že transakce neexistuje, bude muset podgraf jít do databáze, jen aby zjistil, že entita neexistuje; pokud autor podgrafu již ví, že entita musela být vytvořena v tomtéž bloku, použitím funkce loadInBlock se této okružní cestě do databáze vyhne. U některých podgrafů mohou tato zmeškaná vyhledávání významně přispět k prodloužení doby indexace. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Jakákoli jiná smlouva, která je součástí podgrafu, může být importován #### Zpracování vrácených volání -Pokud se metody vaší smlouvy určené pouze pro čtení mohou vrátit, měli byste to řešit voláním vygenerované metody smlouvy s předponou `try_`. Například kontrakt Gravity vystavuje metodu `gravatarToOwner`. Tento kód by byl schopen zvládnout revert v této metodě: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Všimněte si, že uzel Graf připojený ke klientovi Geth nebo Infura nemusí detekovat všechny reverty, pokud na to spoléháte, doporučujeme použít uzel Graf připojený ke klientovi Parity. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Kódování/dekódování ABI diff --git a/website/pages/cs/developing/supported-networks.mdx b/website/pages/cs/developing/supported-networks.mdx index b9addda0b59e..cc2b778d6cad 100644 --- a/website/pages/cs/developing/supported-networks.mdx +++ b/website/pages/cs/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - Úplný seznam funkcí podporovaných v decentralizované síti najdete na [této stránce](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/cs/developing/unit-testing-framework.mdx b/website/pages/cs/developing/unit-testing-framework.mdx index 45decf979c6f..ef2cdd645f76 100644 --- a/website/pages/cs/developing/unit-testing-framework.mdx +++ b/website/pages/cs/developing/unit-testing-framework.mdx @@ -2,23 +2,32 @@ title: Rámec pro testování jednotek --- -Matchstick je framework pro jednotkové testování vyvinutý společností [LimeChain](https://limechain.tech/), který umožňuje vývojářům podgrafu testovat logika mapování v prostředí sandbox a spolehlivě nasazovat své podgraf! 
+Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs.
+
+## Benefits of Using Matchstick
+
+- It's written in Rust and optimized for high performance.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and more.

 ## Getting Started

-### Instalace závislostí
+### Install Dependencies

-Abyste mohli používat pomocné metody testy a spouštět testy, je třeba nainstalovat následující závislosti:
+In order to use the test helper methods and run tests, you need to install the following dependencies:

 ```sh
 yarn add --dev matchstick-as
 ```

-❗ `graph-node` závisí na PostgreSQL, takže pokud jej ještě nemáte, musíte si jej nainstalovat. Důrazně doporučujeme použít níže uvedené příkazy, protože jeho přidání jiným způsobem může způsobit neočekávané chyby!
+### Install PostgreSQL
+
+`graph-node` depends on PostgreSQL, so if you don't already have it, then you will need to install it.
+
+> Note: It's highly recommended to use the commands below to avoid unexpected errors.

-#### MacOS
+#### Using MacOS

-Instalační příkaz Postgres:
+Installation command:

 ```sh
 brew install postgresql
 ```

 Create a symlink to the latest version of libpq.5.lib. _You may need to create this directory first:_ `/usr/local/opt/postgresql/lib/`

 ```sh
 ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib
 ```

-#### Linux
+#### Using Linux

-Instalační příkaz Postgres (závisí na distribuci):
+Installation command (depends on your distro):

 ```sh
 sudo apt install postgresql
 ```

-### WSL (Subsystém Windows pro Linux)
+### Using WSL (Windows Subsystem for Linux)

 You can use Matchstick on WSL both using the Docker approach and the binary approach. As WSL can be a bit tricky, here are a few tips in case you run into issues like

@@ -76,7 +85,7 @@ A konečně, nepoužívejte `graph test` (který používá globální instalaci
 }
 ```

-### Použití
+### Using Matchstick

 To use **Matchstick** in your subgraph project, just open a terminal, navigate to the root folder of your project and simply run `graph test [options] <datasource>` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in the test folder (or all existing tests if no datasource flag is specified).

@@ -1368,7 +1377,7 @@ Výstup protokolu obsahuje dobu trvání test. Zde je příklad:

 > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined

-To znamená, že jste ve svém kódu použili `console.log`, což není podporováno jazykem AssemblyScript. Zvažte prosím použití [Logging API](/developing/graph-ts/api/#logging-api)
+This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api)

 > ERROR TS2554: Expected ? arguments, but got ?.
 >
@@ -1384,6 +1393,10 @@ To znamená, že jste ve svém kódu použili `console.log`, což není podporov

 The argument mismatch is caused by a mismatch between `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version.
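+
+Once your `graph-ts` and `matchstick-as` versions are aligned, a minimal test should compile and run. For orientation, here is a hedged sketch of what a small Matchstick test can look like; the `Gravatar` entity and its `displayName` field are assumptions borrowed from the standard example subgraph, not something defined on this page:
+
+```typescript
+import { assert, clearStore, test } from 'matchstick-as/assembly/index'
+import { Gravatar } from '../generated/schema' // assumed generated entity
+
+test('stores a gravatar with the expected display name', () => {
+  // Create and save an entity directly, then assert on the state of the mock store
+  let gravatar = new Gravatar('0x1')
+  gravatar.displayName = 'First Gravatar'
+  gravatar.save()
+
+  assert.fieldEquals('Gravatar', '0x1', 'displayName', 'First Gravatar')
+
+  // Clean up the mock store so other tests start from a blank state
+  clearStore()
+})
+```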
+## Další zdroje + +For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). + ## Zpětná vazba Pokud máte nějaké dotazy, zpětnou vazbu, požadavky na funkce nebo se jen chcete ozvat, nejlépe na Graf Discord, kde máme pro Matchstick vyhrazený kanál s názvem 🔥| unit-testing. diff --git a/website/pages/cs/glossary.mdx b/website/pages/cs/glossary.mdx index b1e74d28b440..5e3859945744 100644 --- a/website/pages/cs/glossary.mdx +++ b/website/pages/cs/glossary.mdx @@ -10,11 +10,9 @@ title: Glosář - **Koncový bod**: URL, které lze použít k dotazu na podgraf. Testovací koncový bod pro Podgraf Studio je `https://api.studio.thegraph.com/query///` a koncový bod Graf Exploreru je `https://gateway.thegraph.com/api//subgraphs/id/`. Koncový bod Graf Explorer se používá k dotazování podgrafů v decentralizované síti Graf. -- **Podgraf**: Otevřené API, které získává data z blockchainu, zpracovává je a ukládá tak, aby bylo možné se na ně snadno dotazovat prostřednictvím GraphQL. Vývojáři mohou vytvářet, nasazovat a publikovat podgrafy v síti Graf Poté mohou indexátoři začít indexovat podgrafy, aby je kdokoli mohl vyhledávat. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hostovaná služba**: Dočasná lešenářská služba pro vytváření a dotazování podgrafů v době, kdy decentralizovaná síť Graf dozrává v oblasti nákladů na služby, kvality služeb a zkušeností vývojářů. - -- **Indexery**: Účastníci sítě, kteří provozují indexovací uzly pro indexování dat z blockchainů a obsluhu dotazů GraphQL. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Příjmy indexátorů**: Indexátoři jsou v GRT odměňováni dvěma složkami: slevami z poplatků za dotazy a odměnami za indexování. @@ -22,19 +20,19 @@ title: Glosář 2. **Odměny za indexování**: Odměny, které indexátory obdrží za indexování podgrafů. Odměny za indexování jsou generovány prostřednictvím nové emise 3% GRT ročně. -- **Vlastní vklad indexátora**: Částka GRT, kterou indexátoři vkládají, aby se mohli účastnit decentralizované sítě. Minimum je 100,000 GRT a horní hranice není stanovena. +- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade indexeru**: Dočasný indexer určený jako záložní pro dotazy na podgrafy, které nejsou obsluhovány jinými indexery v síti. Zajišťuje bezproblémový přechod pro podgrafy, které se upgradují z hostované služby na Síť Graf. Upgrade Indexer není konkurenční vůči ostatním Indexerům. Podporuje řadu blokových řetězců, které byly dříve dostupné pouze v hostované službě. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegátoři**: Účastníci sítě, kteří vlastní GRT a delegují své GRT na indexátory. To umožňuje Indexerům zvýšit svůj podíl v podgrafech v síti. Delegáti na oplátku dostávají část odměn za indexování, které indexátoři dostávají za zpracování podgrafů. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. 
This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegační daň**: 0.5% poplatek, který platí delegáti, když delegují GRT na indexátory. GRT použitý k úhradě poplatku se spálí. -- **Kurátoři**: Účastníci sítě, kteří identifikují vysoce kvalitní podgrafy a "kurátorují" je (tj. signalizují na nich GRT) výměnou za kurátorské podíly. Když indexátoři požadují poplatky za dotaz na podgraf, 10% se rozdělí kurátorům tohoto podgrafu. Indexátoři získávají indexační odměny úměrné signálu na podgrafu. Vidíme korelaci mezi množstvím signalizovaných GRT a počtem indexátorů indexujících podgraf. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: Kurátoři platí 1% poplatek, když signalizují GRT na podgraf GRT použitý k zaplacení poplatku se spálí. -- **Podgraf Spotřebitel**: Jakákoli aplikace nebo uživatel, který se dotazuje na podgraf. +- **Data Consumer**: Any application or user that queries a subgraph. - **Vývojář podgrafů**: Vývojář, který vytváří a nasazuje subgraf do decentralizované sítě Grafu. @@ -46,15 +44,15 @@ title: Glosář 1. **Aktivní**: Alokace je považována za aktivní, když je vytvořena v řetězci. Tomu se říká otevření alokace a signalizuje síti, že indexátor aktivně indexuje a obsluhuje dotazy pro daný podgraf. Aktivní alokace získávají odměny za indexování úměrné signálu na podgrafu a množství alokovaného GRT. - 2. **Zavřeno**: Indexátor si může nárokovat odměny za indexaci daného podgrafu předložením aktuálního a platného dokladu o indexaci (POI). Tomuto postupu se říká uzavření přídělu. Alokace musí být otevřena minimálně jednu epochu, aby mohla být uzavřena. Maximální doba přidělení je 28 epoch. Pokud indexátor ponechá alokaci otevřenou déle než 28 epoch, je tato alokace označována jako zastaralá. Když je alokace ve stavu **uzavřeno**, může rybář stále otevřít spor a napadnout indexátor za podávání falešných dat. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Podgraf Studio**: Výkonná aplikace pro vytváření, nasazování a publikování podgrafů. -- **Rybáři**: Úloha v rámci sítě Grafu, kterou zastávají účastníci, kteří sledují přesnost a integritu dat poskytovaných indexátory. Pokud Rybář identifikuje odpověď na dotaz nebo POI, o které se domnívá, že je nesprávná, může iniciovat spor s Indexátorem. Pokud spor rozhodne ve prospěch Rybáře, je Indexátor vyřazen. Konkrétně indexátor přijde o 2.5 % svého vlastního podílu na GRT. Z této částky je 50% přiznáno Rybáři jako odměna za jeho bdělost a zbývajících 50% je staženo z oběhu (spáleno). 
Tento mechanismus je navržen tak, aby Rybáře motivoval k tomu, aby pomáhali udržovat spolehlivost sítě tím, že zajistí, aby Indexátoři nesli odpovědnost za data, která poskytují. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Rozhodčí**: Rozhodci jsou účastníci sítě jmenovaní v rámci procesu řízení. Úkolem arbitra je rozhodovat o výsledku sporů týkajících se indexace a dotazů. Jejich cílem je maximalizovat užitečnost a spolehlivost sítě Graf. -- **Slashing**: Indexerům může být snížen jejich vlastní GRT za poskytnutí nesprávného POI nebo za poskytnutí nepřesných dat. Procento slashingu je parametr protokolu, který je v současné době nastaven na 2.5% vlastního podílu indexátora. 5% z kráceného GRT připadne rybáři, který nepřesná data nebo nesprávné POI zpochybnil. Zbývajících 50% se spálí. +- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. - **Odměny za indexování**: Odměny, které indexátory obdrží za indexování podgrafů. Odměny za indexování se rozdělují v GRT. @@ -62,11 +60,11 @@ title: Glosář - **GRT**: Token pracovního nástroje Grafu. GRT poskytuje účastníkům sítě ekonomické pobídky za přispívání do sítě. -- **POI nebo Doklad o indexování**: Když indexátor uzavře svůj příděl a chce si nárokovat své naběhlé odměny za indexování na daném podgrafu, musí předložit platný a aktuální doklad o indexování (POI). Rybáři mohou POI poskytnuté indexátorem zpochybnit. Spor vyřešený ve prospěch lovce bude mít za následek snížení indexátoru. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Uzel grafu**: Uzel grafu je komponenta, která indexuje podgrafy a zpřístupňuje výsledná data pro dotazování prostřednictvím rozhraní GraphQL API. Jako takový je ústředním prvkem zásobníku indexátoru a správná činnost Uzel grafu je pro úspěšný provoz indexátoru klíčová. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Agent indexátoru**: Agent indexeru je součástí zásobníku indexeru. Usnadňuje interakce indexeru v řetězci, včetně registrace v síti, správy rozmístění podgrafů do jeho grafových uzlů a správy alokací. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **Klient grafu**: Knihovna pro decentralizované vytváření dapps na bázi GraphQL. @@ -76,12 +74,8 @@ title: Glosář - **Období vychladnutí**: Doba, která zbývá do doby, než indexátor, který změnil své parametry delegování, může tuto změnu provést znovu. -- **Nástroje pro přenos L2**: Chytré smlouvy a UI, které umožňují účastníkům sítě převádět aktiva související se sítí z mainnetu Ethereum do Arbitrum One. Účastníci sítě mohou převádět delegované GRT, podgrafy, kurátorské podíly a vlastní podíl Indexera. - -- **_Vylepšit_ podgrafu do Sítě grafů**: Proces přesunu podgrafu z hostované služby do Sítě grafů. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. -- **_Aktualizace_ podgrafu**: Proces vydání nové verze podgrafu s aktualizacemi manifestu, schématu nebo mapování podgrafu. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrace**: Proces sdílení kurátorů, při kterém se přechází ze staré verze podgrafu na novou verzi podgrafu (např. při aktualizaci verze v0.0.1 na verzi v0.0.2). - -- **Okno aktualizace**: Odpočet, kdy mohou uživatelé hostovaných služeb aktualizovat své podgrafy na síť The Graph Network, začíná 11, dubna a končí 12, června 2024. diff --git a/website/pages/cs/index.json b/website/pages/cs/index.json index a1bae4af6a25..4dd97a91d425 100644 --- a/website/pages/cs/index.json +++ b/website/pages/cs/index.json @@ -56,10 +56,6 @@ "graphExplorer": { "title": "Průzkumník grafů", "description": "Prozkoumání podgrafů a interakce s protokolem" - }, - "hostedService": { - "title": "Hostovaná služba", - "description": "Vytváření a zkoumání podgrafů v hostované službě" } } }, diff --git a/website/pages/cs/managing/delete-a-subgraph.mdx b/website/pages/cs/managing/delete-a-subgraph.mdx index 68ef0a37da75..fb6241fd8526 100644 --- a/website/pages/cs/managing/delete-a-subgraph.mdx +++ b/website/pages/cs/managing/delete-a-subgraph.mdx @@ -9,7 +9,9 @@ Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). ## Step-by-Step 1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). + 2. Click on the three-dots to the right of the "publish" button. + 3. Click on the option to "delete this subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) @@ -24,6 +26,6 @@ Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). ### Important Reminders - Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. +- Kurátoři již nebudou moci signalizovat na podgrafu. - Curators that already signaled on the subgraph can withdraw their signal at an average share price. - Deleted subgraphs will show an error message. 
diff --git a/website/pages/cs/managing/transfer-a-subgraph.mdx b/website/pages/cs/managing/transfer-a-subgraph.mdx index c4060284d5d9..19999c39b1e3 100644 --- a/website/pages/cs/managing/transfer-a-subgraph.mdx +++ b/website/pages/cs/managing/transfer-a-subgraph.mdx @@ -1,19 +1,17 @@ --- -title: Transfer and Deprecate a Subgraph +title: Transfer a Subgraph --- -## Transferring ownership of a subgraph - Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. -**Please note the following:** +## Reminders - Whoever owns the NFT controls the subgraph. - If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. - You can easily move control of a subgraph to a multi-sig. - A community member can create a subgraph on behalf of a DAO. -### View your subgraph as an NFT +## View Your Subgraph as an NFT To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: @@ -27,39 +25,18 @@ Or a wallet explorer like **Rainbow.me**: https://rainbow.me/your-wallet-addres ``` -### Step-by-Step +## Step-by-Step To transfer ownership of a subgraph, do the following: -1. Use the UI built into Subgraph Studio: +1. Use the UI built into Subgraph Studio: - ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. Choose the address that you would like to transfer the subgraph to: - ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: ![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) - -## Deprecating a subgraph - -Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. - -### Step-by-Step - -To deprecate your subgraph, do the following: - -1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). -2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. -3. Your subgraph will no longer appear in searches on Graph Explorer. - -**Please note the following:** - -- The owner's wallet should call the `deprecateSubgraph` function. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deprecated subgraphs will show an error message. - -> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/cs/network/benefits.mdx b/website/pages/cs/network/benefits.mdx index 6f786e54adfe..98d8bba2cb04 100644 --- a/website/pages/cs/network/benefits.mdx +++ b/website/pages/cs/network/benefits.mdx @@ -89,4 +89,4 @@ Decentralizovaná síť Grafu poskytuje uživatelům přístup ke geografické r Podtrženo a sečteno: Síť Graf je levnější, jednodušší na používání a poskytuje lepší výsledky než lokální provozování `graph-node`. 
-Začněte používat síť Graf ještě dnes a zjistěte, jak [upgradovat svůj podgraf do decentralizované sítě Grafu](/cookbook/upgrading-a-subgraph). +Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/quick-start). diff --git a/website/pages/cs/network/curating.mdx b/website/pages/cs/network/curating.mdx index 7b10db17c678..d0230e080bdf 100644 --- a/website/pages/cs/network/curating.mdx +++ b/website/pages/cs/network/curating.mdx @@ -8,9 +8,7 @@ Curators are critical to The Graph's decentralized economy. They use their knowl Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. -Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. - -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +16,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. 
Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -30,11 +28,11 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un Kurátor si může zvolit, zda bude signalizovat na konkrétní verzi podgrafu, nebo zda se jeho signál automaticky přenese na nejnovější produkční sestavení daného podgrafu. Obě strategie jsou platné a mají své výhody i nevýhody. -Signalizace na konkrétní verzi je užitečná zejména tehdy, když jeden podgraf používá více dApps. Jedna dApp může potřebovat pravidelně aktualizovat podgraf o nové funkce. Jiná dApp může preferovat používání starší, dobře otestované verze podgrafu. Při počáteční kurátorské úpravě je účtována standardní daň ve výši 1 %. +Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. Automatická migrace signálu na nejnovější produkční sestavení může být cenná, protože zajistí, že se poplatky za dotazy budou neustále zvyšovat. Při každém kurátorství se platí 1% kurátorský poplatek. Při každé migraci také zaplatíte 0,5% kurátorskou daň. Vývojáři podgrafu jsou odrazováni od častého publikování nových verzí - musí zaplatit 0.5% kurátorskou daň ze všech automaticky migrovaných kurátorských podílů. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,8 +47,8 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Rizika 1. Trh s dotazy je v Graf ze své podstaty mladý a existuje riziko, že vaše %APY může být nižší, než očekáváte, v důsledku dynamiky rodícího se trhu. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. 
This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Podgraf může selhat kvůli chybě. Za neúspěšný podgraf se neúčtují poplatky za dotaz. V důsledku toho budete muset počkat, až vývojář chybu opraví a nasadí novou verzi. - Pokud jste přihlášeni k odběru nejnovější verze podgrafu, vaše sdílené položky se automaticky přemigrují na tuto novou verzi. Při tom bude účtována 0,5% kurátorská daň. - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. @@ -63,9 +61,9 @@ By signalling on a subgraph, you will earn a share of all the query fees that th ### 2. Jak se rozhodnu, které podgrafy jsou kvalitní a na kterých je třeba signalizovat? -Nalezení kvalitních podgrafů je složitý úkol, ale lze k němu přistupovat mnoha různými způsoby. Jako kurátor chcete hledat důvěryhodné podgrafy, které jsou zdrojem objemu dotazů. Důvěryhodný podgraf může být cenný, pokud je úplný, přesný a podporuje datové potřeby dApp. Špatně navržený podgraf může vyžadovat revizi nebo opětovné zveřejnění a může také skončit neúspěchem. Pro kurátory je zásadní, aby přezkoumali architekturu nebo kód podgrafu, aby mohli posoudit, zda je podgraf hodnotný. V důsledku toho: +Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: -- Kurátoři mohou využít své znalosti sítě k tomu, aby se pokusili předpovědět, jak může jednotlivý podgraf v budoucnu generovat vyšší nebo nižší objem dotazů +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. Jaké jsou náklady na aktualizaci podgrafu? @@ -78,50 +76,14 @@ Doporučujeme, abyste podgrafy neaktualizovali příliš často. Další podrobn ### 5. Mohu prodat své kurátorské podíly? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. 
-- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Křivka lepení 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Cena za akcii](/img/price-per-share.png) - -V důsledku toho se cena lineárně zvyšuje, což znamená, že nákup akcie bude v průběhu času dražší. Zde je příklad toho, co máme na mysli, viz níže uvedená vazební křivka: - -![Křivka lepení](/img/bonding-curve.png) - -Uvažujme, že máme dva kurátory, kteří mintují podíly pro podgraf - -- Kurátor A signalizuje jako první na podgrafu. Přidáním 120,000 GRT do křivky se jim podaří vydolovat 2000 akcií. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Vzhledem k tomu, že oba kurátoři mají polovinu všech kurátorských podílů, dostávali by stejnou částku kurátorských honorářů. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- Zbývající kurátor by nyní obdržel všechny kurátorské honoráře za tento podgraf. Pokud by své podíly spálili a vybrali GRT, získali by 120,000 GRT. -- **TLDR:** Ocenění kurátorských akcií GRT je určeno vazebnou křivkou a může být volatilní. Existuje potenciál pro vznik velkých ztrát. Včasná signalizace znamená, že do každé akcie vložíte méně GRT. 
V důsledku to znamená, že vyděláte více kurátorských poplatků za GRT než pozdější kurátoři za stejný podgraf. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -V případě Grafu se využívá [Bankorova implementace vzorce vazební křivky](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA). - Stále jste zmateni? Podívejte se na našeho videoprůvodce kurátorstvím níže diff --git a/website/pages/cs/network/delegating.mdx b/website/pages/cs/network/delegating.mdx index d444c8edf2b3..ef106bad0942 100644 --- a/website/pages/cs/network/delegating.mdx +++ b/website/pages/cs/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegování --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Průvodce delegáta -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly. The Ethereum community provides a comprehensive resource regarding wallets through the following link ([source](https://ethereum.org/en/wallets/)). There are three sections in this guide: @@ -24,15 +34,19 @@ Níže jsou uvedena hlavní rizika plynoucí z delegáta v protokolu. Delegáti nemohou být za špatné chování kráceni, ale existuje daň pro delegáty, která má odradit od špatného rozhodování, jež by mohlo poškodit integritu sítě. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. 
For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### Konec období vázanosti delegací Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
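+As a rough, hypothetical illustration of the break-even calculation mentioned above, the TypeScript sketch below estimates how many days of rewards it would take to earn back the 0.5% delegation tax. Only the tax rate comes from this page; the delegation amount and the assumed annual reward rate are made-up inputs, not protocol values.
+
+```typescript
+// Hypothetical sketch: only the 0.5% delegation tax is taken from this page.
+const DELEGATION_TAX = 0.005;
+
+function daysToRecoverTax(delegatedGrt: number, assumedAnnualRate: number): number {
+  const taxBurned = delegatedGrt * DELEGATION_TAX; // e.g. 1,000 GRT -> 5 GRT burned
+  const effectiveStake = delegatedGrt - taxBurned; // GRT actually working for you
+  const rewardsPerDay = (effectiveStake * assumedAnnualRate) / 365; // simple, non-compounding estimate
+  return taxBurned / rewardsPerDay;
+}
+
+// Example: delegating 1,000 GRT to an Indexer assumed to yield 10% per year.
+console.log(`~${daysToRecoverTax(1_000, 0.1).toFixed(1)} days to earn back the tax`);
+```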
![Zrušení vázanosti delegací](/img/Delegation-Unbonding.png) _Všimněte si 0.5% poplatku v UI delegací a 28denní lhůty. @@ -41,47 +55,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Výběr důvěryhodného indexátora se spravedlivou odměnou pro delegáty -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *Nejlepší indexátor dává delegátům 90 % odměn. Na prostřední dává delegátům 20 % odměn. Spodní dává delegátům ~83 %.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.
+
+As you can see, in order to choose the right Indexer, you must consider multiple things.
-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
-### Výpočet očekávaného výnosu delegátů
+## Calculating Delegators’ Expected Return
-A Delegator must consider a lot of factors when determining the return. These include:
+A Delegator must consider the following factors to determine a return:
-- Technický delegát se může také podívat na schopnost indexátoru používat dostupné delegované tokeny. Pokud Indexátor nealokuje všechny dostupné tokeny, nevydělává pro sebe ani pro své Delegáty maximální možný zisk.
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- Consider an Indexer’s ability to use the Delegated tokens available to them.
+ - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+ - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.
### S ohledem na snížení poplatků za dotaz a indexaci
-Jak je popsáno v předchozích částech, měli byste si vybrat indexátor, který je transparentní a poctivý, pokud jde o nastavení snížení poplatků za dotaz a indexování. Delegovatel by se měl také podívat na dobu Cooldown parametrů, aby zjistil, jak velkou má časovou rezervu. Poté je poměrně jednoduché vypočítat výši odměn, které Delegátoři dostávají. Vzorec je následující:
+You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. 
You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegování Obrázek 3](/img/Delegation-Reward-Formula.png) ### Zohlednění fondu delegování indexátoru -Další věcí, kterou musí delegát zvážit, je, jakou část fondu delegátů vlastní. Všechny odměny za delegování se rozdělují rovnoměrně, přičemž jednoduché vyvážení fondu se určuje podle částky, kterou delegát do fondu vložil. Delegát tak získá podíl na fondu: +Delegators should consider the proportion of the Delegation Pool they own. -![Sdílet vzorec](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Sdílet vzorec](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Vzhledem ke kapacitě delegace -Další věcí, kterou je třeba zvážit, je kapacita delegování. V současné době je poměr delegování nastaven na 16. To znamená, že pokud indexátor vsadil 1,000,000 GRT, jeho delegační kapacita je 16,000,000 GRT delegovaných tokenů, které může v protokolu použít. Jakékoli delegované tokeny nad toto množství rozředí všechny odměny delegátora. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### Chyba MetaMask "Čekající transakce" -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### Příklad -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+Let’s say you attempt to delegate with an insufficient gas fee relative to the current prices.
-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
+- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined because transactions for an address must be processed in order.
+- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
-A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
+A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
-## Videoprůvodce UI sítě
+## Video Guide
-This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI.
+This video guide reviews this page while interacting with the UI.
diff --git a/website/pages/cs/network/developing.mdx b/website/pages/cs/network/developing.mdx
index 6d508e4a3b7a..106792c6b4ed 100644
--- a/website/pages/cs/network/developing.mdx
+++ b/website/pages/cs/network/developing.mdx
@@ -2,52 +2,29 @@ title: Vývoj
---
-Vývojáři jsou poptávkovou stranou ekosystému Grafu. Vývojáři vytvářejí podgrafy a publikují je v síti Graf. Poté se dotazují na živé podgrafy pomocí GraphQL, aby mohli využívat své aplikace.
+To start coding right away, go to [Developer Quick Start](/quick-start/).
-## Životní cyklus podgrafů
+## Přehled
-Podgrafy nasazené do sítě mají definovaný životní cyklus.
+As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
-### Stavět lokálně
+On The Graph, you can:
-Stejně jako při vývoji všech podgrafů se začíná lokálním vývojem a testováním. Vývojáři mohou používat stejné místní nastavení, ať už vytvářejí pro síti Graf, hostovanou službu nebo místní uzel Grafu, a využívat při vytváření podgrafu `graph-cli` a `graph-ts`. Vývojářům se doporučuje používat nástroje, jako je [Matchstick](https://github.com/LimeChain/matchstick), pro testování jednotek, aby zvýšili robustnost svých podgrafů.
+1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing subgraphs (see the example below). 
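+As a minimal sketch of what querying an existing subgraph looks like, the example below sends a GraphQL query with `fetch` in TypeScript. The gateway URL, API key, subgraph ID, and the `tokens` entity are placeholders; the fields you can query depend on the schema of the subgraph you choose.
+
+```typescript
+// Hypothetical example: the endpoint, API key, subgraph ID, and entity names are placeholders.
+const endpoint =
+  "https://gateway.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/<SUBGRAPH_ID>";
+
+const query = `
+  {
+    tokens(first: 5) {
+      id
+      symbol
+    }
+  }
+`;
+
+async function querySubgraph(): Promise<void> {
+  const response = await fetch(endpoint, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({ query }),
+  });
+  const { data, errors } = await response.json();
+  if (errors) throw new Error(JSON.stringify(errors));
+  console.log(data.tokens);
+}
+
+querySubgraph().catch(console.error);
+```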
-> Síť Graf má určitá omezení, pokud jde o funkce a podporu sítě. Odměny za indexaci získají pouze podgrafy na [podporovaných sítích](/developing/supported-networks) a odměny za indexaci nemohou získat ani podgrafy, které načítají data z IPFS. +### What is GraphQL? -### Deploy to Subgraph Studio +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. +### Developer Actions -### Publikovat v síti +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. -Jakmile je vývojář se svým podgrafem spokojen, může jej zveřejnit v síti Grafu. Jedná se o akci v řetězci, která zaregistruje podgraf tak, aby jej indexery mohly objevit. Zveřejněné podgrafy mají odpovídající NFT, který je pak snadno přenositelný. Zveřejněný podgraf má přiřazená metadata, která poskytují ostatním účastníkům sítě užitečný kontext a informace. +### What are subgraphs? -### Signál na podporu indexování +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Publikované podgrafy pravděpodobně nebudou zachyceny indexátory bez přidání signálu. Signál je uzamčený GRT spojený s daným podgrafem, který indikuje indexátorům, že daný podgraf obdrží objem dotazů, a také přispívá k indexačním odměnám, které jsou k dispozici pro jeho zpracování. Vývojáři podgrafů obvykle přidávají ke svým podgrafům signál, aby podpořili indexování. Kurátoři třetích stran mohou také signalizovat daný podgraf, pokud se domnívají, že podgraf bude pravděpodobně vytvářet objem dotazů. - -### Dotazování & Vývoj aplikací - -Jakmile je podgraf zpracován indexery a je k dispozici pro dotazování, mohou jej vývojáři začít používat ve svých aplikacích. Vývojáři se dotazují na podgrafy prostřednictvím brány, která jejich dotazy předává indexeru, jenž podgraf zpracoval, a platí poplatky za dotazy v GRT. - -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. - -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. - -### Updating Subgraphs - -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. 
- -Jakmile je vývojář podgrafu připraven k aktualizaci, může iniciovat transakci, která jeho podgraf nasměruje na novou verzi. Aktualizace podgrafu migruje jakýkoli signál na novou verzi (za předpokladu, že uživatel, který signál aplikoval, zvolil "automatickou migraci"), čímž také vzniká migrační daň. Tato migrace signálu by měla přimět indexátory, aby začaly indexovat novou verzi podgrafu, takže by měl být brzy k dispozici pro dotazování. - -### Vyřazování podgrafů - -V určitém okamžiku se vývojář může rozhodnout, že publikovaný podgraf již nepotřebuje. V tu chvíli může podgraf vyřadit, čímž se kurátorům vrátí všechny signalizované GRT. - -### Různorodé role vývojáře - -Někteří vývojáři se zapojí do celého životního cyklu podgrafů v síti, publikují, dotazují se a iterují své vlastní podgrafy. Někteří se mohou zaměřit na vývoj podgrafů a vytvářet otevřené API, na kterém mohou stavět ostatní. Někteří se mohou zaměřit na aplikace a dotazovat se na podgrafy, které nasadili jiní. - -### Vývojáři a síťová ekonomika - -Vývojáři jsou v síti klíčovým ekonomickým subjektem, který blokuje GRT, aby podpořil indexování, a hlavně se dotazuje na podgrafy, což je hlavní výměna hodnot v síti. Vývojáři podgrafů také spalují GRT, kdykoli je podgraf aktualizován. +Check out the documentation on [subgraphs](/subgraphs/) to learn specifics. diff --git a/website/pages/cs/network/explorer.mdx b/website/pages/cs/network/explorer.mdx index 5d12bb618838..e501104f13e9 100644 --- a/website/pages/cs/network/explorer.mdx +++ b/website/pages/cs/network/explorer.mdx @@ -2,21 +2,35 @@ title: Průzkumník grafů --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Podgrafy -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Obrázek průzkumníka 1](/img/Subgraphs-Explorer-Landing.png) -Po kliknutí do podgrafu budete moci testovat dotazy na hřišti a využívat podrobnosti o síti k přijímání informovaných rozhodnutí. Budete také moci signalizovat GRT na svém vlastním podgrafu nebo podgrafech ostatních, aby si indexátory uvědomily jeho důležitost a kvalitu. To je velmi důležité, protože signalizace na podgrafu motivuje k jeho indexaci, což znamená, že se v síti objeví a nakonec bude sloužit dotazům. 
+When you click into a subgraph, you will be able to do the following:
+
+- Test queries in the playground and leverage network details to make informed decisions.
+- Signal GRT on your own subgraph or the subgraphs of others to make Indexers aware of their importance and quality.
+- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
![Obrázek průzkumníka 2](/img/Subgraph-Details.png)
-Na stránce věnované každému podgrafu se objeví několik podrobností. Patří mezi ně:
+On each subgraph’s dedicated page, you can do the following:
- Signál/nesignál na podgraf
- Zobrazit další podrobnosti, například grafy, ID aktuálního nasazení a další metadata
@@ -31,26 +45,32 @@ Na stránce věnované každému podgrafu se objeví několik podrobností. Pat
## Účastníci
-Na této kartě získáte přehled o všech osobách, které se podílejí na činnostech sítě, jako jsou indexátoři, delegáti a kurátoři. Níže si podrobně rozebereme, co pro vás jednotlivé karty znamenají.
+This section provides a bird’s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
### 1. Indexery
![Obrázek průzkumníka 4](/img/Indexer-Pane.png)
-Začněme u indexátorů. Základem protokolu jsou indexery, které sázejí na podgrafy, indexují je a obsluhují dotazy všech, kdo podgrafy spotřebovávají. V tabulce Indexers uvidíte parametry delegace indexerů, jejich podíl, kolik vsadili na jednotlivé podgrafy a kolik vydělali na poplatcích za dotazy a odměnách za indexování. Hlubší ponory níže:
+Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
-- Query Fee Cut - % slevy z poplatku za dotaz, které si indexátor ponechá při rozdělení s delegáty
-- Efektivní snížení odměny - indexační snížení odměny použité na fond delegací. Pokud je záporná, znamená to, že indexátor odevzdává část svých odměn. Pokud je kladná, znamená to, že si indexátor ponechává část svých odměn
-- Cooldown Remaining - doba, která zbývá do doby, kdy indexátor může změnit výše uvedené parametry delegování. Období Cooldown nastavují indexátory při aktualizaci parametrů delegování.
-- Owned - Jedná se o uložený podíl indexátora, který může být zkrácen za škodlivé nebo nesprávné chování.
-- Delegated - Podíl z delegátů, který může být přidělen indexátor, ale nemůže být zkrácen
-- Allocated - Podíl, který indexátory aktivně alokují k indexovaným podgrafy
-- Dostupná kapacita delegování - množství delegovaných podílů, které mohou indexátoři ještě obdržet, než dojde k jejich nadměrnému delegování
+**Specifics**
+
+- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
+- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
+- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
+- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. 
+- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Maximální kapacita delegování - maximální množství delegovaných podílů, které může indexátor produktivně přijmout. Nadměrný delegovaný podíl nelze použít pro alokace nebo výpočty odměn. -- Poplatky za dotazy - jedná se o celkové poplatky, které koncoví uživatelé zaplatili za dotazy z indexátoru za celou dobu +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Odměny indexátorů - jedná se o celkové odměny indexátorů, které indexátor a jeho delegáti získali za celou dobu. Odměny indexátorů jsou vypláceny prostřednictvím vydání GRT. -Indexátoři mohou získat jak poplatky za dotazy, tak odměny za indexování. Funkčně k tomu dochází, když účastníci sítě delegují GRT na indexátor. To indexátorům umožňuje získávat poplatky za dotazování a odměny v závislosti na parametrech indexátoru. Parametry indexování se nastavují kliknutím na pravou stranu tabulky nebo vstupem do profilu indexátora a kliknutím na tlačítko "Delegate". +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. Chcete-li se dozvědět více o tom, jak se stát indexátorem, můžete se podívat do [oficiální dokumentace](/network/indexing) nebo do [průvodců pro indexátory akademie graf.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ Chcete-li se dozvědět více o tom, jak se stát indexátorem, můžete se pod ### 2. Kurátoři -Kurátoři analyzují podgrafy, aby určili, které podgrafy jsou nejkvalitnější. Jakmile kurátor najde potenciálně atraktivní podgraf, může jej kurátorovi signalizovat na jeho vazební křivce. Kurátoři tak dávají indexátorům vědět, které podgrafy jsou vysoce kvalitní a měly by být indexovány. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Kurátory mohou být členové komunity, konzumenti dat nebo dokonce vývojáři podgrafů, kteří signalizují své vlastní podgrafy tím, že vkládají žetony GRT do vazební křivky. Vložením GRT kurátoři razí kurátorské podíly podgrafu. V důsledku toho mají kurátoři nárok vydělat část poplatků za dotazy, které signalizovaný podgraf generuje. Vázací křivka motivuje kurátory ke kurátorství datových zdrojů nejvyšší kvality. 
Tabulka kurátorů v této části vám umožní vidět:
+In the Curator table listed below, you can see:
- Datum, kdy kurátor zahájil kurátorskou činnost
- Počet uložených GRT
![Obrázek průzkumníka 6](/img/Curation-Overview.png)
-Pokud se chcete o roli kurátora dozvědět více, můžete tak učinit na následujících odkazech [The Graph Academy](https://thegraph.academy/curators/) nebo [oficiální dokumentace.](/network/curating)
+If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/).
### 3. Delegáti
-Delegáti hrají klíčovou roli při udržování bezpečnosti a decentralizace sítě Graf. Podílejí se na síti tím, že delegují (tj. "sází") tokeny GRT jednomu nebo více indexátorům. Bez delegátů mají indexátoři menší šanci získat významné odměny a poplatky. Proto se indexátoři snaží přilákat delegáty tím, že jim nabízejí část odměn za indexování a poplatků za dotazy, které získají.
+Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple Indexers.
-Delegáti zase vybírají indexátory na základě řady různých proměnných, jako je výkonnost v minulosti, míra odměny za indexaci a snížení poplatků za dotaz. Svou roli může hrát i pověst v rámci komunity! Doporučujeme se s vybranými indexátory spojit prostřednictvím [Discord Grafu](https://discord.gg/graphprotocol) nebo [Fóra Grafu](https://forum.thegraph.com/)!
+- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees.
+- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
+- Reputation within the community can also be a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!
![Obrázek průzkumníka 7](/img/Delegation-Overview.png)
-Tabulka Delegáti vám umožní zobrazit aktivní delegáty v komunitě a také metriky, jako jsou:
+In the Delegators table, you can see the active Delegators in the community and important metrics:
- Počet indexátorů, na které deleguje delegát
- Původní delegace delegát
- Odměny, které nashromáždili, ale z protokolu si je nevyzvedli
- Realizované odměny odstranili z protokolu
- Celkové množství GRT, které mají v současné době v protokolu
-- Datum, kdy byly naposledy delegovány na
+- The date they last delegated
-Pokud se chcete dozvědět více o tom, jak se stát delegátem, už nemusíte hledat dál! Stačí, když se vydáte na [oficiální dokumentaci](/network/delegating) nebo [Akademii Graf](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
+If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
## Síť
-V sekci Síť uvidíte globální klíčové ukazatele výkonnosti (KPI) a také možnost přepnout na základ epoch a detailněji analyzovat síťové metriky. Tyto podrobnosti vám poskytnou představu o tom, jak síť funguje v průběhu času. 
+In this section, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
### Přehled
-The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like:
+The overview section has all the current network metrics as well as some cumulative metrics over time:
- Současný celkový podíl v síti
- Rozdělení stake mezi indexátory a jejich delegátory
@@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat
- Parametry protokolu, jako je odměna za kurátorství, míra inflace a další
- Odměny a poplatky současné epochy
-Několik klíčových informací, které stojí za zmínku:
+A few key details to note:
-- **Poplatky za dotazy představují poplatky generované spotřebiteli** a indexátory si je mohou nárokovat (nebo ne) po uplynutí nejméně 7 epoch (viz níže) poté, co byly jejich příděly vůči podgraf uzavřeny a data, která obsluhovali, byla potvrzena spotřebiteli.
-- ** Odměny za indexaci představují množství odměn, které indexátoři nárokovali ze síťové emise během epochy.** Ačkoli je emise protokolu pevně daná, odměny jsou vyraženy až poté, co indexátoři uzavřou své alokace vůči podgraf, které indexovali. Proto se počet odměn v jednotlivých epochách mění (tj. během některých epoch mohli indexátoři kolektivně uzavřít alokace, které byly otevřené mnoho dní).
+- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
![Obrázek průzkumníka 8](/img/Network-Stats.png)
@@ -121,29 +147,34 @@ V části Epochy můžete na základě jednotlivých epoch analyzovat metriky, j
- Aktivní epocha je ta, ve které indexéry právě přidělují podíl a vybírají poplatky za dotazy
- Epoch zúčtování jsou ty, ve kterých se zúčtovávají stavové kanály. To znamená, že indexátoři podléhají krácení, pokud proti nim spotřebitelé zahájí spory.
- Distribuční epochy jsou epochy, ve kterých se vypořádávají státní kanály pro epochy a indexátoři si mohou nárokovat slevy z poplatků za dotazy.
- - Finalizované epochy jsou epochy, u nichž indexátorům nezbývají žádné slevy z poplatků za dotaz, a jsou tedy finalizované.
+ - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.
![Obrázek průzkumníka 9](/img/Epoch-Stats.png)
## Váš uživatelský profil
-Nyní, když jsme si řekli něco o statistikách sítě, přejděme k vašemu osobnímu profilu. Váš osobní profil je místem, kde vidíte svou aktivitu v síti, ať už se jí účastníte jakýmkoli způsobem. Vaše kryptopeněženka bude fungovat jako váš uživatelský profil a pomocí uživatelského panelu si ji budete moci prohlédnout:
+Your personal profile is the place where you can see your network activity, regardless of your role on the network. 
Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Přehled profilů -Zde se zobrazují všechny aktuální akce, které jste provedli. Zde také najdete informace o svém profilu, popis a webové stránky (pokud jste si je přidali). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Obrázek průzkumníka 10](/img/Profile-Overview.png) ### Tab Podgrafy -Pokud kliknete na kartu podgrafy, zobrazí se vaše publikované podgrafy. Nebudou zde zahrnuty žádné podgrafy nasazené pomocí CLI pro účely testování - podgrafy se zobrazí až po jejich zveřejnění v decentralizované síti. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Obrázek průzkumníka 11](/img/Subgraphs-Overview.png) ### Tab Indexování -Pokud kliknete na kartu Indexování, najdete tabulku se všemi aktivními a historickými alokacemi k dílčím grafy a také grafy, které můžete analyzovat a podívat se na svou minulou výkonnost jako indexátor. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Tato část bude také obsahovat podrobnosti o vašich čistých odměnách za indexování a čistých poplatcích za dotazy. Zobrazí se následující metriky: @@ -158,7 +189,9 @@ Tato část bude také obsahovat podrobnosti o vašich čistých odměnách za i ### Tab Delegování -Delegáti jsou pro síť Graf důležití. Delegát musí využít svých znalostí k výběru indexátora, který mu zajistí zdravou návratnost odměn. Zde najdete podrobnosti o svých aktivních a historických delegacích spolu s metrikami Indexátorů, ke kterým jste delegovali. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. V první polovině stránky vidíte graf delegování a také graf odměn. Vlevo vidíte klíčové ukazatele výkonnosti, které odrážejí vaše aktuální metriky delegování. diff --git a/website/pages/cs/network/indexing.mdx b/website/pages/cs/network/indexing.mdx index 7dbb2e7ced77..a3fb06a96484 100644 --- a/website/pages/cs/network/indexing.mdx +++ b/website/pages/cs/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Mnoho informačních panelů vytvořených komunitou obsahuje hodnoty čekajících odměn a lze je snadno zkontrolovat ručně podle následujících kroků: -1. Dotazem na podgraf [mainnet](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) získáte ID všech aktivních alokací: +1. 
Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -477,7 +477,7 @@ graph-indexer-agent start \ --index-node-ids default \ --indexer-management-port 18000 \ --metrics-port 7040 \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ --default-allocation-amount 100 \ --register true \ --inject-dai true \ @@ -512,7 +512,7 @@ graph-indexer-service start \ --postgres-username \ --postgres-password \ --postgres-database is_staging \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ | pino-pretty ``` @@ -545,7 +545,7 @@ Navrhovaným nástrojem pro interakci s **Indexer Management API** je **Indexer - `možná pravidla indexování grafů [možnosti] ` - Nastaví `decisionBasis` pro nasazení na `rules`, takže agent Indexer bude při rozhodování o indexování tohoto nasazení používat pravidla indexování. -- `Akce indexátoru grafu získají [možnosti] ` - Získá jednu nebo více akcí pomocí `all` nebo ponechá `action-id` prázdné pro získání všech akcí. Přídavný argument `--status` lze použít pro vypsání všech akcí určitého stavu. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Akce přidělení fronty @@ -810,7 +810,7 @@ To set the delegation parameters using Graph Explorer interface, follow these st ### Životnost přídělu -Po vytvoření indexátorem prochází zdravé přidělení čtyřmi stavy. +After being created by an Indexer a healthy allocation goes through two states. - **Aktivní** – Jakmile je alokace vytvořena v řetězci ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/ contract/staking/Staking.sol#L316)) je považován za **aktivní**. Část vlastního a/nebo delegovaného podílu indexeru je přidělena na nasazení podgrafu, což jim umožňuje nárokovat si odměny za indexování a obsluhovat dotazy pro toto nasazení podgrafu. Agent indexeru spravuje vytváření alokací na základě pravidel indexeru. diff --git a/website/pages/cs/network/overview.mdx b/website/pages/cs/network/overview.mdx index 0060dfc506a4..aeb16e0d488e 100644 --- a/website/pages/cs/network/overview.mdx +++ b/website/pages/cs/network/overview.mdx @@ -2,14 +2,20 @@ title: Přehled sítě --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Přehled +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. 
With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -Pro zajištění ekonomické bezpečnosti sítě Graf a integrity dotazovaných dat účastníci sázejí a používají graf tokeny ([GRT](/tokenomics)). GRT je pracovní užitkový token, který má hodnotu ERC-20 a slouží k přidělování zdrojů v síti. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. diff --git a/website/pages/cs/new-chain-integration.mdx b/website/pages/cs/new-chain-integration.mdx index 1c5466566491..e9f52eb42a93 100644 --- a/website/pages/cs/new-chain-integration.mdx +++ b/website/pages/cs/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrace nových sítí +title: New Chain Integration --- -Uzel grafu může v současné době indexovat data z následujících typů řetězců: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, prostřednictvím [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, prostřednictvím [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, prostřednictvím [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -Pokud máte zájem o některý z těchto řetězců, je integrace otázkou konfigurace a testování uzlu Graf. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -Pokud máte zájem o jiný typ řetězce, je třeba vytvořit novou integraci s Uzel Graf. Naším doporučeným přístupem je vytvoření nového Firehose pro daný řetězec a následná integrace tohoto Firehose s Uzel Graf. Více informací naleznete níže. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -Pokud je blockchain ekvivalentní EVM a klient/uzel vystavuje standardní EVM JSON-RPC API, měl by být Uzel Grafu schopen indexovat nový řetězec. Další informace naleznete v části [Testování EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. 
Firehose** +#### Testování EVM JSON-RPC -U řetězců, které nejsou založeny na EvM, musí Uzel Graf přijímat data blockchainu prostřednictvím gRPC a známých definic typů. To lze provést prostřednictvím [Firehose](firehose/), nové technologie vyvinuté společností [StreamingFast](https://www.streamingfast.io/), která poskytuje vysoce škálovatelné řešení indexování blockchainu pomocí přístupu založeného na souborech a streamování. Pokud potřebujete s vývojem Firehose pomoci, obraťte se na tým [StreamingFast](mailto:integrations@streamingfast.io/). +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Rozdíl mezi EVM JSON-RPC a Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` *(optionally required for Graph Node to support call handlers)* -Zatímco pro podgrafy jsou tyto dva typy vhodné, pro vývojáře, kteří chtějí vytvářet pomocí [Substreams](substreams/), jako je vytváření [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/), je vždy vyžadován Firehose. Firehose navíc umožňuje vyšší rychlost indexování ve srovnání s JSON-RPC. +### 2. Firehose Integration -Noví integrátoři řetězců EVM mohou také zvážit přístup založený na technologii Firehose vzhledem k výhodám substreamů a jejím masivním možnostem paralelizovaného indexování. Podpora obojího umožňuje vývojářům zvolit si mezi vytvářením substreamů nebo podgrafů pro nový řetězec. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **POZNÁMKA**: Integrace založená na Firehose pro řetězce EVM bude stále vyžadovat, aby indexátory spustily archivační uzel RPC řetězce, aby správně indexovaly podgrafy. Důvodem je neschopnost Firehose poskytovat stav inteligentních kontraktů typicky přístupný metodou `eth_call` RPC. (Stojí za to připomenout, že eth_call je [pro vývojáře není dobrou praxí](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. 
This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testování EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -Aby mohl uzel Grafu přijímat data z řetězce EVM, musí uzel RPC zpřístupnit následující metody EVM JSON RPC: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(volitelně vyžadováno pro Uzel Graf, aby podporoval obsluhu volání)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Config uzlu grafu +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Začněte přípravou místního prostředí** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Config uzlu grafu + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Upravte [tento řádek](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) tak, aby obsahoval nový název sítě a URL adresu EVM kompatibilní s JSON RPC - > Samotný název env var neměňte. Musí zůstat `ethereum`, i když je název sítě jiný. -3. Spusťte uzel IPFS nebo použijte ten, který používá Graf: https://api.thegraph.com/ipfs/ -**Testování integrace lokálním nasazením podgrafu** +2. 
Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Vytvořte jednoduchý příklad podgrafu. Některé možnosti jsou uvedeny níže: - 1. Předpřipravený chytrá smlouva [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) a podgraf je dobrým výchozím bodem - 2. Zavedení lokálního podgrafu z jakéhokoli existujícího chytrého kontraktu nebo vývojového prostředí Solidity [pomocí Hardhat s plugin Graph](https://github.com/graphprotocol/hardhat-graph) -3. Upravte výsledný soubor `subgraph.yaml` změnou názvu `dataSources.network` na stejný, který byl dříve předán uzlu Graf. -4. Vytvořte podgraf v uzlu Graf: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Zveřejněte svůj podgraf v uzlu Graf: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Pokud nedošlo k chybám, měl by uzel Graf synchronizovat nasazený podgraf. Dejte mu čas na synchronizaci a poté odešlete několik dotazů GraphQL na koncový bod API vypsaný v protokolech. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrace nového řetězce s podporou služby Firehose +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Vytvořte jednoduchý příklad podgrafu. Některé možnosti jsou uvedeny níže: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Pokud nedošlo k chybám, měl by uzel Graf synchronizovat nasazený podgraf. Dejte mu čas na synchronizaci a poté odešlete několik dotazů GraphQL na koncový bod API vypsaný v protokolech. -Integrace nového řetězce je možná také pomocí přístupu Firehose. To je v současné době nejlepší možnost pro řetězce, které nejsou součástí EVM, a požadavek na podporu substreamů. Další dokumentace se zaměřuje na to, jak Firehose funguje, přidání podpory Firehose pro nový řetězec a jeho integraci s Uzel Graf. Doporučená dokumentace pro integrátory: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Přidání podpory Firehose pro nový řetězec](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrace graf uzlu s novým řetězcem přes Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. 
These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/cs/querying/graphql-api.mdx b/website/pages/cs/querying/graphql-api.mdx index e1c7f3e566f7..194d477a23c1 100644 --- a/website/pages/cs/querying/graphql-api.mdx +++ b/website/pages/cs/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Dotazy +## What is GraphQL? -Ve schématu podgrafu definujete typy nazvané `Entity`. Pro každý typ `Entity` bude na nejvyšší úrovni typu `Query` vygenerováno pole `entity` a `entity`. Všimněte si, že `dotaz` nemusí být při použití Grafu zahrnut na vrcholu `graphql` dotazu. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Příklady @@ -21,7 +29,7 @@ Dotaz na jednu entitu `Token` definovanou ve vašem schématu: } ``` -> **Poznámka:** Při dotazování na jednu entitu je pole `id` povinné a musí to být řetězec. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Dotaz na všechny entity `Token`: @@ -36,7 +44,10 @@ Dotaz na všechny entity `Token`: ### Třídění -Při dotazování na kolekci lze parametr `orderBy` použít k seřazení podle určitého atributu. Kromě toho lze pomocí parametru `orderDirection` určit směr řazení, `asc` pro vzestupné nebo `desc` pro sestupné. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Příklad @@ -53,7 +64,7 @@ Při dotazování na kolekci lze parametr `orderBy` použít k seřazení podle Od verze Uzel grafu [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) lze entity třídit na základě vnořených entit. -V následujícím příkladu seřadíme tokeny podle jména jejich vlastníka: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ V následujícím příkladu seřadíme tokeny podle jména jejich vlastníka: ### Stránkování -Při dotazování na kolekci lze parametr `První` použít pro stránkování od začátku kolekce. Stojí za zmínku, že výchozí řazení je podle ID ve vzestupném alfanumerickém pořadí, nikoli podle času vytvoření. - -Dále lze parametr `skip` použít k přeskočení entit a stránkování, např. `first:100` zobrazí prvních 100 entit a `first:100, skip:100` zobrazí dalších 100 entit. +When querying a collection, it's best to: -Dotazy by se měly vyvarovat používání velmi velkých hodnot `přeskočit`, protože mají obecně nízkou výkonnost. 
Pro získání velkého počtu položek je mnohem lepší procházet entity na základě atributu, jak je uvedeno v posledním příkladu. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Příklad s použitím `first` @@ -106,7 +118,7 @@ Dotaz na 10 entit `Token`, posunutých o 10 míst od začátku kolekce: #### Příklad s použitím `first` a `id_ge` -Pokud klient potřebuje získat velký počet entit, je mnohem výkonnější založit dotazy na atributu a filtrovat podle něj. Klient by například pomocí tohoto dotazu získal velký počet tokenů: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -Poprvé by odeslal dotaz s `lastID = ""` a při dalších požadavcích by nastavil `lastID` na atribut `id` poslední entity v předchozím požadavku. Tento přístup bude fungovat podstatně lépe než použití rostoucích hodnot `skip`. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtrování -Pomocí parametru `where` můžete v dotazech filtrovat různé vlastnosti. V rámci parametru `kde` můžete filtrovat podle více hodnot. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Příklad s použitím `where` @@ -155,7 +168,7 @@ Pro porovnání hodnot můžete použít přípony jako `_gt`, `_lte`: #### Příklad pro filtrování bloků -Entity můžete filtrovat také pomocí `_change_block(number_gte: Int)` - filtruje entity, které byly aktualizovány v zadaném bloku nebo po něm. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. To může být užitečné, pokud chcete načíst pouze entity, které se změnily například od posledního dotazování. Nebo může být užitečná pro zkoumání nebo ladění změn entit v podgrafu (v kombinaci s blokovým filtrem můžete izolovat pouze entity, které se změnily v určitém bloku). @@ -193,7 +206,7 @@ Od verze Uzel grafu [`v0.30.0`](https://github.com/graphprotocol/graph-node/rele ##### Operátor `AND` -V následujícím příkladu filtrujeme výzvy s `outcome` `succeeded` a `number` větším nebo rovným `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -223,7 +236,7 @@ V následujícím příkladu filtrujeme výzvy s `outcome` `succeeded` a `number ##### Operátor `OR` -V následujícím příkladu filtrujeme výzvy s `outcome` `succeeded` nebo `number` větším nebo rovným `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. 
```graphql
{
@@ -278,9 +291,9 @@ _change_block(number_gte: Int)

Můžete se dotazovat na stav entit nejen pro nejnovější blok, což je výchozí nastavení, ale také pro libovolný blok v minulosti. Blok, u kterého má dotaz proběhnout, lze zadat buď číslem bloku, nebo jeho blokovým hashem, a to tak, že do polí toplevel dotazů zahrnete argument `blok`.

-Výsledek takového dotazu se v průběhu času nemění, tj. dotaz na určitý minulý blok vrátí stejný výsledek bez ohledu na to, kdy je proveden, s výjimkou toho, že pokud se dotazujete na blok velmi blízko hlavy řetězce, výsledek se může změnit, pokud se ukáže, že tento blok není v hlavním řetězci a řetězec se reorganizuje. Jakmile lze blok považovat za konečný, výsledek dotazu se nezmění.
+The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.

-Všimněte si, že současná implementace stále podléhá určitým omezením, která by mohla tyto záruky porušit. Implementace nemůže vždy zjistit, že daný blokový hash vůbec není v hlavním řetězci, nebo že výsledek dotazu podle blokového hashe na blok, který ještě nelze považovat za finální, může být ovlivněn reorganizací bloku probíhající současně s dotazem. Neovlivňují výsledky dotazů podle blokové hash, pokud je blok finální a je známo, že je v hlavním řetězci. [Toto Problém ](https://github.com/graphprotocol/graph-node/issues/1405) podrobně vysvětluje, jaká jsou tato omezení.
+> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation cannot always tell that a given block hash is not on the main chain at all, and a query by block hash for a block that cannot yet be considered final could be influenced by a block reorganization running concurrently with the query. These limitations do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.

#### Příklad

@@ -376,11 +389,11 @@ Uzel grafu implementuje ověření [založené na specifikacích](https://spec.g

## Schema

-Schéma datového zdroje - tj. typy entit, hodnoty a vztahy, které jsou k dispozici pro dotazování - jsou definovány pomocí [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
+The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, is defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-Schéma GraphQL obecně definují kořenové typy pro `dotazy`, `odběry` a `mutace`. Graf podporuje pouze `dotazy`. Kořenový typ `Dotaz` pro váš podgraf je automaticky vygenerován ze schématu GraphQL, které je součástí manifestu podgrafu.
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
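As an illustrative sketch of what that generated `Query` type provides (the `Token` entity and its fields below are assumptions for illustration, not taken from this page), a single entity type declared in `schema.graphql` is enough for Graph Node to expose both a single-entity lookup field and a collection field that you can query directly:

```javascript
// Hypothetical entity definition, as it would appear in schema.graphql:
//
//   type Token @entity {
//     id: ID!
//     name: String!
//     owner: Bytes!
//   }
//
// From this one definition, the generated root Query type exposes a
// `token(id: ...)` lookup field and a `tokens(...)` collection field:
const query = /* GraphQL */ `
  {
    token(id: "1") {
      id
      name
    }
    tokens(first: 5, orderBy: name, orderDirection: asc) {
      id
      owner
    }
  }
`
```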
-> **Poznámka:** Naše API nevystavuje mutace, protože se očekává, že vývojáři budou vydávat transakce přímo proti podkladovému blockchainu ze svých aplikací. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/cs/querying/managing-api-keys.mdx b/website/pages/cs/querying/managing-api-keys.mdx index 1a9626028815..0f5721e5cbcb 100644 --- a/website/pages/cs/querying/managing-api-keys.mdx +++ b/website/pages/cs/querying/managing-api-keys.mdx @@ -2,23 +2,33 @@ title: Správa klíčů API --- -Bez ohledu na to, zda jste vývojář dapp nebo podgraf, budete muset spravovat klíče API. To je důležité pro to, abyste se mohli dotazovat na podgrafy, protože klíče API zajišťují, že spojení mezi službami aplikace jsou platná a autorizovaná. To zahrnuje ověřování koncového uživatele a zařízení, které aplikaci používá. +## Přehled -The "API keys" table lists out existing API keys, which will give you the ability to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, as well as total query numbers. You can click the "three dots" menu to edit a given API key: +API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. + +### Create and Manage API Keys + +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. + +The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. + +You can click the "three dots" menu to the right of a given API key to: - Rename API key - Regenerate API key - Delete API key - Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +### API Key Details + You can click on an individual API key to view the Details page: -1. Sekce **Přehled** vám umožní: +1. Under the **Overview** section, you can: - Úprava názvu klíče - Regenerace klíčů API - Zobrazení aktuálního využití klíče API se statsi: - Počet dotazů - Výše vynaložených GRT -2. V části **Zabezpečení** budete moci zvolit nastavení zabezpečení podle úrovně kontroly, kterou chcete mít nad klíči API. V této části můžete: +2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Zobrazení a správa názvů domén oprávněných používat váš klíč API - Přiřazení podgrafů, na které se lze dotazovat pomocí klíče API diff --git a/website/pages/cs/querying/querying-best-practices.mdx b/website/pages/cs/querying/querying-best-practices.mdx index f2fb16bcf8b7..85f43c8b931d 100644 --- a/website/pages/cs/querying/querying-best-practices.mdx +++ b/website/pages/cs/querying/querying-best-practices.mdx @@ -2,17 +2,15 @@ title: Osvědčené postupy dotazování --- -Graf poskytuje decentralizovaný způsob dotazování na data z blockchainů. +The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. 
-Data sítě Graf jsou zpřístupněna prostřednictvím GraphQL API, což usnadňuje dotazování na data pomocí jazyka GraphQL. - -Tato stránka vás provede základními pravidly jazyka GraphQL a osvědčenými postupy pro dotazy GraphQL. +Learn the essential GraphQL language rules and best practices to optimize your subgraph. --- ## Dotazování GraphQL API -### Anatomie dotazu GraphQL +### The Anatomy of a GraphQL Query Na rozdíl od rozhraní REST API je GraphQL API postaveno na schématu, které definuje, jaké dotazy lze provádět. @@ -52,7 +50,7 @@ query [operationName]([variableName]: [variableType]) { } ``` -I když je seznam syntaktických doporučení a doporučení dlouhý, zde jsou základní pravidla, která je třeba mít na paměti, pokud jde o psaní dotazů GraphQL: +## Rules for Writing GraphQL Queries - Každý `název dotazu` smí být při jedné operaci použit pouze jednou. - Každé `pole` musí být ve výběru použito pouze jednou (pod `token` se nemůžeme dvakrát dotazovat na `id`) @@ -61,24 +59,24 @@ I když je seznam syntaktických doporučení a doporučení dlouhý, zde jsou z - V daném seznamu proměnných musí být každá z nich jedinečná. - Musí být použity všechny definované proměnné. -Nedodržení výše uvedených pravidel skončí chybou Graf API. +> Note: Failing to follow these rules will result in an error from The Graph API. -Kompletní seznam pravidel s příklady kódu naleznete v naší příručce [GraphQL Validations](/release-notes/graphql-validations-migration-guide/). +For a complete list of rules with code examples, check out [GraphQL Validations guide](/release-notes/graphql-validations-migration-guide/). ### Odeslání dotazu na GraphQL API -GraphQL je jazyk a sada konvencí, které se přenášejí přes protokol HTTP. +GraphQL is a language and set of conventions that transport over HTTP. -To znamená, že se můžete dotazovat na GraphQL API pomocí standardního `fetch` (nativně nebo pomocí `@whatwg-node/fetch` nebo `isomorphic-fetch`). +It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -Jak je však uvedeno v části ["Dotazování z aplikace"](/querying/querying-from-an-application), doporučujeme používat našeho `graf-klienta`, který podporuje jedinečné funkce, jako např: +However, as mentioned in ["Querying from an Application"](/querying/querying-from-an-application), it's recommended to use `graph-client`, which supports the following unique features: - Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu - [Automatické sledování](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatické stránkování](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Plně zadaný výsledekv -Zde se dozvíte, jak zadat dotaz do Grafu pomocí `graph-client`: +Here's how to query The Graph with `graph-client`: ```tsx import { execute } from '../.graphclient' @@ -102,9 +100,7 @@ async function main() { main() ``` -Další alternativy klienta GraphQL jsou popsány v ["Dotazování z aplikace"](/querying/querying-from-an-application). - -Nyní, když jsme se seznámili se základními pravidly syntaxe dotazů GraphQL, se podíváme na osvědčené postupy psaní dotazů GraphQL. +More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). 
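To make the plain-`fetch` option mentioned above concrete, here is a minimal sketch that sends a GraphQL query to a subgraph endpoint over HTTP. The endpoint is a placeholder and the `Token` entity is an illustrative assumption, not a specific deployment:

```javascript
// Minimal sketch: querying a subgraph endpoint with the standard fetch API.
// Replace the placeholder URL with your own subgraph query endpoint.
const SUBGRAPH_URL = 'https://api.studio.thegraph.com/query/<id>/<subgraph-name>/<version>'

const query = /* GraphQL */ `
  query GetTokens($first: Int!) {
    tokens(first: $first) {
      id
      owner
    }
  }
`

async function fetchTokens() {
  const response = await fetch(SUBGRAPH_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // GraphQL over HTTP: the query string and its variables travel in a JSON body
    body: JSON.stringify({ query, variables: { first: 10 } }),
  })
  const { data, errors } = await response.json()
  if (errors) throw new Error(JSON.stringify(errors))
  return data.tokens
}

fetchTokens().then((tokens) => console.log(tokens))
```

Note that even with plain `fetch`, the query above is kept as a static string with variables, in line with the best practices covered in the next section.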
--- @@ -112,7 +108,7 @@ Nyní, když jsme se seznámili se základními pravidly syntaxe dotazů GraphQL ### Vždy pište statické dotazy -Běžnou (špatnou) praxí je dynamické vytváření řetězců dotazů následujícím způsobem: +A common (bad) practice is to dynamically build query strings as follows: ```tsx const id = params.id @@ -128,14 +124,14 @@ query GetToken { // Execute query... ``` -Výše uvedený úryvek sice vytvoří platný dotaz GraphQL, ale **má mnoho nevýhod**: +While the above snippet produces a valid GraphQL query, **it has many drawbacks**: - je **těžší porozumět** dotazu jako celku - vývojáři jsou **zodpovědní za bezpečnou úpravu interpolace řetězců** - neposílat hodnoty proměnných jako součást parametrů požadavku **zabránit případnému ukládání do mezipaměti na straně serveru** - **zabraňuje nástrojům staticky analyzovat dotaz** (např.: Linter nebo nástroje pro generování typů) -Z tohoto důvodu se doporučuje psát dotazy vždy jako statické řetězce: +For this reason, it is recommended to always write queries as static strings: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -157,18 +153,18 @@ const result = await execute(query, { }) ``` -To přináší **mnoho výhod**: +Doing so brings **many advantages**: - **Snadné čtení a údržba** dotazů - GraphQL **server zpracovává sanitizaci proměnných** - **Proměnné lze ukládat do mezipaměti** na úrovni serveru - **Nástroje mohou staticky analyzovat dotazy** (více v následujících kapitolách) -**Poznámka: Jak podmíněně zahrnout pole do statických dotazů** +### How to include fields conditionally in static queries -Pole `vlastník` můžeme chtít zahrnout pouze při splnění určité podmínky. +You might want to include the `owner` field only on a particular condition. -K tomu můžeme využít direktivu `@include(if:...)` takto: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,19 +187,18 @@ const result = await execute(query, { }) ``` -Poznámka: Opačným direktivou je `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want -GraphQL se proslavil sloganem „Požádejte o to, co chcete“. - -Z tohoto důvodu neexistuje způsob, jak v GraphQL získat všechna dostupná pole, aniž byste je museli vypisovat jednotlivě. +GraphQL became famous for its "Ask for what you want" tagline. -Při dotazování na GraphQL vždy myslete na to, abyste dotazovali pouze pole, která budou skutečně použita. +For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -Častou příčinou nadměrného načítání jsou kolekce entit. Ve výchozím nastavení dotazy načtou 100 entit v kolekci, což je obvykle mnohem více, než kolik se skutečně použije, např. pro zobrazení uživateli. Dotazy by proto měly být téměř vždy nastaveny explicitně jako první a měly by zajistit, aby načítaly pouze tolik entit, kolik skutečně potřebují. To platí nejen pro kolekce nejvyšší úrovně v dotazu, ale ještě více pro vnořené kolekce entit. +- Při dotazování na GraphQL vždy myslete na to, abyste dotazovali pouze pole, která budou skutečně použita. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 
-Například v následujícím dotazu: +For example, in the following query: ```graphql query listTokens { @@ -218,9 +213,9 @@ query listTokens { } ``` -Odpověď může obsahovat 100 transakcí pro každý ze 100 tokenů. +The response could contain 100 transactions for each of the 100 tokens. -Pokud aplikace potřebuje pouze 10 transakcí, měl by dotaz explicitně nastavit parametr `first: 10` v poli transakcí. +If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field. ### Use a single query to request multiple records @@ -256,7 +251,7 @@ query ManyRecords { ### Combine multiple queries in a single request -Vaše aplikace může vyžadovat dotazování na více typů dat takto: +Your application might require querying multiple types of data as follows: ```graphql import { execute } from "your-favorite-graphql-client" @@ -286,9 +281,9 @@ const [tokens, counters] = Promise.all( ) ``` -Přestože je tato implementace zcela platná, bude vyžadovat dva požadavky na GraphQL API. +While this implementation is totally valid, it will require two round trips with the GraphQL API. -Naštěstí je také možné odeslat více dotazů v jednom požadavku GraphQL, a to následujícím způsobem: +Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows: ```graphql import { execute } from "your-favorite-graphql-client" @@ -309,13 +304,13 @@ query GetTokensandCounters { const { result: { tokens, counters } } = execute(query) ``` -Tento přístup **zlepší celkový výkon** tím, že zkrátí čas strávený na síti (ušetří vám cestu k API) a poskytne **stručnější implementaci**. +This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**. ### Využití fragmentů GraphQL -Užitečnou funkcí pro psaní dotazů GraphQL je GraphQL Fragment. +A helpful feature to write GraphQL queries is GraphQL Fragment. -Při pohledu na následující dotaz si všimnete, že některá pole se opakují ve více výběrových sadách (`{ ... }`): +Looking at the following query, you will notice that some fields are repeated across multiple Selection-Sets (`{ ... }`): ```graphql query { @@ -335,12 +330,12 @@ query { } ``` -Taková opakovaná pole (`id`, `active`, `status`) přinášejí mnoho problémů: +Such repeated fields (`id`, `active`, `status`) bring many issues: -- hůře čitelné pro rozsáhlejší dotazy -- při použití nástrojů, které generují typy TypeScript na základě dotazů (_více o tom v poslední části_), budou `newDelegate` a `oldDelegate` mít za následek dvě samostatné inline rozhraní. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. -Přepracovaná verze dotazu by byla následující: +A refactored version of the query would be the following: ```graphql query { @@ -364,15 +359,15 @@ fragment DelegateItem on Transcoder { } ``` -Použití GraphQL `fragment` zlepší čitelnost (zejména v měřítku), ale také povede k lepšímu generování typůTypeScript. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. -Při použití nástroje pro generování typů vygeneruje výše uvedený dotaz vhodný typ `DelegateItemFragment` (_viz poslední část "Nástroje"_). 
+When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).

### Co dělat a nedělat s fragmenty GraphQL

-**Základem fragmentu musí být typ**
+### Fragment base must be a type

-Fragment nemůže být založen na nepoužitelném typu, zkrátka **na typu, který nemá pole**:
+A Fragment cannot be based on a non-applicable type, in short, **on a type that has no fields**:

```graphql
fragment MyFragment on BigInt {
  # ...
}
```

-`BigInt` je **skalární** (nativní "jednoduchý" typ), který nelze použít jako základ fragmentu.
+`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.

-**Jak šířit fragment**
+#### How to spread a Fragment

-Fragmenty jsou definovány na konkrétních typech a podle toho by se měly používat v dotazech.
+Fragments are defined on specific types and should be used accordingly in queries.

Příklad:

```graphql
query {
  bondEvents {
    id
    newDelegate {
      ...VoteItem # Error: `VoteItem` cannot be used on `Transcoder` type
    }
    oldDelegate {
      ...VoteItem
    }
  }
}

fragment VoteItem on Vote {
  id
  voter
}
```

`newDelegate` and `oldDelegate` are of type `Transcoder`.

-Fragment typu `Vote` zde není možné šířit.
+It is not possible to spread a fragment of type `Vote` here.

-**Definice fragmentu jako atomické obchodní jednotky dat**
+#### Define Fragment as an atomic business unit of data

-Fragment GraphQL musí být definován na základě jejich použití.
+GraphQL `Fragment`s must be defined based on their usage.

-Pro většinu případů použití stačí definovat jeden fragment pro každý typ (v případě opakovaného použití polí nebo generování typů).
+For most use cases, defining one fragment per type (in the case of repeated field usage or type generation) is sufficient.

-Zde je praktický postup pro použití Fragmentu:
+Here is a rule of thumb for using fragments:

-- pokud se v dotazu opakují pole stejného typu, seskupte je do fragmentu
-- pokud se opakují podobná, ale ne stejná pole, vytvořte více fragmentů, např:
+- When fields of the same type are repeated in a query, group them in a `Fragment`.
+- When similar but different fields are repeated, create multiple fragments, for instance:

```graphql
# base fragment (mostly used in listing)
fragment Voter on Vote {
  id
  voter
}

# extended fragment (when querying a detailed view of a vote)
fragment VoteWithPoll on Vote {
  id
  voter
  poll {
    id
    proposal
  }
}
```

---

-## Základní nástroje
+## The Essential Tools

### Weboví průzkumníci GraphQL

-Iterace dotazů jejich spouštěním v aplikaci může být obtížná. Z tohoto důvodu neváhejte použít [Graph Explorer](https://thegraph.com/explorer) k testování dotazů před jejich přidáním do aplikace. Průzkumník grafů vám poskytne předkonfigurované hřiště GraphQL pro testování vašich dotazů.
+Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you with a preconfigured GraphQL playground to test your queries.

-Pokud hledáte flexibilnější způsob ladění/testování dotazů, jsou k dispozici další podobné web nástroje, například [Altair](https://altairgraphql.dev/) a [GraphiQL](https://graphiql-online.com/graphiql).
+If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available, such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).

### GraphQL Linting

-Abyste mohli dodržovat výše uvedené osvědčené postupy a syntaktická pravidla, doporučujeme používat následující workflow a nástroje IDE.
+In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools. **GraphQL ESLint** -[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) vám pomůže udržet si přehled o nejlepších postupech GraphQL bez většího úsilí. +[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort. -[Nastavení konfigurace "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) prosadí základní pravidla, jako jsou: +[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as: - `@graphql-eslint/fields-on-correct-type`: je pole použito na správném typu? - `@graphql-eslint/no-unused variables`: má daná proměnná zůstat nepoužitá? - a další! -To vám umožní **odhalit chyby i bez testování dotazů** na hřišti nebo jejich spuštění ve výrobě! +This will allow you to **catch errors without even testing queries** on the playground or running them in production! ### IDE zásuvné -**VSCode a GraphQL** +**VSCode and GraphQL** -Rozšíření [GraphQL VSCode](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) je vynikajícím doplňkem vašeho vývojového pracovního postup: +The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- zvýraznění syntaxe -- návrhy automatického dokončování -- validace proti schéma -- snippets -- přejít na definici fragmentů a vstupních typů +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types -Pokud používáte `graphql-eslint`, je rozšíření [ESLint VSCode](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) nutností pro správnou vizualizaci chyb a varování v kódu. +If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. -**WebStorm/Intellij a GraphQL** +**WebStorm/Intellij and GraphQL** -Zásuvný modul [JS GraphQL](https://plugins.jetbrains.com/plugin/8097-graphql/) výrazně zlepší vaše zkušenosti při práci s GraphQL tím, že poskytuje: +The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- zvýraznění syntaxe -- návrhy automatického dokončování -- validace proti schématu -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -Další informace najdete v tomto [článku o WebStormu](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/), který představuje všechny hlavní funkce zásuvného. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. 
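As a companion to the GraphQL ESLint setup mentioned earlier in this section, here is a minimal `.eslintrc.cjs` sketch wiring up the `operations-recommended` config. The package name and config id come from the GraphQL ESLint documentation linked above; the schema and operations paths are assumptions you would adjust to your own project layout:

```javascript
// .eslintrc.cjs — minimal sketch enabling GraphQL ESLint's
// "operations-recommended" rules for .graphql documents.
module.exports = {
  overrides: [
    {
      files: ['*.graphql'],
      // Parser and plugin both ship in the @graphql-eslint/eslint-plugin package
      parser: '@graphql-eslint/eslint-plugin',
      plugins: ['@graphql-eslint'],
      extends: ['plugin:@graphql-eslint/operations-recommended'],
      parserOptions: {
        // Assumed project layout — point these at your own schema and operations
        schema: './schema.graphql',
        operations: './src/**/*.graphql',
      },
    },
  ],
}
```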
diff --git a/website/pages/cs/querying/querying-from-an-application.mdx b/website/pages/cs/querying/querying-from-an-application.mdx
index 89f7bbab0067..20538749ff25 100644
--- a/website/pages/cs/querying/querying-from-an-application.mdx
+++ b/website/pages/cs/querying/querying-from-an-application.mdx
@@ -2,42 +2,46 @@ title: Dotazování z aplikace
---

-Po nasazení podgrafu do aplikace Podgraf Studio nebo Graf Explorer se zobrazí koncový bod GraphQL API, který by měl vypadat takto:
+Learn how to query The Graph from your application.

-**Podgraf Studio (testovací koncový bod)**
+## Getting the GraphQL Endpoint

-```sh
-Queries (HTTP)
+Once a subgraph is deployed to [Subgraph Studio](https://thegraph.com/studio/) or [Graph Explorer](https://thegraph.com/explorer), you will be given the endpoint for your GraphQL API, which should look something like this:
+
+### Subgraph Studio
+
+```
https://api.studio.thegraph.com/query///
```

-**Průzkumník grafů**
+### Graph Explorer

-```sh
-Queries (HTTP)
+```
https://gateway.thegraph.com/api//subgraphs/id/
```

-Pomocí koncového bodu GraphQL můžete použít různé knihovny GraphQL Client k dotazování podgrafu a naplnění aplikace daty indexovanými podgraf.
-
-Zde je několik nejoblíbenějších klientů GraphQL v ekosystému a návod, jak je používat:
+With your GraphQL endpoint, you can use various GraphQL client libraries to query the subgraph and populate your app with data indexed by the subgraph.

-## Klienti GraphQL
+## Using Popular GraphQL Clients

-### Graf klient
+### Graph Client

-Graf poskytuje vlastního klienta GraphQL, `graph-client`, který podporuje jedinečné funkce, jako jsou:
+The Graph provides its own GraphQL client, `graph-client`, which supports unique features such as:

- Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu
- [Automatické sledování](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatické stránkování](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Plně zadaný výsledekv

-Je také integrován s populárními klienty GraphQL, jako jsou Apollo a URQL, a je kompatibilní se všemi prostředími (React, Angular, Node.js, React Native).Použití `graph-client` vám poskytne nejlepší zážitek z interakce s Graf.
+> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native. As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph.
+
+### Fetch Data with Graph Client
+
+Let's look at how to fetch data from a subgraph with `graph-client`:

-Podívejme se, jak načíst data z podgrafu pomocí `graphql-client`.
+#### Krok 1 -Chcete-li začít, nezapomeňte si do projektu nainstalovat Graf Client CLI: +Install The Graph Client CLI in your project: ```sh yarn add -D @graphprotocol/client-cli @@ -45,6 +49,8 @@ yarn add -D @graphprotocol/client-cli npm install --save-dev @graphprotocol/client-cli ``` +#### Krok 2 + Definujte svůj dotaz v souboru `.graphql` (nebo v souboru `.js` nebo `.ts`): ```graphql @@ -72,7 +78,9 @@ query ExampleQuery { } ``` -Poté vytvořte konfigurační soubor (nazvaný `.graphclientrc.yml`) a odkažte v něm například na koncové body GraphQL poskytnuté službou Graf: +#### Krok 3 + +Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: ```yaml # .graphclientrc.yml @@ -90,13 +98,17 @@ documents: - ./src/example-query.graphql ``` -Spuštěním následujícího příkazu Graf Client CLI se vygeneruje kód JavaScriptu připravený k použití: +#### Step 4 + +Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: ```sh graphclient build ``` -Nakonec aktualizujte soubor `.ts` tak, aby používal vygenerované dokumenty GraphQL: +#### Step 5 + +Update your `.ts` file to use the generated typed GraphQL documents: ```tsx import React, { useEffect } from 'react' @@ -134,33 +146,35 @@ function App() { export default App ``` -**⚠️ Důležité upozornění** +> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. -`graph-client` je dokonale integrován s dalšími klienty GraphQL, jako je klient Apollo, URQL nebo React Query; [příklady najdete v oficiálním repozitáři](https://github.com/graphprotocol/graph-client/tree/main/examples). +### Apollo Client -Pokud se však rozhodnete pro jiného klienta, mějte na paměti, že **nebudete moci používat funkci Cross-chain podgraf Obsluha nebo Automatické pagination, což jsou základní funkce pro dotazování v Grafu**. +[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android. -### Klient Apollo +Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: -[Klient Apollo](https://www.apollographql.com/docs/) je všudypřítomný klient GraphQL v ekosystému front-end. +- Advanced error handling +- Stránkování +- Data prefetching +- Optimistic UI +- Local state management -Klient Apollo je k dispozici pro React, Angular, Vue, Ember, iOS a Android, ačkoli je nejtěžším klientem, přináší mnoho funkcí pro budování pokročilého UI na základě GraphQL: +### Fetch Data with Apollo Client -- pokročilé zpracování chyb -- stránkování -- přednačítání dat -- optimistické UI -- místní státní správa +Let's look at how to fetch data from a subgraph with Apollo client: -Podívejme se, jak načíst data z podgrafu pomocí klienta Apollo ve web projektu. 
+#### Krok 1 -Nejprve nainstalujte `@apollo/client` a `graphql`: +Install `@apollo/client` and `graphql`: ```sh npm install @apollo/client graphql ``` -Pak se můžete dotazovat API pomocí následujícího kódu: +#### Krok 2 + +Query the API with the following code: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -193,6 +207,8 @@ client }) ``` +#### Krok 3 + Chcete-li použít proměnné, můžete dotazu předat argument `variables`: ```javascript @@ -224,24 +240,30 @@ client }) ``` -### URQL +### URQL Overview -Další možností je [URQL](https://formidable.com/open-source/urql/), která je k dispozici v prostředích Node.js, React/Preact, Vue a Svelte a má pokročilejší funkce: +[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: - Flexibilní systém mezipaměti - Rozšiřitelný design (usnadňuje přidávání nových funkcí) - Lehký svazek (~5x lehčí než Apollo Client) - Podpora nahrávání souborů a režimu offline -Podívejme se, jak načíst data z podgrafu pomocí jazyka URQL ve web projektu. +### Fetch data with URQL + +Let's look at how to fetch data from a subgraph with URQL: -Nejprve nainstalujte `urql` a `graphql`: +#### Krok 1 + +Install `urql` and `graphql`: ```sh npm install urql graphql ``` -Pak se můžete dotazovat API pomocí následujícího kódu: +#### Krok 2 + +Query the API with the following code: ```javascript import { createClient } from 'urql' diff --git a/website/pages/cs/querying/querying-the-graph.mdx b/website/pages/cs/querying/querying-the-graph.mdx index ac2e872c87d2..b6baece6bdaa 100644 --- a/website/pages/cs/querying/querying-the-graph.mdx +++ b/website/pages/cs/querying/querying-the-graph.mdx @@ -2,7 +2,7 @@ title: Dotazování na graf --- -When a subgraph is published to The Graph Network, you can visit its subgraph details page on [Graph Explorer](https://thegraph.com/explorer) and use the "Playground" tab to explore the deployed GraphQL API for the subgraph, issuing queries and viewing the schema. +When a subgraph is published to The Graph Network, you can visit its subgraph details page on [Graph Explorer](https://thegraph.com/explorer) and use the "query" tab to explore the deployed GraphQL API for the subgraph, issuing queries and viewing the schema. > Please see the [Query API](/querying/graphql-api) for a complete reference on how to query the subgraph's entities. You can learn about GraphQL querying best practices [here](/querying/querying-best-practices) @@ -10,7 +10,9 @@ When a subgraph is published to The Graph Network, you can visit its subgraph de Each subgraph published to The Graph Network has a unique query URL in Graph Explorer for making direct queries that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. -![Podokno podgrafů dotazů](/img/query-subgraph-pane.png) +![Query Subgraph Button](/img/query-button-screenshot.png) + +![Query Subgraph URL](/img/query-url-screenshot.png) Learn more about querying from an application [here](/querying/querying-from-an-application). diff --git a/website/pages/cs/quick-start.mdx b/website/pages/cs/quick-start.mdx index 458443a8e3dd..a9a090bca4d6 100644 --- a/website/pages/cs/quick-start.mdx +++ b/website/pages/cs/quick-start.mdx @@ -2,24 +2,26 @@ title: Rychlé Začít --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. 
+Learn how to easily build, publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ujistěte se, že váš podgraf bude indexovat data z [podporované sítě](/developing/supported-networks). - -Tato příručka je napsána za předpokladu, že máte: +## Požadavky - Kryptopeněženka -- Adresa chytrého kontraktu v síti podle vašeho výběru +- A smart contract address on a [supported network](/developing/supported-networks/) +- [Node.js](https://nodejs.org/) installed +- A package manager of your choice (`npm`, `yarn` or `pnpm`) + +## How to Build a Subgraph -## 1. Vytvoření podgrafu v Subgraph Studio +### 1. Create a subgraph in Subgraph Studio -Přejděte do [Subgraph Studio](https://thegraph.com/studio/) a připojte peněženku. +Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -## 2. Nainstalujte Graph CLI +Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +### 2. Nainstalujte Graph CLI V místním počítači spusťte jeden z následujících příkazů: @@ -35,133 +37,148 @@ Použitím [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 3. Initialize your subgraph + +> Příkazy pro konkrétní podgraf najdete na stránce podgrafu v [Subgraph Studio](https://thegraph.com/studio/). + +The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. -Initialize your subgraph from an existing contract by running the initialize command: +The following command initializes your subgraph from an existing contract: ```sh -graph init --studio +graph init ``` -> Příkazy pro konkrétní podgraf najdete na stránce podgrafu v [Subgraph Studio](https://thegraph.com/studio/). +If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. -Při inicializaci podgrafu vás nástroj CLI požádá o následující informace: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protokol: vyberte protokol, ze kterého bude váš podgraf indexovat data. -- Slug podgrafu: vytvořte název podgrafu. Váš podgraf slug je identifikátor vašeho podgrafu. -- Adresář pro vytvoření podgrafu: vyberte místní adresář. -- Ethereum síť (nepovinné): možná budete muset zadat, ze které sítě kompatibilní s EVM bude váš subgraf indexovat data. -- Adresa zakázky: Vyhledejte adresu chytré smlouvy, ze které se chcete dotazovat na data. -- ABI: Pokud se ABI nevyplňuje automaticky, je třeba jej zadat ručně jako soubor JSON. -- Počáteční blok: Doporučuje se zadat počáteční blok, abyste ušetřili čas, zatímco váš subgraf indexuje data blockchainu. Počáteční blok můžete vyhledat tak, že najdete blok, ve kterém byl váš kontrakt nasazen. -- Název smlouvy: zadejte název své smlouvy. 
-- Indexovat události smlouvy jako entity: doporučujeme nastavit tuto hodnotu na true, protože se automaticky přidá mapování do vašeho subgrafu pro každou emitovanou událost -- Přidat další smlouvu(nepovinné): můžete přidat další smlouvu +- **Protocol**: Choose the protocol your subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- **Directory**: Choose a directory to create your subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Contract address**: Locate the smart contract address you’d like to query data from. +- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Contract Name**: Input the name of your contract. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Add another contract** (optional): You can add another contract. Na následujícím snímku najdete příklad toho, co můžete očekávat při inicializaci podgrafu: -![Subgraph command](/img/subgraph-init-example.png) +![Subgraph command](/img/CLI-Example.png) + +### 4. Edit your subgraph + +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. + +When making changes to the subgraph, you will mainly work with three files: -## 4. Write your subgraph +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -Předchozí příkazy vytvořily podgraf lešení, který můžete použít jako výchozí bod pro sestavení podgrafu. Při provádění změn v podgrafu budete pracovat především se třemi soubory: +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -- Manifest (`subgraph.yaml`) - Manifest definuje, jaké datové zdroje budou vaše podgrafy indexovat. -- Schéma (`schema.graphql`) - Schéma GraphQL definuje, jaká data chcete z podgrafu získat. -- AssemblyScript Mapování (`mapping.ts`) - Jedná se o kód, který převádí data z vašich datových zdrojů na entity definované ve schématu. +### 5. Deploy your subgraph -Další informace o zápisu podgrafu naleznete v části [Creating a Subgraph](/developing/creating-a-subgraph). +Remember, deploying is not the same as publishing. -## 5. Deploy to Subgraph Studio +When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. + +When you publish a subgraph, you are publishing it onchain to the decentralized network. Jakmile je podgraf napsán, spusťte následující příkazy: +```` ```sh -$ graph codegen -$ graph build +graph codegen && graph build ``` +```` + +Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. + +![Deploy key](/img/subgraph-studio-deploy-key.jpg) + +```` +```sh + +graph auth + +graph deploy +``` +```` + +The CLI will ask for a version label. 
It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -- Ověřte a nasaďte svůj podgraf. Klíč k nasazení najdete na stránce Subgraph ve Studiu Subgraph. +### 6. Review your subgraph +If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: + +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: + + ![Subgraph logs](/img/subgraph-logs-image.png) + +### 7. Publish your subgraph to The Graph Network + +Publishing a subgraph to the decentralized network is an onchain action that makes your subgraph available for [Curators](/network/curating/) to curate it and [Indexers](/network/indexing/) to index it. + +#### Publishing with Subgraph Studio + +To publish your subgraph, click the Publish button in the dashboard. + +![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) + +Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +Open the `graph-cli`. + +Use the following commands: + +```` ```sh -$ graph auth --studio -$ graph deploy --studio +graph codegen && graph build ``` -Budete vyzváni k zadání štítku verze. Důrazně se doporučuje použít [semver](https://semver.org/) pro označení verzí jako `0.0.1`. Přesto můžete jako verzi zvolit libovolný řetězec, například:`v1`, `version1`, `asdf`. - -## 6. Otestujte svůj podgraf - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -V protokolech se dozvíte, zda se v podgrafu vyskytly nějaké chyby. Protokoly funkčního podgrafu budou vypadat takto: - -![Subgraph logs](/img/subgraph-logs-image.png) - -Pokud podgraf selhává, můžete se na stav podgrafu zeptat pomocí nástroje GraphiQL Playground. Všimněte si, že můžete využít níže uvedený dotaz a zadat ID nasazení vašeho podgrafu. V tomto případě je `Qm...` ID nasazení (které najdete na stránce podgrafu v části **Podrobnosti**). Níže uvedený dotaz vás informuje o selhání podgrafu, takže můžete podle toho provádět ladění: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +Then, + +```sh +graph publish ``` +```` + +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). -## 7. Publish your subgraph to The Graph’s Decentralized Network +#### Přidání signálu do podgrafu -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. 
+ - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. -Vyberte síť, do které chcete podgraf publikovat. Doporučujeme publikovat podgrafy do sítě Arbitrum One, abyste mohli využít výhod [vyšší rychlost transakcí a nižší náklady na plyn](/arbitrum/arbitrum-faq). +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. -Pro vyšší kvalitu služeb a silnější redundanci můžete svůj podgraf upravit tak, aby přilákal více indexátorů. V době psaní tohoto článku je doporučeno, abyste svůj podgraf kurátorovali s alespoň 3,000 GRT, abyste zajistili, že 3-5 dalších Indexerů začne obsluhovat dotazy na vašem podgrafu. +To learn more about curation, read [Curating](/network/curating/). -Abyste ušetřili náklady na benzín, můžete svůj subgraf kurátorovat ve stejné transakci, v níž jste ho publikovali, a to výběrem tohoto tlačítka při publikování subgrafu do decentralizované sítě The Graph: +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: -![Subgraph publish](/img/publish-and-signal-tx.png) +![Subgraph publish](/img/studio-publish-modal.png) -## 8. Query your subgraph +### 8. Query your subgraph -Nyní se můžete dotazovat na svůj podgraf odesláním dotazů GraphQL na adresu URL dotazu podgrafu, kterou najdete kliknutím na tlačítko dotazu. +You now have access to 100,000 free queries per month with your subgraph on The Graph Network! -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -Další informace o dotazování na data z podgrafu najdete [zde](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). 
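To give a concrete feel for that first request, here is a minimal sketch of a query you could paste into the Studio playground or send to the Query URL. The entity name `exampleEntities` is a placeholder, not part of any real subgraph; swap in an entity defined in your own `schema.graphql`, each of which carries the required `id` field:

```graphql
{
  # Placeholder entity: replace with an entity defined in your schema.graphql
  exampleEntities(first: 5) {
    id
  }
}
```

The `first: 5` argument just limits the page size; the querying guide linked above covers filtering, ordering, and pagination in more depth.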
diff --git a/website/pages/cs/sps/introduction.mdx b/website/pages/cs/sps/introduction.mdx index 3e50521589af..12e3f81c6d53 100644 --- a/website/pages/cs/sps/introduction.mdx +++ b/website/pages/cs/sps/introduction.mdx @@ -14,6 +14,6 @@ It is really a matter of where you put your logic, in the subgraph or the Substr Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: -- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/solana) -- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/evm) -- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/injective) +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/cs/sps/triggers-example.mdx b/website/pages/cs/sps/triggers-example.mdx index d8d61566295e..70dda12c61fd 100644 --- a/website/pages/cs/sps/triggers-example.mdx +++ b/website/pages/cs/sps/triggers-example.mdx @@ -2,7 +2,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' --- -## Prerequisites +## Požadavky Before starting, make sure to: @@ -11,6 +11,8 @@ Before starting, make sure to: ## Step 1: Initialize Your Project + + 1. Open your Dev Container and run the following command to initialize your project: ```bash @@ -18,6 +20,7 @@ Before starting, make sure to: ``` 2. Select the "minimal" project option. + 3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: ```yaml @@ -87,17 +90,7 @@ type MyTransfer @entity { This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. -## Step 4: Generate Protobuf Files - -To generate Protobuf objects in AssemblyScript, run the following command: - -```bash -npm run protogen -``` - -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. - -## Step 5: Handle Substreams Data in `mappings.ts` +## Step 4: Handle Substreams Data in `mappings.ts` With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: @@ -120,7 +113,7 @@ export function handleTriggers(bytes: Uint8Array): void { entity.designation = event.transfer!.accounts!.destination if (event.transfer!.accounts!.signer!.single != null) { - entity.signers = [event.transfer!.accounts!.signer!.single.signer] + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] } else if (event.transfer!.accounts!.signer!.multisig != null) { entity.signers = event.transfer!.accounts!.signer!.multisig!.signers } @@ -130,7 +123,17 @@ export function handleTriggers(bytes: Uint8Array): void { } ``` -## Conclusion +## Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. 
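As a quick sanity check once this subgraph is deployed, a query along the following lines should read back the indexed transfers. This is only a sketch: the collection field `myTransfers` is the name graph-node typically derives from the `MyTransfer` entity, so adjust it if your generated API differs:

```graphql
{
  # Reads the MyTransfer entities written by handleTriggers
  myTransfers(first: 10) {
    id
    amount
    source
    designation
    signers
  }
}
```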
+ +## Závěr You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case. diff --git a/website/pages/cs/subgraphs.mdx b/website/pages/cs/subgraphs.mdx index 27b452211477..386ea043e174 100644 --- a/website/pages/cs/subgraphs.mdx +++ b/website/pages/cs/subgraphs.mdx @@ -1,5 +1,5 @@ --- -title: Subgraphs +title: Podgrafy --- ## What is a Subgraph? @@ -24,7 +24,13 @@ The **subgraph definition** consists of the following files: - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each of subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). + +## Životní cyklus podgrafů + +Here is a general overview of a subgraph’s lifecycle: + +![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development @@ -34,8 +40,47 @@ To learn more about each of subgraph component, check out [creating a subgraph]( 4. [Publish a subgraph](/publishing/publishing-a-subgraph/) 5. [Signal on a subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) -## Subgraph Lifecycle +### Build locally -Here is a general overview of a subgraph’s lifecycle: +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. -![Subgraph Lifecycle](/img/subgraph-lifecycle.png) +### Deploy to Subgraph Studio + +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: + +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. + +### Publish to the Network + +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. + +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. + +### Add Curation Signal for Indexing + +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. + +#### What is signal? + +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. 
+
+### Querying & Application Development
+
+Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can [pay for queries with GRT or a credit card](/billing/).
+
+Learn more about [querying subgraphs](/querying/querying-the-graph/).
+
+### Updating Subgraphs
+
+To update your subgraph with bug fixes or new functionality, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
+
+- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax.
+- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
+
+### Deleting & Transferring Subgraphs
+
+If you no longer need a published subgraph, you can [delete](/managing/delete-a-subgraph/) or [transfer](/managing/transfer-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/network/curating/).
diff --git a/website/pages/cs/substreams.mdx b/website/pages/cs/substreams.mdx
index 7cc86d6a0f04..03213f73f903 100644
--- a/website/pages/cs/substreams.mdx
+++ b/website/pages/cs/substreams.mdx
@@ -4,29 +4,31 @@ title: Substreams

![Substreams Logo](/img/substreams-logo.png)

-Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach.
+Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features:

-With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain.
+- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing.
+- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara.
+- **Multi-Sink Support**: Substreams can sink data to subgraphs, Postgres databases, Clickhouse, and Mongo databases.

## Jak funguje Substreams ve 4 krocích

-1. **Napíšete program v jazyce Rust, který definuje transformace, jež chcete aplikovat na data blockchainu.** Například následující funkce v jazyce Rust extrahuje relevantní informace z bloku Ethereum (číslo, hash a nadřazený hash).
+1. **You write a Rust program, which defines the transformations that you want to apply to the blockchain data.** For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash).
-```rust -fn get_my_block(blk: Block) -> Result { - let header = blk.header.as_ref().unwrap(); + ```rust + fn get_my_block(blk: Block) -> Result { + let header = blk.header.as_ref().unwrap(); - Ok(MyBlock { - number: blk.number, - hash: Hex::encode(&blk.hash), - parent_hash: Hex::encode(&header.parent_hash), - }) -} -``` + Ok(MyBlock { + number: blk.number, + hash: Hex::encode(&blk.hash), + parent_hash: Hex::encode(&header.parent_hash), + }) + } + ``` -2. **Program Rust zabalíte do modulu WASM pouhým spuštěním jediného příkazu CLI.** +2. **You wrap up your Rust program into a WASM module just by running a single CLI command.** -3. **Kontejner WASM je odeslán na koncový bod Substreams k provedení.** Poskytovatel Substreams dodá kontejneru WASM data blockchainu a jsou aplikovány transformace. +3. **The WASM container is sent to a Substreams endpoint for execution.** The Substreams provider feeds the WASM container with the blockchain data and the transformations are applied. 4. **You select a [sink](https://substreams.streamingfast.io/documentation/consume/other-sinks), a place where you want to send the transformed data** (a Postgres database or a Subgraph, for example). @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Rozšiřte své znalosti - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/cs/sunrise.mdx b/website/pages/cs/sunrise.mdx index 157bab9d09e9..75076fb51020 100644 --- a/website/pages/cs/sunrise.mdx +++ b/website/pages/cs/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Časté dotazy po východu slunce + aktualizace na síť Graf --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Poznámka: Východ slunce decentralizovaných dat skončil 12. června 2024. -## Jaký je východ slunce decentralizovaných dat? +## Jaký byl úsvit decentralizovaných dat? -Východ slunce decentralizovaných dat je iniciativa, za kterou stojí společnost Edge & Node. Jejím cílem je umožnit vývojářům podgrafů bezproblémový přechod na decentralizovanou síť Graf. +Úsvit decentralizovaných dat byla iniciativa, kterou vedla společnost Edge & Node. Tato iniciativa umožnila vývojářům podgrafů bezproblémově přejít na decentralizovanou síť Graf. -Tento plán vychází z mnoha předchozích změn v ekosystému Graf, včetně vylepšeného indexeru pro obsluhu dotazů na nově publikované podgrafy a možnosti integrovat do Graf nové blockchainové sítě. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### Jaké jsou fáze východu Slunce? +### Co se stalo s hostovanou službou? 
-**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +Koncové body dotazů hostované služby již nejsou k dispozici a vývojáři nemohou v hostované službě nasadit nové podgrafy. -## Aktualizace podgrafů do sítě grafů +Během procesu aktualizace mohli vlastníci podgrafů hostovaných služeb aktualizovat své podgrafy na síť Graf. Vývojáři navíc mohli nárokovat automatickou aktualizaci podgrafů. -### Kdy přestanou být podgrafy hostovaných služeb k dispozici? +### Měla tato aktualizace vliv na Podgraf Studio? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +Ne, na Podgraf Studio neměl Sunrise vliv. Podgrafy byly okamžitě k dispozici pro dotazování, a to díky aktualizačnímu indexeru, který využívá stejnou infrastrukturu jako hostovaná služba. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Proč byly podgrafy zveřejněny na Arbitrum, začalo indexovat jinou síť? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Bude můj podgraf hostované služby podporován v síti Graf? - -Ano, nástroj Indexer pro upgrade bude automaticky podporovat všechny podgrafy hostovaných služeb publikované v síti Graf pro bezproblémový upgrade. - -### Jak mohu aktualizovat podgraf hostované služby? - -> Poznámka: Upgrade podgrafu na síť grafů nelze vrátit zpět. - - - -Chcete-li aktualizovat podgraf hostované služby, můžete navštívit ovládací panel podgrafu na adrese [hostovaná služba](https://thegraph.com/hosted-service). - -1. Vyberte podgraf nebo podgrafy, které chcete aktualizovat. -2. Vyberte přijímající peněženku (peněženku, která se stane vlastníkem podgrafu). -3. Klikněte na tlačítko "Upgrade". - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### Jak mohu získat podporu pro proces aktualizace? - -Komunita Graf je zde, aby podporovala vývojáře při přechodu na síť Graf. Připojte se k [serveru Discord](https://discord.gg/vtvv7FP) společnosti The Graph a požádejte o pomoc v kanálu #upgrade-decentralized-network. - -### Jak lze zajistit vysokou kvalitu služeb a redundanci podgrafů v síti Graf? - -All subgraphs will be supported by the upgrade Indexer. 
For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Členové těchto blockchainových komunit jsou vyzýváni k integraci svého řetězce prostřednictvím [procesu integrace řetězce](/chain-integration-overview/). - -### Jak mohu publikovat nové verze do sítě? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade na nejnovější verzi [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Aktualizace příkazu deploy - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publikování vyžaduje Arbitrum ETH - při upgradu vašeho subgrafu se také uvolní malá částka, která vám usnadní první interakce s protokolem 🧑‍🚀 - -### Používám podgraf vytvořený někým jiným, jak mohu zajistit, aby nedošlo k přerušení mé služby? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. 
- -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### Co se stane, když svůj podgraf neaktualizuji? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? - -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### Jak se mohu začít dotazovat na podgrafy v síti grafů? - -Dostupné podgrafy můžete prozkoumat na stránce [Graph Explorer](https://thegraph.com/explorer). 
[Více informací o dotazování na podgrafy na Graf](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## O Upgrade Indexer -### Co je to upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> Aktualizace Indexer je v současné době aktivní. -### Jaké řetězce podporuje upgrade Indexer? +Upgrade Indexer byl implementován za účelem zlepšení zkušeností s upgradem podgrafů z hostované služby do sit' Graf a podpory nových verzí stávajících podgrafů, které dosud nebyly indexovány. -Upgrade Indexeru podporuje řetězce, které byly dříve dostupné pouze v hostované službě. +### Co dělá upgrade Indexer? -Úplný seznam podporovaných řetěz najdete [zde](/developing/supported-networks/). +- Zavádí řetězce, které ještě nezískaly odměnu za indexaci v síti Graf, a zajišťuje, aby byl po zveřejnění podgrafu co nejrychleji k dispozici indexátor pro obsluhu dotazů. +- Podporuje řetězce, které byly dříve dostupné pouze v hostované službě. Úplný seznam podporovaných řetězců najdete [zde](/developing/supported-networks/). +- Indexátoři, kteří provozují upgrade indexátoru, tak činí jako veřejnou službu pro podporu nových podgrafů a dalších řetězců, kterým chybí indexační odměny, než je Rada grafů schválí. ### Proč Edge & Node spouští aktualizaci Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historicky udržovaly hostovanou službu, a proto již mají synchronizovaná data pro podgrafy hostované služby. ### Co znamená upgrade indexeru pro stávající indexery? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Řetězce, které byly dříve podporovány pouze v hostované službě, byly vývojářům zpřístupněny v síti Graf nejprve bez odměn za indexování. + +Tato akce však uvolnila poplatky za dotazy pro všechny zájemce o indexování a zvýšila počet podgrafů zveřejněných v síti Graf. 
V důsledku toho mají indexátoři více příležitostí indexovat a obsluhovat tyto podgrafy výměnou za poplatky za dotazy, a to ještě předtím, než jsou odměny za indexování pro řetězec povoleny. -Upgrade Indexer také poskytuje komunitě Indexer informace o potenciální poptávce po podgraf nových řetězcích v síti grafů. +Upgrade Indexer také poskytuje komunitě Indexer informace o potenciální poptávce po podgrafech a nových řetězcích v síti grafů. ### Co to znamená pro delegáti? -Upgrade Indexer nabízí delegátům velkou příležitost. Jakmile bude více podgrafů upgradováno z hostované služby do sítě Graf, budou mít delegáti prospěch ze zvýšené aktivity v síti. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Bude upgrade Indexeru soutěžit o odměny se stávajícími Indexery? +### Did the upgrade Indexer compete with existing Indexers for rewards? -Ne, indexátor aktualizace přidělí pouze minimální částku na podgraf a nebude vybírat odměny za indexování. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### Jak to ovlivní vývojáře podgrafů? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### Jaký to má přínos pro spotřebitele dat? +### How does the upgrade Indexer benefit data consumers? Aktualizace Indexeru umožňuje podporu blockchainů v síti, které byly dříve dostupné pouze v rámci hostované služby. Tímto se rozšiřuje rozsah a dostupnost dat, která lze v síti dotazovat. -### Jak bude aktualizace Indexer oceňovat dotazy? - -Upgrade Indexer stanoví cenu dotazů podle tržní sazby, aby neovlivňoval trh s poplatky za dotazy. - -### Jaká jsou kritéria pro to, aby nástroj Indexer přestal podporovat podgraf? - -Aktualizační indexátor bude obsluhovat podgraf, dokud nebude dostatečně a úspěšně obsloužen konzistentními dotazy obsluhovanými alespoň třemi dalšími indexátory. - -Kromě toho indexátor aktualizace přestane podgraf podporovat, pokud se na něj v posledních 30 dnech nikdo nezeptal. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## O síti grafů - -### Musím provozovat vlastní infrastrukturu? 
- -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Jakmile váš podgraf dosáhne dostatečného kurátorského signálu a ostatní indexátory jej začnou podporovat, upgrade indexátoru se postupně sníží a umožní ostatním indexátorům vybírat odměny za indexování a poplatky za dotazy. - -### Měl bych hostovat vlastní indexovací infrastrukturu? - -Provozování infrastruktury pro vlastní projekt je [výrazně náročnější na zdroje](/network/benefits/) ve srovnání s používáním sit' Graf. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -Pokud přesto máte zájem o provozování [Graph Node](https://github.com/graphprotocol/graph-node), zvažte možnost připojit se k síti The Graph Network [jako indexátor](https://thegraph.com/blog/how-to-become-indexer/) a získávat odměny za indexování a poplatky za dotazy tím, že budete poskytovat data na svém podgrafu a dalších. - -### Měl bych používat centralizovaného poskytovatele indexování? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Zde je podrobný přehled výhod Graf oproti centralizovanému hosting: +### How does the upgrade Indexer price queries? -- **Odolnost a redundance**: Decentralizované systémy jsou díky své distribuované povaze ze své podstaty robustnější a odolnější. Data nejsou uložena na jediném serveru nebo místě. 
Místo toho je obsluhují stovky nezávislých indexérů po celém světě. Tím se snižuje riziko ztráty dat nebo přerušení služby v případě selhání jednoho uzlu, což vede k výjimečné provozuschopnosti (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Kvalita služeb**: Kromě působivé doby provozu se Sit' Graf vyznačuje průměrnou rychlostí dotazů (latence) ~106 ms a vyšší úspěšností dotazů ve srovnání s hostovanými alternativami. Více informací naleznete v [tomto blogu](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Stejně jako jste si vybrali blockchainovou síť kvůli její decentralizované povaze, bezpečnosti a transparentnosti, je volba sit' Graf rozšířením stejných principů. Sladěním své datové infrastruktury s těmito hodnotami zajistíte soudržné, odolné a důvěryhodné vývojové prostředí. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/cs/tap.mdx b/website/pages/cs/tap.mdx index 0a41faab9c11..6594ce05e1f5 100644 --- a/website/pages/cs/tap.mdx +++ b/website/pages/cs/tap.mdx @@ -4,7 +4,7 @@ title: TAP Migration Guide Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. -## Overview +## Přehled [TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. 
It provides the following key features:
@@ -45,21 +45,21 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed

### Contracts

-| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) |
+| Contract | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) |
| ------------------- | -------------------------------------------- | -------------------------------------------- |
-| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` |
-| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` |
-| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` |
+| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` |
+| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` |
+| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` |

### Gateway

-| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) |
+| Component | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) |
| ---------- | --------------------------------------------- | --------------------------------------------- |
| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |

-### Requirements
+### Požadavky

In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`.

@@ -168,7 +168,7 @@ max_amount_willing_to_lose_grt = 20

0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com"
```

-Notes:
+Poznámky:

- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway).
- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id.
@@ -190,4 +190,4 @@ You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs

### Launchpad

-Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer)
+Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/main/charts/graph-network-indexer)
diff --git a/website/pages/cs/tokenomics.mdx b/website/pages/cs/tokenomics.mdx
index f884f2e3257a..01a258d0fdcd 100644
--- a/website/pages/cs/tokenomics.mdx
+++ b/website/pages/cs/tokenomics.mdx
@@ -1,25 +1,25 @@
---
title: Tokenomics sítě grafů
-description: Síť grafů je motivována výkonnou tokenomikou. Zde se dozvíte, jak funguje GRT, nativní token pracovní užitečnost grafu.
+description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works.
--- -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +## Přehled -- Adresa tokenu GRT na Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. -Graf je decentralizovaný protokol, který umožňuje snadný přístup k datům blockchainu. +## Specifics -Je to podobný model jako B2B2C, jen je poháněn decentralizovanou sítí účastníků. Účastníci sítě spolupracují na poskytování dat koncovým uživatelům výměnou za odměny GRT. GRT je token pracovní utility, který koordinuje poskytovatele a spotřebitele dat. GRT slouží jako utilita pro koordinaci poskytovatelů a spotřebitelů dat v rámci sítě a motivuje účastníky protokolu k efektivní organizaci dat. +The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. -Pomocí Graf mohou uživatelé snadno přistupovat k datům z blockchainu a platit pouze za konkrétní informace, které potřebují. Graf dnes využívá mnoho [populárních dapps](https://thegraph.com/explorer) v ekosystému web3. +The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/billing/). -Graf indexuje data blockchainu podobně jako Google indexuje web. Možná už Graf používáte, aniž byste si to uvědomovali. Pokud jste si prohlíželi front end dapp, která získává svá data z podgrafu, dotazovali jste se na data z podgrafu! +- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -Graf hraje klíčovou roli při zpřístupňování blockchainových dat a umožnění trhu pro jejich výměnu. +- Adresa tokenu GRT na Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## Role účastníků sítě +## The Roles of Network Participants -V síti jsou čtyři hlavní účastníci: +There are four primary network participants: 1. Delegáti - delegování GRT na indexátory & zabezpečení sítě @@ -29,82 +29,74 @@ V síti jsou čtyři hlavní účastníci: 4. Indexery - páteř blockchainových dat -Fishermen a Arbitrátoři jsou také nedílnou součástí úspěchu sítě díky svým dalším příspěvkům, které podporují práci ostatních primárních rolí účastníků. Pro více informací o rolích v síti si přečtěte tento článek. +Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). 
-![Tokenomický diagram](/img/updated-tokenomics-image.png) +![Tokenomics diagram](/img/updated-tokenomics-image.png) -## Delegáti (pasivně vydělávají GRT) +## Delegators (Passively earn GRT) -Delegáti delegují indexátorům GRT, čímž zvyšují podíl indexátorů na podgraf v síti. Delegáti na oplátku získávají od indexátorů procenta ze všech poplatků za dotazy a odměn za indexování. Každý indexátor si nezávisle na sobě stanoví podíl, který bude delegátům odměněn, čímž vzniká mezi indexátory soutěž o získání delegátů. Většina indexátorů nabízí 9-12% ročně. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. -Pokud by například delegát delegoval 15 tisíc GRT na indexátora nabízejícího 10%, obdržel by delegát ročně odměnu ~1500 GRT. +For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. -Existuje 0.5% daň z delegování, která se spálí vždy, když delegát deleguje GRT v síti. Pokud se delegát rozhodne stáhnout své delegované GRT, musí počkat na 28-epoch lhůtu pro zrušení vazby. Každá epocha je 6 646 bloků, což znamená, že 28 epoch je přibližně 26 dní. +There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. -Pokud čtete tento článek, můžete se právě teď stát delegátem tak, že přejdete na stránku [účastníci sítě](https://thegraph.com/explorer/participants/indexers) a delegujete GRT na vybraného indexátora. +If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. -## Kurátoři (Získat GRT) +## Curators (Earn GRT) -Kurátoři identifikují vysoce kvalitní podgrafy a "curate" je (tj. signalizují na nich GRT), aby získali kurátorské podíly, které zaručují procento ze všech budoucích poplatků za dotazy generované tímto podgraf. Ačkoli kurátorem může být každý nezávislý účastník sítě, obvykle jsou vývojáři podgrafů mezi prvními kurátory svých vlastních podgrafů, protože chtějí zajistit, aby byl jejich podgraf indexován. +Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. -As of April 11th, 2024, subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Kurátoři platí 1% kurátorskou daň, když vytvoří nový podgraf. Tato kurátorská daň se spálí, čímž se sníží nabídka GRT. 
+Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. -## Vývojáři +## Developers -Vývojáři vytvářejí podgrafy a dotazují se na ně, aby získali data blockchainu. Vzhledem k tomu, že podgrafy jsou open source, mohou vývojáři dotazovat existující podgrafy a načítat data blockchainu do svých dapps. Vývojáři platí za dotazy, které provádějí v GRT a které jsou distribuovány účastníkům sítě. +Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. ### Vytvoření podgrafu -Vývojáři mohou [vytvořit subgraf](/developing/creating-a-subgraph/) pro indexování dat v blockchainu. Podgrafy jsou pokyny pro indexátory, která data mají být doručena spotřebitelům. +Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Jakmile vývojáři vytvoří a otestují svůj podgraf, mohou [zveřejnit svůj podgraf](/publishing/publishing-a-subgraph/) v decentralizované síti Graf. +Once developers have built and tested their subgraph, they can [publish their subgraph](/publishing/publishing-a-subgraph/) on The Graph's decentralized network. ### Dotazování na existující podgraf Once a subgraph is [published](/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. -Podgrafy jsou [vyhledávány pomocí GraphQL](/querying/querying-the-graph/) a poplatky za dotazy jsou hrazeny pomocí GRT v [Subgraph Studio](https://thegraph.com/studio/). Poplatky za dotazy se rozdělují mezi účastníky sítě na základě jejich příspěvků do protokolu. - -1 % poplatků za dotazy zaplacených síti se spálí. - -## Indexéry (Získat GRT) - -Základem Graf jsou indexátoři. Provozují nezávislý hardware a software, který pohání decentralizovanou síť Graf. Indexery servírují data spotřebitelům na základě pokynů z dílčích grafů. - -Indexátoři mohou získat odměny GRT dvěma způsoby: +Subgraphs are [queried using GraphQL](/querying/querying-the-graph/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. -1. Poplatky za dotazy: GRT, které platí vývojáři nebo uživatelé za dotazy na data podgrafu. Poplatky za dotazy jsou rozdělovány přímo indexátorům podle exponenciální funkce rabat (viz GIP [zde](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1% of the query fees paid to the network are burned. -2. Odměny za indexování: 3% roční odměna se rozděluje indexátorům podle počtu indexovaných podgrafů. Tyto odměny motivují indexátory k indexování podgrafů, občas před zahájením poplatků za dotazování, k akumulaci a předkládání důkazů o indexaci (POI), které ověřují, že indexovali data přesně. +## Indexers (Earn GRT) -Každému podgrafu je přidělena část z celkové emise síťových tokenů, a to na základě množství kurátorského signálu podgrafu. Tato částka je pak odměněna indexátorům na základě jejich přiděleného podílu na podgrafu. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. 
Indexers serve data to consumers based on instructions from subgraphs. -Aby bylo možné spustit indexovací uzel, musí indexátory vsadit do sítě 100,000 GRT nebo více. Indexátoři jsou motivováni k tomu, aby sázeli GRT úměrně množství dotazů, které obsluhují. +Indexers can earn GRT rewards in two ways: -Indexátoři mohou zvýšit své alokace GRT na podgrafy přijetím delegování GRT od delegátů a mohou přijmout až 16násobek svého původního podílu. Pokud se indexátor stane "nadměrně delegovaným" (tj. více než 16násobek svého původního podílu), nebude moci využít dodatečné GRT od delegátů, dokud nezvýší svůj podíl v síti. +1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -Výše odměny, kterou indexátor obdrží, se může lišit v závislosti na počátečním vkladu, přijatém delegování, kvalitě služeb a mnoha dalších faktorech. V následujícím grafu jsou veřejně dostupné údaje aktivního indexera v decentralizované síti Graf. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -### Indexer podíl & odměna allnodes-com.eth +Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. -![Indexování podílu a odměn](/img/indexing-stake-and-income.png) +In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Tyto údaje se týkají období od února 2021 do září 2022. +Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. -> Upozorňujeme, že tato situace se zlepší, až bude dokončena [Arbitrum migrace](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551), takže náklady na plyn budou pro účastníky sítě výrazně nižší. +The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. -## Dodávka žetonů: Burning & Vydání +## Token Supply: Burning & Issuance -Počáteční nabídka tokenů je 10 miliard GRT, přičemž cílem je vydávat 3 % nových tokenů ročně jako odměnu indexátorům za přidělování podílů na subgrafech. To znamená, že celková nabídka tokenů GRT se bude každý rok zvyšovat o 3 %, protože nové tokeny budou vydávány Indexerům za jejich příspěvek do sítě. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. 
-Graf je navržen s několika mechanismy spalování, které kompenzují vydávání nových tokenů. Přibližně 1 % zásoby GRT je ročně spáleno prostřednictvím různých aktivit v síti a toto číslo se zvyšuje s tím, jak aktivita sítě stále roste. Mezi tyto spalovací činnosti patří 0.5% daň z delegování, kdykoli delegátor deleguje GRT na indexátora, 1% daň z kurátorství, když kurátoři signalizují na subgrafu, a 1% poplatek za dotaz na data v blockchainu. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. -![Celková spálená GRT](/img/total-burned-grt.jpeg) +![Total burned GRT](/img/total-burned-grt.jpeg) -Kromě těchto pravidelně se vyskytujících činností vypalování má token GRT také mechanismus slashing, který má trestat zlomyslné nebo nezodpovědné chování indexátorů. Pokud je indexátor slashován, spálí se 50 % jeho odměny za indexaci v dané epoše (zatímco druhá polovina připadne rybáři) a jeho vlastní podíl se sníží o 2.5 %, přičemž polovina této částky se spálí. To pomáhá zajistit, aby indexátoři měli silnou motivaci jednat v nejlepším zájmu sítě a přispívat k její bezpečnosti a stabilitě. +In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. -## Zlepšení protokolu +## Improving the Protocol -Síť Graf se neustále vyvíjí a ekonomický návrh protokolu se neustále vylepšuje, aby poskytoval co nejlepší služby všem účastníkům sítě. Na změny protokolu dohlíží Rada Graf a členové komunity jsou vyzýváni k účasti. Zapojte se do zlepšování protokolu v [Fórum Graf](https://forum.thegraph.com/). +The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). diff --git a/website/pages/de/about.mdx b/website/pages/de/about.mdx index 36c6a49f8fbc..8c088d6a53d1 100644 --- a/website/pages/de/about.mdx +++ b/website/pages/de/about.mdx @@ -1,47 +1,67 @@ --- -title: About The Graph +title: Über The Graph --- -This page will explain what The Graph is and how you can get started. +## Was ist The Graph? -## What is The Graph? +The Graph ist ein leistungsstarkes dezentrales Protokoll, das eine nahtlose Abfrage und Indizierung von Blockchain-Daten ermöglicht. Es vereinfacht den komplexen Prozess der Abfrage von Blockchain-Daten und macht die App-Entwicklung schneller und einfacher. -The Graph is a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to query data that is difficult to query directly. 
+## Grundlagen verstehen -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +Projekte mit komplexen Smart Contracts wie [Uniswap](https://uniswap.org/) und NFTs-Initiativen wie [Bored Ape Yacht Club](https://boredapeyachtclub.com/) speichern Daten auf der Ethereum-Blockchain, was es sehr schwierig macht, etwas anderes als grundlegende Daten direkt von der Blockchain zu lesen. -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +### Herausforderungen ohne The Graph -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Im Fall des oben aufgeführten konkreten Beispiels, Bored Ape Yacht Club, können Sie grundlegende Leseoperationen auf [dem Vertrag](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) durchführen. Sie können den Besitzer eines bestimmten Ape auslesen, die Inhalts-URI eines Ape anhand seiner ID lesen oder das Gesamtangebot auslesen. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +- Dies ist möglich, da diese Lesevorgänge direkt in den Smart Contract selbst programmiert sind. Allerdings sind fortgeschrittene, spezifische und reale Abfragen und Operationen wie Aggregation, Suche, Beziehungen und nicht-triviale Filterung **nicht möglich**. -**Indexing blockchain data is really, really hard.** +- Wenn Sie sich beispielsweise nach Apes erkundigen möchten, die einer bestimmten Adresse gehören, und Ihre Suche anhand eines bestimmten Merkmals verfeinern möchten, können Sie diese Informationen nicht durch direkte Interaktion mit dem Vertrag selbst erhalten. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. 
+- Um mehr Daten zu erhalten, müsste man jedes einzelne [`Übertragungsereignis`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746), das jemals gesendet wurde, verarbeiten, die Metadaten aus IPFS unter Verwendung der Token-ID und des IPFS-Hashs lesen und dann zusammenfassen. -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +### Warum ist das ein Problem? -## How The Graph Works +Es würde **Stunden oder sogar Tage** dauern, bis eine dezentrale Anwendung (dapp), die in einem Browser läuft, eine Antwort auf diese einfachen Fragen erhält. -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +Alternativ können Sie einen eigenen Server einrichten, die Transaktionen verarbeiten, sie in einer Datenbank speichern und einen API-Endpunkt zur Abfrage der Daten erstellen. Diese Option ist jedoch [Ressourcen-intensiv](/network/benefits/), muss gewartet werden, stellt einen Single Point of Failure dar und bricht wichtige Sicherheitseigenschaften, die für die Dezentralisierung erforderlich sind. -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +Blockchain-Eigenschaften wie Endgültigkeit, Umstrukturierung der Kette und nicht gesperrte Blöcke erhöhen die Komplexität des Prozesses und machen es zeitaufwändig und konzeptionell schwierig, genaue Abfrageergebnisse aus Blockchain-Daten zu erhalten. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +## The Graph bietet eine Lösung -![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) +The Graph löst diese Herausforderung mit einem dezentralen Protokoll, das Blockchain-Daten indiziert und eine effiziente und leistungsstarke Abfrage ermöglicht. Diese APIs (indizierte „Subgraphen“) können dann mit einer Standard-GraphQL-API abgefragt werden. -The flow follows these steps: +Heute gibt es ein dezentralisiertes Protokoll, das durch die Open-Source-Implementierung von [Graph Node](https://github.com/graphprotocol/graph-node) unterstützt wird und diesen Prozess ermöglicht. -1. A dapp adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). 
The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +### Die Funktionsweise von The Graph -## Next Steps +Die Indizierung von Blockchain-Daten ist sehr schwierig, aber The Graph macht es einfach. The Graph lernt, wie man Ethereum-Daten mit Hilfe von Subgraphen indiziert. Subgraphs sind benutzerdefinierte APIs, die auf Blockchain-Daten aufgebaut sind. Sie extrahieren Daten aus einer Blockchain, verarbeiten sie und speichern sie so, dass sie nahtlos über GraphQL abgefragt werden können. -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +#### Besonderheiten -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +- The Graph verwendet Subgraph-Beschreibungen, die als Subgraph Manifest innerhalb des Subgraphen bekannt sind. + +- Die Beschreibung des Subgraphs beschreibt die Smart Contracts, die für einen Subgraph von Interesse sind, die Ereignisse innerhalb dieser Verträge, auf die man sich konzentrieren sollte, und wie man die Ereignisdaten den Daten zuordnet, die The Graph in seiner Datenbank speichern wird. + +- Wenn Sie einen Subgraphen erstellen, müssen Sie ein Subgraph Manifest schreiben. + +- Nachdem Sie das `Subgraph Manifest` geschrieben haben, können Sie das Graph CLI verwenden, um die Definition im IPFS zu speichern und einen Indexer anzuweisen, mit der Indizierung der Daten für diesen Subgraphen zu beginnen. + +Das nachstehende Diagramm enthält detailliertere Informationen über den Datenfluss, nachdem ein Subgraph Manifest mit Ethereum-Transaktionen bereitgestellt worden ist. + +![Eine graphische Darstellung, die erklärt, wie The Graph Graph Node verwendet, um Abfragen an Datenkonsumenten zu stellen](/img/graph-dataflow.png) + +Der Ablauf ist wie folgt: + +1. Eine Dapp fügt Ethereum durch eine Transaktion auf einem Smart Contract Daten hinzu. +2. Der Smart Contract gibt während der Verarbeitung der Transaktion ein oder mehrere Ereignisse aus. +3. Graph Node scannt Ethereum kontinuierlich nach neuen Blöcken und den darin enthaltenen Daten für Ihren Subgraphen. +4. Graph Node findet Ethereum-Ereignisse für Ihren Subgraphen in diesen Blöcken und führt die von Ihnen bereitgestellten Mapping-Handler aus. Das Mapping ist ein WASM-Modul, das die Dateneinheiten erstellt oder aktualisiert, die Graph Node als Reaktion auf Ethereum-Ereignisse speichert. +5. Die Dapp fragt den Graph Node über den [GraphQL-Endpunkt](https://graphql.org/learn/) des Knotens nach Daten ab, die von der Blockchain indiziert wurden. Der Graph Node wiederum übersetzt die GraphQL-Abfragen in Abfragen für seinen zugrundeliegenden Datenspeicher, um diese Daten abzurufen, wobei er die Indexierungsfunktionen des Speichers nutzt. Die Dapp zeigt diese Daten in einer reichhaltigen Benutzeroberfläche für die Endnutzer an, mit der diese dann neue Transaktionen auf Ethereum durchführen können. Der Zyklus wiederholt sich. 
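To make step 5 concrete, here is a sketch of the kind of GraphQL query a dapp might send to a subgraph's endpoint. The entity and field names (`apes`, `owner`, `tokenURI`) are illustrative assumptions for a hypothetical BAYC-style subgraph, not the schema of any particular deployed subgraph.

```graphql
# Illustrative only: once events are indexed, filtering and relationships
# that are impractical to compute on-chain become a single query.
{
  apes(first: 5, where: { owner: "0x0000000000000000000000000000000000000000" }) {
    id
    tokenURI
    owner {
      id
    }
  }
}
```

Graph Node answers such a query from its store rather than by replaying events, which is what makes the response fast enough for a dapp running in a browser.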
+ +## Nächste Schritte + +In den folgenden Abschnitten werden die Subgraphen, ihr Einsatz und die Datenabfrage eingehender behandelt. + +Bevor Sie Ihren eigenen Subgraphen schreiben, sollten Sie den [Graph Explorer](https://thegraph.com/explorer) erkunden und sich einige der bereits vorhandenen Subgraphen ansehen. Die Seite jedes Subgraphen enthält eine GraphQL- Playground, mit der Sie seine Daten abfragen können. diff --git a/website/pages/de/arbitrum/arbitrum-faq.mdx b/website/pages/de/arbitrum/arbitrum-faq.mdx index 67fffeeb677c..7e48874081e2 100644 --- a/website/pages/de/arbitrum/arbitrum-faq.mdx +++ b/website/pages/de/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum-FAQ Klicken Sie [hier](#billing-on-arbitrum-faqs), wenn Sie zu den Arbitrum Billing FAQs springen möchten. -## Warum implementiert The Graph eine L2-Lösung? +## Warum hat The Graph eine L2-Lösung eingeführt? -Durch die Skalierung von The Graph auf L2 können die Netzwerkteilnehmer erwarten: +Durch die Skalierung von The Graph auf L2 können die Netzwerkteilnehmer nun von folgenden Vorteilen profitieren: - Bis zu 26-fache Einsparungen bei den Gebühren für Gas @@ -14,26 +14,26 @@ Durch die Skalierung von The Graph auf L2 können die Netzwerkteilnehmer erwarte - Von Ethereum übernommene Sicherheit -Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. Zum Beispiel könnten Indexer Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen mit größerer Häufigkeit zu indexieren, Entwickler könnten Subgraphen mit größerer Leichtigkeit bereitstellen und aktualisieren, Delegatoren könnten GRT mit größerer Häufigkeit delegieren und Kuratoren könnten Signale zu einer größeren Anzahl von Subgraphen hinzufügen oder entfernen - Aktionen, die zuvor als zu kostenintensiv angesehen wurden, um sie häufig auszuführen. +Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. So können Indexer beispielsweise häufiger Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen zu indexieren. Entwickler können Subgraphen leichter bereitstellen und aktualisieren, und Delegatoren können GRT häufiger delegieren. Kuratoren können einer größeren Anzahl von Subgraphen Signale hinzufügen oder entfernen - Aktionen, die bisher aufgrund der Kosten zu kostspielig waren, um sie häufig durchzuführen. -DieThe Graph-Community beschloss letztes Jahr nach dem Ergebnis der [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)-Diskussion, mit Arbitrum weiterzumachen. +Die The Graph-Community beschloss letztes Jahr nach dem Ergebnis der [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)-Diskussion, mit Arbitrum weiterzumachen. ## Was muss ich tun, um The Graph auf L2 zu nutzen? -The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. +Das Abrechnungssystem von The Graph akzeptiert GRT auf Arbitrum, und die Nutzer benötigen ETH auf Arbitrum, um ihr Gas zu bezahlen. Während das The Graph-Protokoll auf dem Ethereum Mainnet begann, finden alle Aktivitäten, einschließlich der Abrechnungsverträge, nun auf Arbitrum One statt. -Consequently, to pay for queries, you need GRT on Arbitrum. 
Here are a few different ways to achieve this:
+Um Abfragen zu bezahlen, brauchen Sie also GRT auf Arbitrum. Hier sind ein paar Möglichkeiten, dies zu erreichen:
-- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges:
+- Wenn Sie bereits GRT auf Ethereum haben, können Sie es zu Arbitrum überbrücken. Sie können dies über die GRT-Bridging-Option in Subgraph Studio tun oder eine der folgenden Bridges verwenden:
- - [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161)
+ - [Die Arbitrum-Brücke](https://bridge.arbitrum.io/?l2ChainId=42161)
- [TransferTo](https://transferto.xyz/swap)
-- If you have other assets on Arbitrum, you can swap them for GRT through a swapping protocol like Uniswap.
+- Wenn Sie andere Vermögenswerte auf Arbitrum haben, können Sie sie über ein Swapping-Protokoll wie Uniswap in GRT tauschen.
-- Alternatively, you can acquire GRT directly on Arbitrum through a decentralized exchange.
+- Alternativ können Sie GRT auch direkt auf Arbitrum über einen dezentralen Handelsplatz erwerben.
-Once you have GRT on Arbitrum, you can add it to your billing balance.
+Sobald Sie GRT auf Arbitrum haben, können Sie es zu Ihrem Guthaben hinzufügen.
Um die Vorteile von The Graph auf L2 zu nutzen, verwenden Sie diesen Dropdown-Schalter, um zwischen den Ketten umzuschalten.
@@ -41,27 +41,21 @@ Um die Vorteile von The Graph auf L2 zu nutzen, verwenden Sie diesen Dropdown-Sc
## Was muss ich als Entwickler von Subgraphen, Datenkonsument, Indexer, Kurator oder Delegator jetzt tun?
-Es besteht kein unmittelbarer Handlungsbedarf, jedoch werden die Netzwerkteilnehmer ermutigt, mit der Umstellung auf Arbitrum zu beginnen, um von den Vorteilen von L2 zu profitieren.
+Die Netzwerkteilnehmer müssen zu Arbitrum wechseln, um weiterhin am The Graph Netzwerk teilzunehmen. Bitte lesen Sie den [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) für zusätzliche Unterstützung.
-Kernentwicklerteams arbeiten an der Erstellung von L2-Transfer-Tools, die die Übertragung von Delegation, Kuration und Subgraphen auf Arbitrum erheblich erleichtern werden. Netzwerkteilnehmer können davon ausgehen, dass L2-Transfer-Tools bis zum Sommer 2023 verfügbar sein werden.
-
-Ab dem 10. April 2023 werden 5% aller Indexierungs-Rewards auf Arbitrum geprägt. Mit zunehmender Beteiligung des Netzwerks und der Zustimmung des Rates werden die Indexierungsprämien schrittweise von Ethereum auf Arbitrum und schließlich vollständig auf Arbitrum umgestellt.
-
-## Was muss ich tun, wenn ich am L2-Netz teilnehmen möchte?
-
-Bitte helfen Sie [test the network](https://testnet.thegraph.com/explorer) auf L2 und berichten Sie über Ihre Erfahrungen in [Discord](https://discord.gg/graphprotocol).
+Alle Indexierungsprämien sind jetzt vollständig auf Arbitrum.
## Sind mit der Skalierung des Netzes auf L2 irgendwelche Risiken verbunden?
-All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf).
+Alle Smart Contracts wurden gründlich [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf).
Alles wurde gründlich getestet, und es gibt einen Notfallplan, um einen sicheren und nahtlosen Übergang zu gewährleisten.
Einzelheiten finden Sie [hier](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20).
-## Werden die bestehenden Subgraphen auf Ethereum weiterhin funktionieren?
+## Funktionieren die vorhandenen Subgraphen auf Ethereum?
-Ja, die The Graph Netzwerk-Verträge werden parallel sowohl auf Ethereum als auch auf Arbitrum laufen, bis sie zu einem späteren Zeitpunkt vollständig auf Arbitrum umgestellt werden.
+Alle Subgraphen sind jetzt auf Arbitrum. Bitte lesen Sie den [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/), um sicherzustellen, dass Ihre Subgraphen reibungslos funktionieren.
-## Wird GRT einen neuen Smart Contract auf Arbitrum bereitstellen?
+## Verfügt GRT über einen neuen Smart Contract, der auf Arbitrum eingesetzt wird?
Ja, GRT hat einen zusätzlichen [Smart Contract auf Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). Der Ethereum-Hauptnetz-[GRT-Vertrag](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) wird jedoch weiterhin funktionieren.
@@ -83,4 +77,4 @@ Die Brücke wurde [umfangreich geprüft] (https://code4rena.com/contests/2022-10
Das Hinzufügen von GRT zu Ihrem Arbitrum-Abrechnungssaldo kann mit nur einem Klick in [Subgraph Studio](https://thegraph.com/studio/) erfolgen. Sie können Ihr GRT ganz einfach mit Arbitrum verbinden und Ihre API-Schlüssel in einer einzigen Transaktion füllen.
-Visit the [Billing page](/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT.
+Besuchen Sie die [Abrechnungsseite](/billing/) für detaillierte Anweisungen zum Hinzufügen, Abheben oder Erwerben von GRT.
diff --git a/website/pages/de/arbitrum/l2-transfer-tools-faq.mdx b/website/pages/de/arbitrum/l2-transfer-tools-faq.mdx
index eb4fda3fc003..5ff21e03c171 100644
--- a/website/pages/de/arbitrum/l2-transfer-tools-faq.mdx
+++ b/website/pages/de/arbitrum/l2-transfer-tools-faq.mdx
@@ -2,23 +2,23 @@
title: L2-Übertragungs-Tools FAQ
---
-## General
+## Allgemein
### Was sind L2-Transfer-Tools?
-The Graph has made it 26x cheaper for contributors to participate in the network by deploying the protocol to Arbitrum One. The L2 Transfer Tools were created by core devs to make it easy to move to L2.
+The Graph hat die Teilnahme am Netzwerk für Mitwirkende um das 26-fache kostengünstiger gemacht, indem das Protokoll auf Arbitrum One bereitgestellt wurde. Die L2-Transfer-Tools wurden von den Kernentwicklern entwickelt, um den Wechsel zu L2 zu erleichtern.
-For each network participant, a set of L2 Transfer Tools are available to make the experience seamless when moving to L2, avoiding thawing periods or having to manually withdraw and bridge GRT.
+Für jeden Netzwerkteilnehmer stehen eine Reihe von L2-Transfer-Tools zur Verfügung, die einen nahtlosen Übergang zu L2 ermöglichen, ohne dass Auftauzeiten entstehen oder GRT manuell entnommen und überbrückt werden müssen.
-These tools will require you to follow a specific set of steps depending on what your role is within The Graph and what you are transferring to L2.
+Für diese Tools müssen Sie eine Reihe von Schritten befolgen, je nachdem, welche Rolle Sie bei The Graph spielen und was Sie auf L2 übertragen.
### Kann ich dieselbe Wallet verwenden, die ich im Ethereum Mainnet benutze?
Wenn Sie eine [EOA](https://ethereum.org/en/developers/docs/accounts/#types-of-account) Wallet verwenden, können Sie dieselbe Adresse verwenden. Wenn Ihr Ethereum Mainnet Wallet ein Kontrakt ist (z.B.
ein Multisig), dann müssen Sie eine [Arbitrum Wallet Adresse](/arbitrum/arbitrum-faq/#what-do-i-need-to-do-to-use-the-graph-on-l2) angeben, an die Ihr Transfer gesendet wird. Bitte überprüfen Sie die Adresse sorgfältig, da Überweisungen an eine falsche Adresse zu einem dauerhaften Verlust führen können. Wenn Sie einen Multisig auf L2 verwenden möchten, stellen Sie sicher, dass Sie einen Multisig-Vertrag auf Arbitrum One einsetzen. -Wallets on EVM blockchains like Ethereum and Arbitrum are a pair of keys (public and private), that you create without any need to interact with the blockchain. So any wallet that was created for Ethereum will also work on Arbitrum without having to do anything else. +Wallets auf EVM-Blockchains wie Ethereum und Arbitrum bestehen aus einem Paar von Schlüsseln (öffentlich und privat), die Sie erstellen, ohne mit der Blockchain interagieren zu müssen. Jede Wallet, die für Ethereum erstellt wurde, funktioniert also auch auf Arbitrum, ohne dass Sie etwas anderes tun müssen. -The exception is with smart contract wallets like multisigs: these are smart contracts that are deployed separately on each chain, and get their address when they are deployed. If a multisig was deployed to Ethereum, it won't exist with the same address on Arbitrum. A new multisig must be created first on Arbitrum, and may get a different address. +Die Ausnahme sind Smart-Contract-Wallets wie Multisigs: Das sind Smart Contracts, die auf jeder Kette separat eingesetzt werden und ihre Adresse erhalten, wenn sie eingesetzt werden. Wenn ein Multisig auf Ethereum bereitgestellt wurde, wird er nicht mit der gleichen Adresse auf Arbitrum existieren. Ein neuer Multisig muss zuerst auf Arbitrum erstellt werden und kann eine andere Adresse erhalten. ### Was passiert, wenn ich meinen Transfer nicht innerhalb von 7 Tagen abschließe? @@ -28,7 +28,7 @@ Wenn Sie Ihre Vermögenswerte (Subgraph, Anteil, Delegation oder Kuration) an L2 Dies ist der so genannte "Bestätigungsschritt" in allen Übertragungswerkzeugen - er wird in den meisten Fällen automatisch ausgeführt, da die automatische Ausführung meist erfolgreich ist, aber es ist wichtig, dass Sie sich vergewissern, dass die Übertragung erfolgreich war. Wenn dies nicht gelingt und es innerhalb von 7 Tagen keine erfolgreichen Wiederholungsversuche gibt, verwirft die Arbitrum-Brücke das Ticket, und Ihre Assets (Subgraph, Pfahl, Delegation oder Kuration) gehen verloren und können nicht wiederhergestellt werden. Die Entwickler des Graph-Kerns haben ein Überwachungssystem eingerichtet, um diese Situationen zu erkennen und zu versuchen, die Tickets einzulösen, bevor es zu spät ist, aber es liegt letztendlich in Ihrer Verantwortung, sicherzustellen, dass Ihr Transfer rechtzeitig abgeschlossen wird. Wenn Sie Probleme mit der Bestätigung Ihrer Transaktion haben, wenden Sie sich bitte an [dieses Formular] (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) und die Entwickler des Kerns werden Ihnen helfen. -### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? +### Ich habe mit der Übertragung meiner Delegation/des Einsatzes/der Kuration begonnen und bin mir nicht sicher, ob sie an L2 weitergeleitet wurde. Wie kann ich bestätigen, dass sie korrekt übertragen wurde? 
If you don't see a banner on your profile asking you to finish the transfer, then it's likely the transaction made it safely to L2 and no more action is needed. If in doubt, you can check if Explorer shows your delegation, stake or curation on Arbitrum One. diff --git a/website/pages/de/billing.mdx b/website/pages/de/billing.mdx index 37f9c840d00b..83193509b958 100644 --- a/website/pages/de/billing.mdx +++ b/website/pages/de/billing.mdx @@ -2,212 +2,212 @@ title: Billing --- -## Subgraph Billing Plans +## Subgraph Abrechnungspläne -There are two plans to use when querying subgraphs on The Graph Network. +Es gibt zwei Pläne für die Abfrage von Subgraphen in The Graph Network. -- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. +- **Free Plan**: Der Free Plan beinhaltet 100.000 kostenlose monatliche Abfragen mit vollem Zugriff auf die Subgraph Studio Testumgebung. Dieser Plan ist für Hobbyisten, Hackathon-Teilnehmer und diejenigen mit Nebenprojekten gedacht, die The Graph ausprobieren möchten, bevor sie ihre Dapp skalieren. -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **Growth Plan**: Der Growth Plan beinhaltet alles, was im Free Plan enthalten ist, wobei alle Abfragen nach 100.000 monatlichen Abfragen eine Zahlung mit GRT oder Kreditkarte erfordern. Der Growth Plan ist flexibel genug, um Teams abzudecken, die Dapps für eine Vielzahl von Anwendungsfällen entwickelt haben. -## Query Payments with credit card +## Abfrage Zahlungen mit Kreditkarte -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) - 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). +- Um die Abrechnung mit Kredit-/Debitkarten einzurichten, müssen die Benutzer Subgraph Studio (https://thegraph.com/studio/) aufrufen + 1. Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/billing/) auf. 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". - 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. - 4. To choose a credit card payment, choose “Credit card” as the payment method and fill out your credit card information. Those who have used Stripe before can use the Link feature to autofill their details. -- Invoices will be processed at the end of each month and require an active credit card on file for all queries beyond the free plan quota. + 3. Wählen Sie „ Upgrade Plan“, wenn Sie vom Free Plan upgraden oder wählen Sie „Manage Plan“, wenn Sie GRT bereits in der Vergangenheit zu Ihrem Abrechnungssaldo hinzugefügt haben. Als Nächstes können Sie die Anzahl der Abfragen schätzen, um einen Kostenvoranschlag zu erhalten, dieser Schritt ist jedoch nicht erforderlich. + 4. 
Um eine Zahlung per Kreditkarte zu wählen, wählen Sie „Kreditkarte“ als Zahlungsmethode und geben Sie Ihre Kreditkartendaten ein. Diejenigen, die Stripe bereits verwendet haben, können die Funktion „Link“ verwenden, um ihre Daten automatisch auszufüllen. +- Die Rechnungen werden am Ende eines jeden Monats erstellt. Für alle Abfragen, die über das kostenlose Kontingent hinausgehen, muss eine aktive Kreditkarte hinterlegt sein. -## Query Payments with GRT +## Abfrage von Zahlungen mit GRT -Subgraph users can use The Graph Token (or GRT) to pay for queries on The Graph Network. With GRT, invoices will be processed at the end of each month and require a sufficient balance of GRT to make queries beyond the Free Plan quota of 100,000 monthly queries. You'll be required to pay fees generated from your API keys. Using the billing contract, you'll be able to: +Subgraph-Nutzer können The Graph Token (oder GRT) verwenden, um für Abfragen im The Graph Network zu bezahlen. Mit GRT werden Rechnungen am Ende eines jeden Monats bearbeitet und erfordern ein ausreichendes Guthaben an GRT, um Abfragen über die Free-Plan-Quote von 100.000 monatlichen Abfragen hinaus durchzuführen. Sie müssen die von Ihren API-Schlüsseln generierten Gebühren bezahlen. Mit dem Abrechnungsvertrag können Sie: - Add and withdraw GRT from your account balance. - Keep track of your balances based on how much GRT you have added to your account balance, how much you have removed, and your invoices. - Automatically pay invoices based on query fees generated, as long as there is enough GRT in your account balance. -### GRT on Arbitrum or Ethereum +### GRT auf Arbitrum oder Ethereum -The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. +Das Abrechnungssystem von The Graph akzeptiert GRT auf Arbitrum, und die Nutzer benötigen ETH auf Arbitrum, um ihr Gas zu bezahlen. Während das The Graph-Protokoll auf dem Ethereum Mainnet begann, finden alle Aktivitäten, einschließlich der Abrechnungsverträge, nun auf Arbitrum One statt. -To pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this: +Um für Abfragen zu bezahlen, brauchen Sie GRT auf Arbitrum. Hier sind ein paar verschiedene Möglichkeiten, dies zu erreichen: -- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges: +- Wenn Sie bereits GRT auf Ethereum haben, können Sie es zu Arbitrum überbrücken. Sie können dieses über GRT-Bridging-Option in Subgraph Studio tun oder eine der folgenden Bridges verwenden: -- [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) -- [TransferTo](https://transferto.xyz/swap) +- [Die Arbitrum Brücke](https://bridge.arbitrum.io/?l2ChainId=42161) +- [Übertragen auf](https://transferto.xyz/swap) -- If you already have assets on Arbitrum, you can swap them for GRT via a swapping protocol like Uniswap. +- Wenn du bereits Assets auf Arbitrum hast, kannst du sie über ein Swapping-Protokoll wie Uniswap in GRT tauschen. -- Alternatively, you acquire GRT directly on Arbitrum through a decentralized exchange. +- Alternativ können Sie GRT auch direkt auf Arbitrum über einen dezentralen Handelsplatz erwerben. -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. 
If you don't have GRT, you can learn how to get GRT [here](#getting-grt). +> In diesem Abschnitt wird davon ausgegangen, dass du bereits GRT in deiner Wallet hast und auf Arbitrum bist. Wenn Sie keine GRT haben, können Sie erfahren, wie Sie GRT [hier](#getting-grt) bekommen. -Once you bridge GRT, you can add it to your billing balance. +Sobald Sie GRT überbrücken, können Sie es zu Ihrem Rechnungssaldo hinzufügen. -### Adding GRT using a wallet +### Hinzufügen von GRT mit einer Wallet -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). +1. Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/billing/) auf. 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". -3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet". -4. Use the slider to estimate the number of queries you expect to make on a monthly basis. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. -5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. -6. Select the number of months you would like to prepay. - - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. -7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. -8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. -9. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. - -- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance. - -### Withdrawing GRT using a wallet - -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. -4. Enter the amount of GRT you would like to withdraw. -5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. -6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. +3. Wählen Sie die Schaltfläche „ Manage “ in der oberen rechten Ecke. Erstmalige Nutzer sehen die Option „Upgrade auf den Wachstumsplan“, während wiederkehrende Nutzer auf „Von der Wallet einzahlen“ klicken. +4. Verwenden Sie den Slider, um die Anzahl der Abfragen zu schätzen, die Sie monatlich erwarten. + - Vorschläge für die Anzahl der Abfragen, die Sie verwenden können, finden Sie auf unserer Seite **Häufig gestellte Fragen**. +5. Wählen Sie „Kryptowährung“. GRT ist derzeit die einzige Kryptowährung, die im The Graph Network akzeptiert wird. +6. 
Wählen Sie die Anzahl der Monate, die Sie im Voraus bezahlen möchten. + - Die Zahlung im Voraus verpflichtet Sie nicht zu einer zukünftigen Nutzung. Ihnen wird nur das berechnet, was Sie verbrauchen, und Sie können Ihr Guthaben jederzeit abheben. +7. Wählen Sie das Netzwerk, über das Sie Ihr GRT einzahlen. GRT auf Arbitrum oder Ethereum sind beide akzeptabel. +8. Klicken Sie auf „GRT-Zugriff zulassen“ und geben Sie dann die Menge an GRT an, die von Ihrer Wallet genommen werden kann. + - Wenn Sie für mehrere Monate im Voraus bezahlen, müssen Sie den Zugriff auf den Betrag erlauben, der diesem Betrag entspricht. Diese Interaktion kostet kein Gas. +9. Zum Schluss klicken Sie auf „GRT zu Rechnungssaldo hinzufügen“. Für diese Transaktion wird ETH auf Arbitrum benötigt, um die Gaskosten zu decken. + +- Beachten Sie, dass GRT, die von Arbitrum eingezahlt werden, innerhalb weniger Augenblicke verarbeitet werden, während GRT, die von Ethereum eingezahlt werden, etwa 15-20 Minuten zur Verarbeitung benötigen. Sobald die Transaktion bestätigt wurde, wird das GRT zu Ihrem Kontostand hinzugefügt. + +### GRT über eine Wallet abheben + +1. Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/billing/) auf. +2. Klicken Sie auf die Schaltfläche „Connect Wallet“ in der oberen rechten Ecke der Seite. Wählen Sie Ihre Wallet aus und klicken Sie auf „Verbinden“. +3. Klicken Sie auf die Schaltfläche „Verwalten“ in der oberen rechten Ecke der Seite. Wählen Sie „GRT abheben“. Ein Seitenfenster wird angezeigt. +4. Geben Sie den Betrag der GRT ein, den Sie abheben möchten. +5. Klicken Sie auf „GRT abheben“, um die GRT von Ihrem Kontostand abzuheben. Unterschreiben Sie die zugehörige Transaktion in Ihrer Wallet. Dies kostet Gas. Die GRT werden an Ihre Arbitrum Wallet gesendet. +6. Sobald die Transaktion bestätigt ist, werden die GRT von Ihrem Kontostand in Ihrem Arbitrum Wallet abgezogen. ### Adding GRT using a multisig wallet -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. -3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet". -4. Use the slider to estimate the number of queries you expect to make on a monthly basis. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. -5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. -6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. -7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. -8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. +1. 
Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/billing/) auf. +2. Klicke auf die Schaltfläche „Wallet verbinden“ in der oberen rechten Ecke der Seite. Wähle deine Wallet aus und klicke auf „Verbinden“. Wenn du die [Gnosis-Safe](https://gnosis-safe.io/) verwendest, kannst du sowohl deine Multisig-Wallet als auch deine Signatur-Wallet verbinden. Anschließend unterschreibe die zugehörige Nachricht. Dies verursacht keine Gasgebühren. +3. Wählen Sie die Schaltfläche „ Manage “ in der oberen rechten Ecke. Erstmalige Nutzer sehen die Option „Upgrade auf den Wachstumsplan“, während wiederkehrende Nutzer auf „Von der Wallet einzahlen“ klicken. +4. Verwenden Sie den Slider, um die Anzahl der Abfragen zu schätzen, die Sie monatlich erwarten. + - Vorschläge für die Anzahl der Abfragen, die Sie verwenden können, finden Sie auf unserer Seite **Häufig gestellte Fragen**. +5. Wählen Sie „Kryptowährung“. GRT ist derzeit die einzige Kryptowährung, die im The Graph Network akzeptiert wird. +6. Wählen Sie die Anzahl der Monate, die Sie im Voraus bezahlen möchten. + - Die Zahlung im Voraus verpflichtet Sie nicht zu einer zukünftigen Nutzung. Ihnen wird nur das berechnet, was Sie verbrauchen, und Sie können Ihr Guthaben jederzeit abheben. +7. Wählen Sie das Netzwerk, über das Sie Ihr GRT einzahlen. GRT auf Arbitrum oder Ethereum sind beide akzeptabel. 8. Klicken Sie auf „GRT-Zugang erlauben“ und geben Sie dann den Betrag an GRT an, der von Ihrer Wallet genommen werden kann. + - Wenn Sie für mehrere Monate im Voraus bezahlen, müssen Sie den Zugriff auf den Betrag erlauben, der diesem Betrag entspricht. Diese Interaktion kostet kein Gas. +8. Zum Schluss klicken Sie auf „GRT zu Rechnungssaldo hinzufügen“. Für diese Transaktion wird ETH auf Arbitrum benötigt, um die Gaskosten zu decken. -- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance. +- Beachten Sie, dass GRT, die von Arbitrum eingezahlt werden, innerhalb weniger Augenblicke verarbeitet werden, während GRT, die von Ethereum eingezahlt werden, etwa 15-20 Minuten zur Verarbeitung benötigen. Sobald die Transaktion bestätigt wurde, wird das GRT zu Ihrem Kontostand hinzugefügt. -## Getting GRT +## GRT abrufen -This section will show you how to get GRT to pay for query fees. +In diesem Abschnitt erfahren Sie, wie Sie GRT dazu bringen können, die Abfragegebühren zu bezahlen. ### Coinbase -This will be a step by step guide for purchasing GRT on Coinbase. +Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. -2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. -3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy/Sell" button on the top right of the page. -4. Select the currency you want to purchase. Select GRT. -5. Select the payment method. Select your preferred payment method. -6. Select the amount of GRT you want to purchase. -7. Review your purchase. Review your purchase and click "Buy GRT". -8. Confirm your purchase. Confirm your purchase and you will have successfully purchased GRT. -9. 
You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - To transfer the GRT to your wallet, click on the "Accounts" button on the top right of the page. - - Click on the "Send" button next to the GRT account. - - Enter the amount of GRT you want to send and the wallet address you want to send it to. - - Click "Continue" and confirm your transaction. -Please note that for larger purchase amounts, Coinbase may require you to wait 7-10 days before transferring the full amount to a wallet. +1. Gehen Sie zu [Coinbase](https://www.coinbase.com/) und erstellen Sie ein Konto. +2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. +3. Sobald Sie Ihre Identität überprüft haben, können Sie GRT kaufen. Dazu klicken Sie auf die Schaltfläche „Kaufen/Verkaufen“ oben rechts auf der Seite. +4. Wählen Sie die Währung, die Sie kaufen möchten. Wählen Sie GRT. +5. Wählen Sie die Zahlungsmethode aus. Wählen Sie Ihre bevorzugte Zahlungsmethode aus. +6. Wählen Sie die Menge an GRT, die Sie kaufen möchten. +7. Überprüfen Sie Ihren Einkauf. Überprüfen Sie Ihren Einkauf und klicken Sie auf „GRT kaufen“. +8. Bestätigen Sie Ihren Kauf. Bestätigen Sie Ihren Kauf und Sie haben GRT erfolgreich gekauft. +9. Sie können die GRT von Ihrem Konto auf Ihre Wallet wie [MetaMask](https://metamask.io/) übertragen. + - Um GRT auf Ihre Wallet zu übertragen, klicken Sie auf die Schaltfläche „Konten“ oben rechts auf der Seite. + - Klicken Sie auf die Schaltfläche „Senden“ neben dem GRT Konto. + - Geben Sie den Betrag an GRT ein, den Sie senden möchten, und die Wallet-Adresse, an die Sie ihn senden möchten. + - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -Bitte beachten Sie, dass Coinbase Sie bei größeren Kaufbeträgen möglicherweise 7-10 Tage warten lässt, bevor Sie den vollen Betrag in eine Krypto-Wallet überweisen. -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Sie können mehr über den Erwerb von GRT auf Coinbase [hier](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency) erfahren. ### Binance -This will be a step by step guide for purchasing GRT on Binance. +Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. -2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. -3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy Now" button on the homepage banner. -4. You will be taken to a page where you can select the currency you want to purchase. Select GRT. -5. Select your preferred payment method. You'll be able to pay with different fiat currencies such as Euros, US Dollars, and more. -6. Select the amount of GRT you want to purchase. -7. Review your purchase and click "Buy GRT". -8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). 
- - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. - - Click on the "wallet" button, click withdraw, and select GRT. - - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - - Click "Continue" and confirm your transaction. +1. Gehen Sie auf [Binance](https://www.binance.com/en) und erstellen Sie ein Konto. +2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. +3. Sobald Sie Ihre Identität überprüft haben, können Sie GRT kaufen. Dazu klicken Sie auf die Schaltfläche „Jetzt kaufen“ auf dem Banner der Homepage. +4. Sie werden zu einer Seite weitergeleitet, auf der Sie die Währung auswählen können, die Sie kaufen möchten. Wählen Sie GRT. +5. Wählen Sie Ihre bevorzugte Zahlungsmethode. Sie können mit verschiedenen Fiat-Währungen wie Euro, US-Dollar und mehr bezahlen. +6. Wählen Sie die Menge an GRT, die Sie kaufen möchten. +7. Überprüfen Sie Ihren Kauf und klicken Sie auf „GRT kaufen“. +8. Bestätigen Sie Ihren Kauf und Sie werden Ihr GRT in Ihrer Binance Spot Wallet sehen können. +9. Sie können GRT von Ihrem Konto auf Ihre Wallet wie [MetaMask](https://metamask.io/) abheben. + - [Um GRT auf Ihr Wallet abzuheben](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570), fügen Sie die Adresse Ihres Wallets zur Whitelist für Abhebungen hinzu. + - Klicken Sie auf die Schaltfläche „Wallet“, klicken Sie auf Abheben und wählen Sie GRT. + - Geben Sie den GRT-Betrag ein, den Sie senden möchten, und die Wallet-Adresse, die auf der Whitelist steht. + - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Sie können mehr über den Erwerb von GRT auf Binance [hier](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582) erfahren. ### Uniswap -This is how you can purchase GRT on Uniswap. +So können Sie GRT auf Uniswap kaufen. -1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. -2. Select the token you want to swap from. Select ETH. -3. Select the token you want to swap to. Select GRT. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -4. Enter the amount of ETH you want to swap. -5. Click "Swap". -6. Confirm the transaction in your wallet and you wait for the transaction to process. +1. Gehen Sie auf [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) und verbinden Sie Ihre Wallet. +2. Wählen Sie den Token, von dem Sie tauschen möchten. Wählen Sie ETH. +3. Wählen Sie den Token, in den Sie tauschen möchten. Wählen Sie GRT. + - Stelle sicher, dass du den richtigen Token tauschst. Die Smart-Contract-Adresse von GRT auf Arbitrum One ist: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +4. Geben Sie den Betrag an ETH ein, den Sie tauschen möchten. +5. 
Klicken Sie auf „Swap“.
+6. Bestätigen Sie die Transaktion in Ihrer Wallet und warten Sie auf die Abwicklung der Transaktion.
-You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-).
+Sie können mehr über den Erwerb von GRT auf Uniswap [hier](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-) erfahren.
-## Getting Ether
+## Ether erhalten
-This section will show you how to get Ether (ETH) to pay for transaction fees or gas costs. ETH is necessary to execute operations on the Ethereum network such as transferring tokens or interacting with contracts.
+In diesem Abschnitt erfahren Sie, wie Sie Ether (ETH) erhalten können, um Transaktionsgebühren oder Gaskosten zu bezahlen. ETH ist notwendig, um Operationen im Ethereum-Netzwerk auszuführen, wie z. B. die Übertragung von Token oder die Interaktion mit Verträgen.
### Coinbase
-This will be a step by step guide for purchasing ETH on Coinbase.
-
-1. Go to [Coinbase](https://www.coinbase.com/) and create an account.
-2. Once you have created an account, verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
-3. Once you have verified your identity, purchase ETH by clicking on the "Buy/Sell" button on the top right of the page.
-4. Select the currency you want to purchase. Select ETH.
-5. Select your preferred payment method.
-6. Enter the amount of ETH you want to purchase.
-7. Review your purchase and click "Buy ETH".
-8. Confirm your purchase and you will have successfully purchased ETH.
-9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/).
- - To transfer the ETH to your wallet, click on the "Accounts" button on the top right of the page.
- - Click on the "Send" button next to the ETH account.
- - Enter the amount of ETH you want to send and the wallet address you want to send it to.
- - Ensure that you are sending to your Ethereum wallet address on Arbitrum One.
- - Click "Continue" and confirm your transaction.
-
-You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency).
+Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von ETH auf Coinbase.
+
+1. Gehen Sie zu [Coinbase](https://www.coinbase.com/) und erstellen Sie ein Konto.
+2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen.
+3. Sobald Sie Ihre Identität bestätigt haben, können Sie ETH kaufen, indem Sie auf die Schaltfläche „Kaufen/Verkaufen“ oben rechts auf der Seite klicken.
+4. Wählen Sie die Währung, die Sie kaufen möchten. Wählen Sie ETH.
+5. Wählen Sie die gewünschte Zahlungsmethode.
+6. Geben Sie die Menge an ETH ein, die Sie kaufen möchten.
+7. Überprüfen Sie Ihren Kauf und klicken Sie auf „ETH kaufen“.
+8. Bestätigen Sie Ihren Kauf und Sie haben erfolgreich ETH gekauft.
+9. Sie können die ETH von Ihrem Coinbase-Konto auf Ihr Wallet wie [MetaMask](https://metamask.io/) übertragen.
 - Um die ETH auf Ihre Wallet zu übertragen, klicken Sie auf die Schaltfläche „Konten“ oben rechts auf der Seite.
 - Klicken Sie auf die Schaltfläche „Senden“ neben dem ETH-Konto.
+ - Geben Sie den ETH-Betrag ein, den Sie senden möchten, und die Wallet-Adresse, an die Sie ihn senden möchten. + - Stellen Sie sicher, dass Sie an Ihre Ethereum Wallet Adresse auf Arbitrum One senden. + - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. + +Sie können mehr über den Erwerb von ETH auf Coinbase [hier](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency) erfahren. ### Binance -This will be a step by step guide for purchasing ETH on Binance. +Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von ETH auf Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. -2. Once you have created an account, verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. -3. Once you have verified your identity, purchase ETH by clicking on the "Buy Now" button on the homepage banner. -4. Select the currency you want to purchase. Select ETH. -5. Select your preferred payment method. -6. Enter the amount of ETH you want to purchase. -7. Review your purchase and click "Buy ETH". -8. Confirm your purchase and you will see your ETH in your Binance Spot Wallet. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). - - To withdraw the ETH to your wallet, add your wallet's address to the withdrawal whitelist. - - Click on the "wallet" button, click withdraw, and select ETH. - - Enter the amount of ETH you want to send and the whitelisted wallet address you want to send it to. - - Ensure that you are sending to your Ethereum wallet address on Arbitrum One. - - Click "Continue" and confirm your transaction. +1. Gehen Sie auf [Binance](https://www.binance.com/en) und erstellen Sie ein Konto. +2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. +3. Sobald Sie Ihre Identität verifiziert haben, kaufen Sie ETH, indem Sie auf die Schaltfläche „Jetzt kaufen“ auf dem Banner der Homepage klicken. +4. Wählen Sie die Währung, die Sie kaufen möchten. Wählen Sie ETH. +5. Wählen Sie die gewünschte Zahlungsmethode. +6. Geben Sie die Menge an ETH ein, die Sie kaufen möchten. +7. Überprüfen Sie Ihren Kauf und klicken Sie auf „ETH kaufen“. +8. Bestätigen Sie Ihren Kauf und Sie werden Ihre ETH in Ihrer Binance Spot Wallet sehen. +9. Sie können die ETH von Ihrem Konto auf Ihr Wallet wie [MetaMask](https://metamask.io/) abheben. + - Um die ETH auf Ihre Wallet abzuheben, fügen Sie die Adresse Ihrer Wallet zur Abhebungs-Whitelist hinzu. + - Klicken Sie auf die Schaltfläche „Wallet“, klicken Sie auf „withdraw“ und wählen Sie ETH. + - Geben Sie den ETH-Betrag ein, den Sie senden möchten, und die Adresse der Wallet, die auf der Whitelist steht, an die Sie den Betrag senden möchten. + - Stellen Sie sicher, dass Sie an Ihre Ethereum Wallet Adresse auf Arbitrum One senden. + - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). 
+Sie können mehr über den Erwerb von ETH auf Binance [hier](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582) erfahren. -## Billing FAQs +## FAQs zur Rechnungsstellung -### How many queries will I need? +### Wie viele Abfragen benötige ich? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +Sie müssen nicht im Voraus wissen, wie viele Abfragen Sie benötigen werden. Ihnen wird nur das berechnet, was Sie verbrauchen, und Sie können jederzeit GRT von Ihrem Konto abheben. -We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. +Wir empfehlen Ihnen, die Anzahl der Abfragen, die Sie benötigen, zu überschlagen, damit Sie Ihr Guthaben nicht häufig aufstocken müssen. Eine gute Schätzung für kleine bis mittelgroße Anwendungen ist, mit 1 Mio. bis 2 Mio. Abfragen pro Monat zu beginnen und die Nutzung in den ersten Wochen genau zu überwachen. Bei größeren Anwendungen ist es sinnvoll, die Anzahl der täglichen Besuche auf Ihrer Website mit der Anzahl der Abfragen zu multiplizieren, die Ihre aktivste Seite beim Öffnen auslöst. -Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage. +Natürlich können sich sowohl neue als auch bestehende Nutzer an das BD-Team von Edge & Node wenden, um mehr über die voraussichtliche Nutzung zu erfahren. -### Can I withdraw GRT from my billing balance? +### Kann ich GRT von meinem Rechnungssaldo abheben? -Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). +Ja, Sie können jederzeit GRT, die nicht bereits für Abfragen verwendet wurden, von Ihrem Abrechnungskonto abheben. Der Abrechnungsvertrag ist nur dafür gedacht, GRT aus dem Ethereum-Mainnet in das Arbitrum-Netzwerk zu übertragen. Wenn Sie Ihr GRT von Arbitrum zurück ins Ethereum-Mainnet transferieren möchten, müssen Sie den [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### Was passiert, wenn mein Guthaben aufgebraucht ist? Werde ich eine Warnung erhalten? -You will receive several email notifications before your billing balance runs out. +Sie erhalten mehrere E-Mail-Benachrichtigungen, bevor Ihr Guthaben aufgebraucht ist. diff --git a/website/pages/de/chain-integration-overview.mdx b/website/pages/de/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/de/chain-integration-overview.mdx +++ b/website/pages/de/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. 
[Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. 
diff --git a/website/pages/de/cookbook/arweave.mdx b/website/pages/de/cookbook/arweave.mdx index ec3eca650e4f..6e68ee97cf34 100644 --- a/website/pages/de/cookbook/arweave.mdx +++ b/website/pages/de/cookbook/arweave.mdx @@ -155,7 +155,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash -graph deploy --studio --access-token +graph deploy --access-token ``` ## Querying an Arweave Subgraph diff --git a/website/pages/de/cookbook/avoid-eth-calls.mdx b/website/pages/de/cookbook/avoid-eth-calls.mdx index 446b0e8ecd17..8897ecdbfdc7 100644 --- a/website/pages/de/cookbook/avoid-eth-calls.mdx +++ b/website/pages/de/cookbook/avoid-eth-calls.mdx @@ -99,4 +99,18 @@ Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0 ## Conclusion -We can significantly improve indexing performance by minimizing or eliminating `eth_calls` in our subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/de/cookbook/cosmos.mdx b/website/pages/de/cookbook/cosmos.mdx index 6401744e7940..c37bc95e625e 100644 --- a/website/pages/de/cookbook/cosmos.mdx +++ b/website/pages/de/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and Die Handler für die Ereignisverarbeitung sind in [AssemblyScript](https://www.assemblyscript.org/) geschrieben. -Die Cosmos-Indizierung führt Cosmos-spezifische Datentypen in die [AssemblyScript-API](/developing/graph-ts/api/) ein. +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -203,7 +203,7 @@ Once your subgraph has been created, you can deploy your subgraph by using the ` Visit the Subgraph Studio to create a new subgraph. ```bash -graph deploy --studio subgraph-name +graph deploy subgraph-name ``` **Local Graph Node (based on default configuration):** diff --git a/website/pages/de/cookbook/derivedfrom.mdx b/website/pages/de/cookbook/derivedfrom.mdx index 69dd48047744..09ba62abde3f 100644 --- a/website/pages/de/cookbook/derivedfrom.mdx +++ b/website/pages/de/cookbook/derivedfrom.mdx @@ -69,6 +69,20 @@ This will not only make our subgraph more efficient, but it will also unlock thr ## Conclusion -Adopting the `@derivedFrom` directive in subgraphs effectively handles dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. 
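+
+As a reminder of what this looks like in practice, here is a minimal schema sketch (the `Account` and `Transfer` entities are illustrative and not taken from this guide): the virtual `transfers` field is resolved at query time through the reverse lookup on `Transfer.from`, so no array has to be stored or rewritten on `Account`.
+
+```graphql
+type Account @entity {
+  id: Bytes!
+  # Virtual field: populated at query time via the reverse lookup on Transfer.from
+  transfers: [Transfer!]! @derivedFrom(field: "from")
+}
+
+type Transfer @entity(immutable: true) {
+  id: Bytes!
+  from: Account!
+  to: Bytes!
+  value: BigInt!
+}
+```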
-To learn more detailed strategies to avoid large arrays, read this blog from Kevin Jones: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/de/cookbook/enums.mdx b/website/pages/de/cookbook/enums.mdx index a10970c1539f..8db81193d949 100644 --- a/website/pages/de/cookbook/enums.mdx +++ b/website/pages/de/cookbook/enums.mdx @@ -50,7 +50,7 @@ type Token @entity { In this schema, `TokenStatus` is a simple string with no specific, allowed values. -#### Why is this a problem? +#### Warum ist das ein Problem? - There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. - It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. diff --git a/website/pages/de/cookbook/grafting-hotfix.mdx b/website/pages/de/cookbook/grafting-hotfix.mdx index 4be0a0b07790..040e3a8209d5 100644 --- a/website/pages/de/cookbook/grafting-hotfix.mdx +++ b/website/pages/de/cookbook/grafting-hotfix.mdx @@ -1,5 +1,5 @@ --- -Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment --- ## TLDR @@ -173,14 +173,14 @@ By incorporating grafting into your subgraph development workflow, you can enhan ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/de/cookbook/grafting.mdx b/website/pages/de/cookbook/grafting.mdx index d6a88a506760..4e2311f14da2 100644 --- a/website/pages/de/cookbook/grafting.mdx +++ b/website/pages/de/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
## Additional Resources -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/de/cookbook/immutable-entities-bytes-as-ids.mdx b/website/pages/de/cookbook/immutable-entities-bytes-as-ids.mdx index f38c33385604..541212617f9f 100644 --- a/website/pages/de/cookbook/immutable-entities-bytes-as-ids.mdx +++ b/website/pages/de/cookbook/immutable-entities-bytes-as-ids.mdx @@ -174,3 +174,17 @@ Query Response: Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/de/cookbook/near.mdx b/website/pages/de/cookbook/near.mdx index 0b5436f36152..6ab983b22272 100644 --- a/website/pages/de/cookbook/near.mdx +++ b/website/pages/de/cookbook/near.mdx @@ -194,8 +194,8 @@ The node configuration will depend on where the subgraph is being deployed. ### Subgraph Studio ```sh -graph auth --studio -graph deploy --studio +graph auth +graph deploy ``` ### Local Graph Node (based on default configuration) diff --git a/website/pages/de/cookbook/pruning.mdx b/website/pages/de/cookbook/pruning.mdx index f22a2899f1de..d86bf50edf42 100644 --- a/website/pages/de/cookbook/pruning.mdx +++ b/website/pages/de/cookbook/pruning.mdx @@ -39,3 +39,17 @@ dataSources: ## Conclusion Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/de/cookbook/subgraph-uncrashable.mdx b/website/pages/de/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/de/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/de/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/de/cookbook/timeseries.mdx b/website/pages/de/cookbook/timeseries.mdx index 88ee70005a6e..2ce0ce266ccf 100644 --- a/website/pages/de/cookbook/timeseries.mdx +++ b/website/pages/de/cookbook/timeseries.mdx @@ -181,14 +181,14 @@ By adopting this pattern, developers can build more efficient and scalable subgr ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/de/cookbook/transfer-to-the-graph.mdx b/website/pages/de/cookbook/transfer-to-the-graph.mdx index 287cd7d81b4b..3844b7a94142 100644 --- a/website/pages/de/cookbook/transfer-to-the-graph.mdx +++ b/website/pages/de/cookbook/transfer-to-the-graph.mdx @@ -12,15 +12,15 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. 
[Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
## 1. Set Up Your Studio Environment
### Create a Subgraph in Subgraph Studio
-- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
+- Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und verbinden Sie Ihre Wallet.
- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
> Note: After publishing, the subgraph name will be editable but requires on-chain action each time, so name it properly.
@@ -31,7 +31,7 @@ You must have [Node.js](https://nodejs.org/) and a package manager of your choic
On your local machine, run the following command:
-Using [npm](https://www.npmjs.com/):
+Verwendung von [npm](https://www.npmjs.com/):
```sh
npm install -g @graphprotocol/graph-cli@latest
```
@@ -48,7 +48,7 @@ graph init --product subgraph-studio
In The Graph CLI, use the auth command seen in Subgraph Studio:
```sh
-graph auth --studio
+graph auth
```
## 2. Deploy Your Subgraph to Studio
@@ -58,7 +58,7 @@ If you have your source code, you can easily deploy it to Studio. If you don't h
In The Graph CLI, run the following command:
```sh
-graph deploy --studio --ipfs-hash
+graph deploy --ipfs-hash
```
diff --git a/website/pages/de/deploying/deploy-using-subgraph-studio.mdx b/website/pages/de/deploying/deploy-using-subgraph-studio.mdx
index 502169b4ccfa..6b2f6a058019 100644
--- a/website/pages/de/deploying/deploy-using-subgraph-studio.mdx
+++ b/website/pages/de/deploying/deploy-using-subgraph-studio.mdx
@@ -1,106 +1,104 @@
---
-title: Deploy Using Subgraph Studio
+title: Bereitstellung mit Subgraph Studio
---
-Learn how to deploy your subgraph to Subgraph Studio.
+Erfahren Sie, wie Sie Ihren Subgraph in Subgraph Studio bereitstellen können.
-> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it on-chain.
+> Hinweis: Wenn Sie einen Subgraph bereitstellen, schieben Sie ihn zu Subgraph Studio, wo Sie ihn testen können. Es ist wichtig zu wissen, dass Bereitstellen nicht dasselbe ist wie Veröffentlichen. Wenn Sie einen Subgraph veröffentlichen, dann veröffentlichen Sie ihn in der Kette.
-## Subgraph Studio Overview
+## Subgraph Studio Überblick
-In [Subgraph Studio](https://thegraph.com/studio/), you can do the following:
+In [Subgraph Studio](https://thegraph.com/studio/) können Sie Folgendes tun:
-- View a list of subgraphs you've created
-- Manage, view details, and visualize the status of a specific subgraph
-- Create and manage your API keys for specific subgraphs
-- Restrict your API keys to specific domains and allow only certain Indexers to query with them
-- Create your subgraph through the Studio UI
-- Deploy your subgraph using the The Graph CLI
-- Test your subgraph in the playground environment
-- Integrate your subgraph in staging using the development query URL
-- Publish your subgraph with the Studio UI
-- Manage your billing
+- Eine Liste der von Ihnen erstellten Subgraphs anzeigen
+- Verwalten, Anzeigen von Details und Visualisieren des Status eines bestimmten Subgraphen
+- Erstellen und verwalten Sie Ihre API-Schlüssel für bestimmte Subgraphen
+- Schränken Sie Ihre API-Schlüssel auf bestimmte Domains ein und erlauben Sie nur bestimmten Indexern die Abfrage mit diesen Schlüsseln
+- Erstellen Sie Ihren Subgraph
+- Verteilen Sie Ihren Subgraph mit The Graph CLI
+- Testen Sie Ihren Subgraph in der „Playground“-Umgebung
+- Integrieren Sie Ihren Subgraph in Staging unter Verwendung der Entwicklungsabfrage-URL
+- Veröffentlichen Sie Ihren Subgraph auf The Graph Network
+- Verwalten Sie Ihre Rechnungen
-## Install The Graph CLI
+## Installieren der Graph-CLI
-Before deploying, you must install The Graph CLI.
+Vor der Bereitstellung müssen Sie The Graph CLI installieren.
-You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use The Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.
+Sie müssen [Node.js](https://nodejs.org/) und einen Paketmanager Ihrer Wahl (`npm`, `yarn` oder `pnpm`) installiert haben, um The Graph CLI zu verwenden. Prüfen Sie, ob die [aktuellste](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI-Version installiert ist.
-**Install with yarn:**
+### Installieren mit yarn
```bash
yarn global add @graphprotocol/graph-cli
```
-**Install with npm:**
+### Installieren mit npm
```bash
npm install -g @graphprotocol/graph-cli
```
-## Create Your Subgraph
+## Los geht’s
-Before deploying your subgraph you need to create an account in [Subgraph Studio](https://thegraph.com/studio/).
+1. Öffnen Sie [Subgraph Studio](https://thegraph.com/studio/).
+2. Verbinden Sie Ihre Wallet, um sich anzumelden.
+   - Sie können dies über MetaMask, Coinbase Wallet, WalletConnect oder Safe tun.
+3. Nachdem Sie sich angemeldet haben, wird Ihr eindeutiger Bereitstellungsschlüssel auf der Detailseite Ihres Subgraphen angezeigt.
+   - Mit dem Bereitstellungsschlüssel können Sie Ihre Subgraphs veröffentlichen oder Ihre API-Schlüssel und Abrechnungen verwalten. Er ist einmalig, kann aber neu generiert werden, wenn Sie glauben, dass er kompromittiert wurde.
-1. Open [Subgraph Studio](https://thegraph.com/studio/).
-2. Connect your wallet to sign in.
-   - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe.
-3. After you sign in, your unique deploy key will be displayed on your subgraph details page.
-   - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
+> Wichtig: Sie benötigen einen API-Schlüssel, um Subgraphs abzufragen
-> Important: You need an API key to query subgraphs
-
-### How to Create a Subgraph in Subgraph Studio
+### So erstellen Sie einen Subgraph in Subgraph Studio
-> For additional written detail, review the [Quick-Start](/quick-start/).
+> Weitere schriftliche Informationen finden Sie im [Schnellstart](/quick-start/).
-### Subgraph Compatibility with The Graph Network
+### Kompatibilität von Subgraphs mit The Graph Network
-In order to be supported by Indexers on The Graph Network, subgraphs must:
+Um von Indexern auf The Graph Network unterstützt zu werden, müssen Subgraphen:
-- Index a [supported network](/developing/supported-networks)
-- Must not use any of the following features:
+- Ein [unterstütztes Netzwerk](/developing/supported-networks) indizieren
+- Sie dürfen keine der folgenden Funktionen verwenden:
  - ipfs.cat & ipfs.map
  - Non-fatal errors
  - Grafting
-## Initialize Your Subgraph
+## Initialisieren Ihres Subgraphen
-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Sobald Ihr Subgraph in Subgraph Studio erstellt wurde, können Sie seinen Code über die CLI mit diesem Befehl initialisieren:
```bash
-graph init --studio
+graph init
```
-You can find the `` value on your subgraph details page in Subgraph Studio, see image below:
+Sie finden den Wert `` auf der Detailseite Ihres Subgraphs in Subgraph Studio, siehe Abbildung unten:
![Subgraph Studio - Slug](/img/doc-subgraph-slug.png)
-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+Nachdem Sie `graph init` ausgeführt haben, werden Sie aufgefordert, die Vertragsadresse, das Netzwerk und eine ABI einzugeben, die Sie abfragen möchten. Daraufhin wird ein neuer Ordner auf Ihrem lokalen Rechner erstellt, der einige grundlegende Codes enthält, um mit der Arbeit an Ihrem Subgraph zu beginnen. Anschließend können Sie Ihren Subgraph fertigstellen, um sicherzustellen, dass er wie erwartet funktioniert.
## Graph Auth
-Before you can deploy your subgraph to Subgraph Studio, you need to login into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page.
+Bevor Sie Ihren Subgraph in Subgraph Studio bereitstellen können, müssen Sie sich bei Ihrem Konto in der CLI anmelden. Dazu benötigen Sie Ihren Bereitstellungsschlüssel, den Sie auf der Seite mit den Details Ihres Subgraphen finden.
-Then, use the following command to authenticate from the CLI:
+Verwenden Sie dann den folgenden Befehl, um sich über die CLI zu authentifizieren:
```bash
-graph auth --studio
+graph auth
```
-## Deploying a Subgraph
+## Bereitstellen eines Subgraphs
Once you are ready, you can deploy your subgraph to Subgraph Studio.
-> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and and update the metadata. This action won't publish your subgraph to the decentralized network.
+> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network.
Use the following CLI command to deploy your subgraph:
```bash
-graph deploy --studio
+graph deploy
```
After running this command, the CLI will ask for a version label.
@@ -126,7 +124,7 @@ If you want to update your subgraph, you can do the following:
- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer).
- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index.
-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment.
+You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an on-chain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment.
> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/network/curating/).
diff --git a/website/pages/de/developing/creating-a-subgraph/advanced.mdx b/website/pages/de/developing/creating-a-subgraph/advanced.mdx
new file mode 100644
index 000000000000..6a27f4a235a0
--- /dev/null
+++ b/website/pages/de/developing/creating-a-subgraph/advanced.mdx
@@ -0,0 +1,555 @@
+---
+title: Advanced Subgraph Features
+---
+
+## Overview
+
+Add and implement advanced subgraph features to enhance your subgraph's build.
+
+Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+
+| Feature                                              | Name             |
+| ---------------------------------------------------- | ---------------- |
+| [Non-fatal errors](#non-fatal-errors)                | `nonFatalErrors` |
+| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
+| [Grafting](#grafting-onto-existing-subgraphs)        | `grafting`       |
+
+For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+
+```yaml
+specVersion: 0.0.4
+description: Gravatar for Ethereum
+features:
+  - fullTextSearch
+  - nonFatalErrors
+dataSources: ...
+```
+
+> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+
+## Timeseries and Aggregations
+
+Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, etc.
+
+This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps.
Aggregation entities perform pre-declared calculations on the Timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. + +### Example Schema + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} + +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +### Defining Timeseries and Aggregations + +Timeseries entities are defined with `@entity(timeseries: true)` in schema.graphql. Every timeseries entity must have a unique ID of the int8 type, a timestamp of the Timestamp type, and include data that will be used for calculation by aggregation entities. These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the Aggregation entities. + +Aggregation entities are defined with `@aggregation` in schema.graphql. Every aggregation entity defines the source from which it will gather data (which must be a Timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. + +#### Available Aggregation Intervals + +- `hour`: sets the timeseries period every hour, on the hour. +- `day`: sets the timeseries period every day, starting and ending at 00:00. + +#### Available Aggregation Functions + +- `sum`: Total of all values. +- `count`: Number of values. +- `min`: Minimum value. +- `max`: Maximum value. +- `first`: First value in the period. +- `last`: Last value in the period. + +#### Example Aggregations Query + +```graphql +{ + stats(interval: "hour", where: { timestamp_gt: 1704085200 }) { + id + timestamp + sum + } +} +``` + +Note: + +To use Timeseries and Aggregations, a subgraph must have a spec version ≥1.1.0. Note that this feature might undergo significant changes that could affect backward compatibility. + +[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. + +## Non-fatal errors + +Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. + +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. + +Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - nonFatalErrors + ... +``` + +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. 
It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: + +```graphql +foos(first: 100, subgraphError: allow) { + id +} + +_meta { + hasIndexingErrors +} +``` + +If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: + +```graphql +"data": { + "foos": [ + { + "id": "0xdead" + } + ], + "_meta": { + "hasIndexingErrors": true + } +}, +"errors": [ + { + "message": "indexing_error" + } +] +``` + +## IPFS/Arweave File Data Sources + +File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. + +> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. + +### Overview + +Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found. + +This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources. + +> This replaces the existing `ipfs.cat` API + +### Upgrade guide + +#### Update `graph-ts` and `graph-cli` + +File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1 + +#### Add a new entity type which will be updated when files are found + +File data sources cannot access or update chain-based entities, but must update file specific entities. + +This may mean splitting out fields from existing entities into separate entities, linked together. + +Original combined entity: + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + externalURL: String! + ipfsURI: String! + image: String! + name: String! + description: String! + type: String! + updatedAtTimestamp: BigInt + owner: User! +} +``` + +New, split entity: + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + ipfsURI: TokenMetadata + updatedAtTimestamp: BigInt + owner: String! +} + +type TokenMetadata @entity { + id: ID! + image: String! + externalURL: String! + name: String! + description: String! +} +``` + +If the relationship is 1:1 between the parent entity and the resulting file data source entity, the simplest pattern is to link the parent entity to a resulting file entity by using the IPFS CID as the lookup. Get in touch on Discord if you are having difficulty modelling your new file-based entities! + +> You can use [nested filters](/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities. + +#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave` + +This is the data source which will be spawned when a file of interest is identified. 
+ +```yaml +templates: + - name: TokenMetadata + kind: file/ipfs + mapping: + apiVersion: 0.0.7 + language: wasm/assemblyscript + file: ./src/mapping.ts + handler: handleMetadata + entities: + - TokenMetadata + abis: + - name: Token + file: ./abis/Token.json +``` + +> Currently `abis` are required, though it is not possible to call contracts from within file data sources + +The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details. + +#### Create a new handler to process files + +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). + +The CID of the file as a readable string can be accessed via the `dataSource` as follows: + +```typescript +const cid = dataSource.stringParam() +``` + +Example handler: + +```typescript +import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' +import { TokenMetadata } from '../generated/schema' + +export function handleMetadata(content: Bytes): void { + let tokenMetadata = new TokenMetadata(dataSource.stringParam()) + const value = json.fromBytes(content).toObject() + if (value) { + const image = value.get('image') + const name = value.get('name') + const description = value.get('description') + const externalURL = value.get('external_url') + + if (name && image && description && externalURL) { + tokenMetadata.name = name.toString() + tokenMetadata.image = image.toString() + tokenMetadata.externalURL = externalURL.toString() + tokenMetadata.description = description.toString() + } + + tokenMetadata.save() + } +} +``` + +#### Spawn file data sources when required + +You can now create file data sources during execution of chain-based handlers: + +- Import the template from the auto-generated `templates` +- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave + +For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). + +For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing). + +Example: + +```typescript +import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' + +const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' +//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. 
+ +export function handleTransfer(event: TransferEvent): void { + let token = Token.load(event.params.tokenId.toString()) + if (!token) { + token = new Token(event.params.tokenId.toString()) + token.tokenID = event.params.tokenId + + token.tokenURI = '/' + event.params.tokenId.toString() + '.json' + const tokenIpfsHash = ipfshash + token.tokenURI + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" + + token.ipfsURI = tokenIpfsHash + + TokenMetadataTemplate.create(tokenIpfsHash) + } + + token.updatedAtTimestamp = event.block.timestamp + token.owner = event.params.to.toHexString() + token.save() +} +``` + +This will create a new file data source, which will poll Graph Node's configured IPFS or Arweave endpoint, retrying if it is not found. When the file is found, the file data source handler will be executed. + +This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. + +> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file + +Congratulations, you are using file data sources! + +#### Deploying your subgraphs + +You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. + +#### Limitations + +File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: + +- Entities created by File Data Sources are immutable, and cannot be updated +- File Data Source handlers cannot access entities from other file data sources +- Entities associated with File Data Sources cannot be accessed by chain-based handlers + +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! + +Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. + +#### Best practices + +If you are linking NFT metadata to corresponding tokens, use the metadata's IPFS hash to reference a Metadata entity from the Token entity. Save the Metadata entity using the IPFS hash as an ID. + +You can use [DataSource context](/developing/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. + +If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity. + +> We are working to improve the above recommendation, so queries only return the "most recent" version + +#### Known issues + +File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI. + +Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file. 
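+
+A minimal sketch of that workaround, assuming a hypothetical `./src/metadata.ts` module: keep the file data source handler in its own mapping file (one that does not import any `eth_call` contract bindings) and point the template's `file` at it.
+
+```yaml
+templates:
+  - name: TokenMetadata
+    kind: file/ipfs
+    mapping:
+      apiVersion: 0.0.7
+      language: wasm/assemblyscript
+      # Dedicated mapping file that does not import eth_call contract bindings
+      file: ./src/metadata.ts
+      handler: handleMetadata
+      entities:
+        - TokenMetadata
+      abis:
+        - name: Token
+          file: ./abis/Token.json
+```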
+ +#### Beispiele + +[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) + +#### References + +[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) + +## Indexed Argument Filters / Topic Filters + +> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` + +Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. + +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. + +- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. + +### How Topic Filters Work + +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. + +- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. + +```solidity +// SPDX-License-Identifier: MIT +pragma solidity ^0.8.0; + +contract Token { + // Event declaration with indexed parameters for addresses + event Transfer(address indexed from, address indexed to, uint256 value); + + // Function to simulate transferring tokens + function transfer(address to, uint256 value) public { + // Emitting the Transfer event with from, to, and value + emit Transfer(msg.sender, to, value); + } +} +``` + +In this example: + +- The `Transfer` event is used to log transactions of tokens between addresses. +- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. +- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. + +#### Configuration in Subgraphs + +Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: + +```yaml +eventHandlers: + - event: SomeEvent(indexed uint256, indexed address, indexed uint256) + handler: handleSomeEvent + topic1: ['0xValue1', '0xValue2'] + topic2: ['0xAddress1', '0xAddress2'] + topic3: ['0xValue3'] +``` + +In this setup: + +- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. +- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic. + +#### Filter Logic + +- Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic. +- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler. 
+ +#### Example 1: Tracking Direct Transfers from Address A to Address B + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleDirectedTransfer + topic1: ['0xAddressA'] # Sender Address + topic2: ['0xAddressB'] # Receiver Address +``` + +In this configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. +- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. + +#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleTransferToOrFrom + topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address + topic2: ['0xAddressB', '0xAddressC'] # Receiver Address +``` + +In this configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. +- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. + +## Declared eth_call + +> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. + +Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. + +This feature does the following: + +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Allows faster data fetching, resulting in quicker query responses and a better user experience. +- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. + +### Key Concepts + +- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. +- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously. +- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel). + +#### Scenario without Declarative `eth_calls` + +Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. + +Traditionally, these calls might be made sequentially: + +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds + +Total time taken = 3 + 2 + 4 = 9 seconds + +#### Scenario with Declarative `eth_calls` + +With this feature, you can declare these calls to be executed in parallel: + +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds + +Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. + +Total time taken = max (3, 2, 4) = 4 seconds + +#### How it Works + +1. 
Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. + +#### Example Configuration in Subgraph Manifest + +Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. + +`Subgraph.yaml` using `event.address`: + +```yaml +eventHandlers: +event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24) +handler: handleSwap +calls: + global0X128: Pool[event.address].feeGrowthGlobal0X128() + global1X128: Pool[event.address].feeGrowthGlobal1X128() +``` + +Details for the example above: + +- `global0X128` is the declared `eth_call`. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` +- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. + +`Subgraph.yaml` using `event.params` + +```yaml +calls: + - ERC20DecimalsToken0: ERC20[event.params.token0].decimals() +``` + +### Grafting onto Existing Subgraphs + +> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). + +When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. + +A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: + +```yaml +description: ... +graft: + base: Qm... # Subgraph ID of base subgraph + block: 7345624 # Block number +``` + +When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. + +Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. + +The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: + +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces +- It changes for which entity types an interface is implemented + +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. diff --git a/website/pages/de/developing/creating-a-subgraph/assemblyscript-mappings.mdx b/website/pages/de/developing/creating-a-subgraph/assemblyscript-mappings.mdx new file mode 100644 index 000000000000..2ac894695fe1 --- /dev/null +++ b/website/pages/de/developing/creating-a-subgraph/assemblyscript-mappings.mdx @@ -0,0 +1,113 @@ +--- +title: Writing AssemblyScript Mappings +--- + +## Overview + +The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. + +## Writing Mappings + +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. + +In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: + +```javascript +import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' +import { Gravatar } from '../generated/schema' + +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let id = event.params.id + let gravatar = Gravatar.load(id) + if (gravatar == null) { + gravatar = new Gravatar(id) + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. + +The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`. + +### Recommended IDs for Creating New Entities + +It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities. 
+
+- `transfer.id = event.transaction.hash`
+
+- `let id = event.transaction.hash.concatI32(event.logIndex.toI32())`
+
+- For entities that store aggregated data, e.g., daily trade volumes, the `id` usually contains the day number. Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like:
+
+```typescript
+let dayID = event.block.timestamp.toI32() / 86400
+let id = Bytes.fromI32(dayID)
+```
+
+- Convert constant addresses to `Bytes`.
+
+`const id = Bytes.fromHexString('0xdead...beef')`
+
+There is a [Graph TypeScript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`.
+
+### Handling of entities with identical IDs
+
+When creating and saving a new entity, if an entity with the same ID already exists, the properties of the new entity are always preferred during the merge process. This means that the existing entity will be updated with the values from the new entity.
+
+If a null value is intentionally set for a field in the new entity with the same ID, the existing entity will be updated with the null value.
+
+If no value is set for a field in the new entity with the same ID, the field will result in null as well.
+
+## Code Generation
+
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources.
+
+This is done with:
+
+```sh
+graph codegen [--output-dir <OUTPUT_DIR>] [<WORKSPACE>]
+```
+
+However, in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+
+```sh
+# Yarn
+yarn codegen
+
+# NPM
+npm run codegen
+```
+
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<CONTRACT_NAME>.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with:
+
+```javascript
+import {
+  // The contract class:
+  Gravity,
+  // The events classes:
+  NewGravatar,
+  UpdatedGravatar,
+} from '../generated/Gravity/Gravity'
+```
+
+In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with:
+
+```javascript
+import { Gravatar } from '../generated/schema'
+```
+
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
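+
+For example, assuming your subgraph's `package.json` wires the usual `build` script to `graph build` (scaffolded projects typically include one), such a check might look like this:
+
+```sh
+# Yarn
+yarn build
+
+# NPM
+npm run build
+```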
diff --git a/website/pages/de/developing/creating-a-subgraph/install-the-cli.mdx b/website/pages/de/developing/creating-a-subgraph/install-the-cli.mdx
new file mode 100644
index 000000000000..91922e0319e7
--- /dev/null
+++ b/website/pages/de/developing/creating-a-subgraph/install-the-cli.mdx
@@ -0,0 +1,119 @@
+---
+title: Install the Graph CLI
+---
+
+> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/network/curating/).
+
+## Overview
+
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/creating-a-subgraph/subgraph-manifest/) and compiles the [mappings](/creating-a-subgraph/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
+
+## Getting Started
+
+### Install the Graph CLI
+
+The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.
+
+Run one of the following commands on your local machine:
+
+#### Using [npm](https://www.npmjs.com/)
+
+```bash
+npm install -g @graphprotocol/graph-cli@latest
+```
+
+#### Using [yarn](https://yarnpkg.com/)
+
+```bash
+yarn global add @graphprotocol/graph-cli
+```
+
+The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started.
+
+## Create a Subgraph
+
+### From an Existing Contract
+
+The following command creates a subgraph that indexes all events of an existing contract:
+
+```sh
+graph init \
+  --product subgraph-studio \
+  --from-contract <CONTRACT_ADDRESS> \
+  [--network <ETHEREUM_NETWORK>] \
+  [--abi <FILE>] \
+  [<SUBGRAPH_SLUG>]
+```
+
+- The command tries to retrieve the contract ABI from Etherscan.
+
+  - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI.
+
+- If any of the optional arguments are missing, it guides you through an interactive form.
+
+- The `<SUBGRAPH_SLUG>` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page.
+
+### From an Example Subgraph
+
+The following command initializes a new project from an example subgraph:
+
+```sh
+graph init --from-example=example-subgraph
+```
+
+- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+
+- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
+
+### Add New `dataSources` to an Existing Subgraph
+
+`dataSources` are key components of subgraphs.
They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+
+Recent versions of the Graph CLI support adding new `dataSources` to an existing subgraph through the `graph add` command:
+
+```sh
+graph add <address> [<subgraph-path>]
+
+Options:
+
+  --abi              Path to the contract ABI (default: download from Etherscan)
+  --contract-name    Name of the contract (default: Contract)
+  --merge-entities   Whether to merge entities with the same name (default: false)
+  --network-file     Networks config file path (default: "./networks.json")
+```
+
+#### Specifics
+
+The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and create a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts.
+
+- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts:
+
+  - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`.
+
+  - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`.
+
+- The contract `address` will be written to the `networks.json` for the relevant network.
+
+> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`.
+
+### Getting The ABIs
+
+The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files:
+
+- If you are building your own project, you will likely have access to your most current ABIs.
+- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
+- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail.
+
+## SpecVersion Releases
+
+| Version | Release notes |
+| :-: | --- |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/pages/de/developing/creating-a-subgraph/ql-schema.mdx b/website/pages/de/developing/creating-a-subgraph/ql-schema.mdx
new file mode 100644
index 000000000000..90036d1bfab9
--- /dev/null
+++ b/website/pages/de/developing/creating-a-subgraph/ql-schema.mdx
@@ -0,0 +1,312 @@
+---
+title: The Graph QL Schema
+---
+
+## Overview
+
+The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language.
+
+> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/querying/graphql-api/) section.
+
+### Defining Entities
+
+Before defining entities, it is important to take a step back and think about how your data is structured and linked.
+
+- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform.
+- It may be useful to imagine entities as "objects containing data", rather than as events or functions.
+- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type.
+- Each type that should be an entity is required to be annotated with an `@entity` directive.
+- By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity.
+  - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`.
+  - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. Immutable entities are much faster to write and to query, so they should be used whenever possible.
+
+#### Good Example
+
+The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined.
+
+```graphql
+type Gravatar @entity(immutable: true) {
+  id: Bytes!
+  owner: Bytes
+  displayName: String
+  imageUrl: String
+  accepted: Boolean
+}
+```
+
+#### Bad Example
+
+The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1.
+
+```graphql
+type GravatarAccepted @entity {
+  id: Bytes!
+  owner: Bytes
+  displayName: String
+  imageUrl: String
+}
+
+type GravatarDeclined @entity {
+  id: Bytes!
+  owner: Bytes
+  displayName: String
+  imageUrl: String
+}
+```
+
+#### Optional and Required Fields
+
+Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required scalar field is not set, you get an error when you try to store the entity. If a required field references another entity that is missing, you get this error:
+
+```
+Null value resolved for non-null field 'name'
+```
+
+Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` ids will be faster to write and query than those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`.
+
+For some entity types the `id` for `Bytes!` is constructed from the ids of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id)` to form the id from the ids of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used.
The concatenation is guaranteed to produce unique ids as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`.
+
+### Built-In Scalar Types
+
+#### GraphQL Supported Scalars
+
+The following scalars are supported in the GraphQL API:
+
+| Type | Description |
+| --- | --- |
+| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. |
+| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
+| `Boolean` | Scalar for `boolean` values. |
+| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. |
+| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. |
+| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
+| `BigDecimal` | High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
+| `Timestamp` | An `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. |
+
+### Enums
+
+You can also create enums within a schema. Enums have the following syntax:
+
+```graphql
+enum TokenStatus {
+  OriginalOwner
+  SecondOwner
+  ThirdOwner
+}
+```
+
+Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. On the entity itself, such a field would declare the enum as its type, for example `tokenStatus: TokenStatus!`.
+
+More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/).
+
+### Entity Relationships
+
+An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship.
+
+Relationships are defined on entities just like any other field except that the type specified is that of another entity.
+
+#### One-To-One Relationships
+
+Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type:
+
+```graphql
+type Transaction @entity(immutable: true) {
+  id: Bytes!
+  transactionReceipt: TransactionReceipt
+}
+
+type TransactionReceipt @entity(immutable: true) {
+  id: Bytes!
+  transaction: Transaction
+}
+```
+
+#### One-To-Many Relationships
+
+Define a `TokenBalance` entity type with a required one-to-many relationship with a `Token` entity type:
+
+```graphql
+type Token @entity(immutable: true) {
+  id: Bytes!
+}
+
+type TokenBalance @entity {
+  id: Bytes!
+  amount: Int!
+  token: Token!
+}
+```
+
+### Reverse Lookups
+
+Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity.
For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. + +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. + +#### Example + +We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! + tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +#### Many-To-Many Relationships + +For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. + +#### Example + +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [User!]! +} + +type User @entity { + id: Bytes! + name: String! + organizations: [Organization!]! @derivedFrom(field: "members") +} +``` + +A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [UserOrganization!]! @derivedFrom(field: "organization") +} + +type User @entity { + id: Bytes! + name: String! + organizations: [UserOrganization!] @derivedFrom(field: "user") +} + +type UserOrganization @entity { + id: Bytes! # Set to `user.id.concat(organization.id)` + user: User! + organization: Organization! +} +``` + +This approach requires that queries descend into one additional level to retrieve, for example, the organizations for users: + +```graphql +query usersWithOrganizations { + users { + organizations { + # this is a UserOrganization entity + organization { + name + } + } + } +} +``` + +This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. + +### Adding comments to the schema + +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: + +```graphql +type MyFirstEntity @entity { + # unique identifier and primary key of the entity + id: Bytes! + address: Bytes! +} +``` + +## Defining Fulltext Search Fields + +Fulltext search queries filter and rank entities based on a text search input. 
Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing them to the indexed text data. + +A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. + +To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. + +```graphql +type _Schema_ + @fulltext( + name: "bandSearch" + language: en + algorithm: rank + include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }] + ) + +type Band @entity { + id: Bytes! + name: String! + description: String! + bio: String + wallet: Address + labels: [Label!]! + discography: [Album!]! + members: [Musician!]! +} +``` + +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/querying/graphql-api#queries) for a description of the fulltext search API and more example usage. + +```graphql +query { + bandSearch(text: "breaks & electro & detroit") { + id + name + description + wallet + } +} +``` + +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. + +## Languages supported + +Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary from language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". + +Supported language dictionaries: + +| Code | Dictionary | +| ------ | ---------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portuguese | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | + +### Ranking Algorithms + +Supported algorithms for ordering results: + +| Algorithm | Description | +| ------------- | ----------------------------------------------------------------------- | +| rank | Use the match quality (0-1) of the fulltext query to order the results. | +| proximityRank | Similar to rank but also includes the proximity of the matches. | diff --git a/website/pages/de/developing/creating-a-subgraph/starting-your-subgraph.mdx b/website/pages/de/developing/creating-a-subgraph/starting-your-subgraph.mdx new file mode 100644 index 000000000000..5127f01632aa --- /dev/null +++ b/website/pages/de/developing/creating-a-subgraph/starting-your-subgraph.mdx @@ -0,0 +1,21 @@ +--- +title: Starting Your Subgraph +--- + +## Overview + +The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. + +When you create a [subgraph](/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. 
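+
+For example, once a subgraph is live, a consumer can fetch exactly the fields it needs with a GraphQL query along these lines (the `tokens` entity and its fields are purely illustrative, not part of any specific subgraph):
+
+```graphql
+{
+  tokens(first: 5) {
+    id
+    owner
+  }
+}
+```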
+
+Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs.
+
+### Start Building
+
+Start the process and build a subgraph that matches your needs:
+
+1. [Install the CLI](/developing/creating-a-subgraph/install-the-cli/) - Set up your infrastructure
+2. [Subgraph Manifest](/developing/creating-a-subgraph/subgraph-manifest/) - Understand a subgraph's key component
+3. [The Graph QL Schema](/developing/creating-a-subgraph/ql-schema/) - Write your schema
+4. [Writing AssemblyScript Mappings](/developing/creating-a-subgraph/assemblyscript-mappings/) - Write your mappings
+5. [Advanced Features](/developing/creating-a-subgraph/advanced/) - Customize your subgraph with advanced features
diff --git a/website/pages/de/developing/creating-a-subgraph/subgraph-manifest.mdx b/website/pages/de/developing/creating-a-subgraph/subgraph-manifest.mdx
new file mode 100644
index 000000000000..0adbd44216a0
--- /dev/null
+++ b/website/pages/de/developing/creating-a-subgraph/subgraph-manifest.mdx
@@ -0,0 +1,534 @@
+---
+title: Subgraph Manifest
+---
+
+## Overview
+
+The subgraph manifest, `subgraph.yaml`, defines the smart contracts and the network your subgraph will index, the events from those contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+
+The **subgraph definition** consists of the following files:
+
+- `subgraph.yaml`: Contains the subgraph manifest
+
+- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+
+- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide)
+
+### Subgraph Capabilities
+
+A single subgraph can:
+
+- Index data from multiple smart contracts (but not multiple networks).
+
+- Index data from IPFS files using File Data Sources.
+
+- Add an entry for each contract that requires indexing to the `dataSources` array.
+
+The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+ +For the example subgraph listed above, `subgraph.yaml` is: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +repository: https://github.com/graphprotocol/graph-tooling +schema: + file: ./schema.graphql +indexerHints: + prune: auto +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + abi: Gravity + startBlock: 6175244 + endBlock: 7175245 + context: + foo: + type: Bool + data: true + bar: + type: String + data: 'bar' + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + - event: UpdatedGravatar(uint256,address,string,string) + handler: handleUpdatedGravatar + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCall + filter: + kind: call + file: ./src/mapping.ts +``` + +## Subgraph Entries + +> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/developing/creating-a-subgraph/ql-schema/). + +The important entries to update for the manifest are: + +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. + +- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. + +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. + +- `features`: a list of all used [feature](#experimental-features) names. + +- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. + +- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. + +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. + +- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. + +- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. + +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. + +- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. 
+
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping (./src/mapping.ts in the example) that transform these events into entities in the store.
+
+- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
+
+- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
+
+A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
+
+## Event Handlers
+
+Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic.
+
+### Defining an Event Handler
+
+An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: dev
+    source:
+      address: '0x731a10897d267e19b34503ad902d0a29173ba4b1'
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      entities:
+        - Gravatar
+        - Transaction
+      abis:
+        - name: Gravity
+          file: ./abis/Gravity.json
+      eventHandlers:
+        - event: Approval(address,address,uint256)
+          handler: handleApproval
+        - event: Transfer(address,address,uint256)
+          handler: handleTransfer
+          topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic.
+```
+
+## Call Handlers
+
+While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
+
+Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract.
+
+> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers.
These are far more performant than call handlers, and are supported on every EVM network.
+
+### Defining a Call Handler
+
+To define a call handler in your manifest, simply add a `callHandlers` array under the data source you would like to subscribe to.
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    source:
+      address: '0x731a10897d267e19b34503ad902d0a29173ba4b1'
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      entities:
+        - Gravatar
+        - Transaction
+      abis:
+        - name: Gravity
+          file: ./abis/Gravity.json
+      callHandlers:
+        - function: createGravatar(string,string)
+          handler: handleCreateGravatar
+```
+
+The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract.
+
+### Mapping Function
+
+Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+
+```typescript
+import { CreateGravatarCall } from '../generated/Gravity/Gravity'
+import { Transaction } from '../generated/schema'
+
+export function handleCreateGravatar(call: CreateGravatarCall): void {
+  let id = call.transaction.hash
+  let transaction = new Transaction(id)
+  transaction.displayName = call.inputs._displayName
+  transaction.imageUrl = call.inputs._imageUrl
+  transaction.save()
+}
+```
+
+The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`.
+
+## Block Handlers
+
+In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a subgraph can run a function after every block or after blocks that match a pre-defined filter.
+
+### Supported Filters
+
+#### Call Filter
+
+```yaml
+filter:
+  kind: call
+```
+
+_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._
+
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.
+
+The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: dev
+    source:
+      address: '0x731a10897d267e19b34503ad902d0a29173ba4b1'
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      entities:
+        - Gravatar
+        - Transaction
+      abis:
+        - name: Gravity
+          file: ./abis/Gravity.json
+      blockHandlers:
+        - handler: handleBlock
+        - handler: handleBlockWithCallToContract
+          filter:
+            kind: call
+```
+
+#### Polling Filter
+
+> **Requires `specVersion` >= 0.0.8**
+>
+> **Note:** Polling filters are only available on dataSources of `kind: ethereum`.
+ +```yaml +blockHandlers: + - handler: handleBlock + filter: + kind: polling + every: 10 +``` + +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. + +#### Once Filter + +> **Requires `specVersion` >= 0.0.8** +> +> **Note:** Once filters are only available on dataSources of `kind: ethereum`. + +```yaml +blockHandlers: + - handler: handleOnce + filter: + kind: once +``` + +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. + +```ts +export function handleOnce(block: ethereum.Block): void { + let data = new InitialData(Bytes.fromUTF8('initial')) + data.data = 'Setup data here' + data.save() +} +``` + +### Mapping Function + +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. + +```typescript +import { ethereum } from '@graphprotocol/graph-ts' + +export function handleBlock(block: ethereum.Block): void { + let id = block.hash + let entity = new Block(id) + entity.save() +} +``` + +## Anonymous Events + +If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example: + +```yaml +eventHandlers: + - event: LogNote(bytes4,address,bytes32,bytes32,uint256,bytes) + topic0: '0x644843f351d3fba4abcd60109eaff9f54bac8fb8ccf0bab941009c21df21cf31' + handler: handleGive +``` + +An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. + +## Transaction Receipts in Event Handlers + +Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. + +To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. + +```yaml +eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + receipt: true +``` + +Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead. + +## Order of Triggering Handlers + +The triggers for a data source within a block are ordered using the following process: + +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. + +These ordering rules are subject to change. + +> **Note:** When new [dynamic data source](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. 
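+
+As a brief illustration of the transaction receipt support described above, a handler might read the receipt only after checking for `null`; the `GasUsed` entity, its `gasUsed` field, and the handler name below are illustrative assumptions rather than part of the example subgraph:
+
+```typescript
+import { NewGravatar } from '../generated/Gravity/Gravity'
+import { GasUsed } from '../generated/schema' // hypothetical entity assumed to be declared in schema.graphql
+
+export function handleNewGravatarWithReceipt(event: NewGravatar): void {
+  // `event.receipt` is only populated when the handler is declared with
+  // `receipt: true` in the manifest; otherwise it is null.
+  if (event.receipt != null) {
+    let receipt = event.receipt!
+    let entry = new GasUsed(event.transaction.hash)
+    entry.gasUsed = receipt.gasUsed
+    entry.save()
+  }
+}
+```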
+ +## Data Source Templates + +A common pattern in EVM-compatible smart contracts is the use of registry or factory contracts, where one contract creates, manages, or references an arbitrary number of other contracts that each have their own state and events. + +The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. + +### Data Source for the Main Contract + +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.org) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on-chain by the factory contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: Factory + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - Directory + abis: + - name: Factory + file: ./abis/factory.json + eventHandlers: + - event: NewExchange(address,address) + handler: handleNewExchange +``` + +### Data Source Templates for Dynamically Created Contracts + +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a pre-defined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + # ... other source fields for the main contract ... +templates: + - name: Exchange + kind: ethereum/contract + network: mainnet + source: + abi: Exchange + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/exchange.ts + entities: + - Exchange + abis: + - name: Exchange + file: ./abis/exchange.json + eventHandlers: + - event: TokenPurchase(address,uint256,uint256) + handler: handleTokenPurchase + - event: EthPurchase(address,uint256,uint256) + handler: handleEthPurchase + - event: AddLiquidity(address,uint256,uint256) + handler: handleAddLiquidity + - event: RemoveLiquidity(address,uint256,uint256) + handler: handleRemoveLiquidity +``` + +### Instantiating a Data Source Template + +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + // Start indexing the exchange; `event.params.exchange` is the + // address of the new exchange contract + Exchange.create(event.params.exchange) +} +``` + +> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. 
+> +> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. + +### Data Source Context + +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + let context = new DataSourceContext() + context.setString('tradingPair', event.params.tradingPair) + Exchange.createWithContext(event.params.exchange, context) +} +``` + +Inside a mapping of the `Exchange` template, the context can then be accessed: + +```typescript +import { dataSource } from '@graphprotocol/graph-ts' + +let context = dataSource.context() +let tradingPair = context.getString('tradingPair') +``` + +There are setters and getters like `setString` and `getString` for all value types. + +## Start Blocks + +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. + +```yaml +dataSources: + - kind: ethereum/contract + name: ExampleSource + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: ExampleContract + startBlock: 6627917 + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - User + abis: + - name: ExampleContract + file: ./abis/ExampleContract.json + eventHandlers: + - event: NewEvent(address,address) + handler: handleNewEvent +``` + +> **Note:** The contract creation block can be quickly looked up on Etherscan: +> +> 1. Search for the contract by entering its address in the search bar. +> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 3. Load the transaction details page where you'll find the start block for that contract. + +## Indexer Hints + +The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. + +> This feature is available from `specVersion: 1.0.0` + +### Prune + +`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: + +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. + +``` + indexerHints: + prune: auto +``` + +> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. 
+ +History as of a given block is required for: + +- [Time travel queries](/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history +- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block +- Rewinding the subgraph back to that block + +If historical data as of the block has been pruned, the above capabilities will not be available. + +> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. + +For subgraphs leveraging [time travel queries](/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: + +To retain a specific amount of historical data: + +``` + indexerHints: + prune: 1000 # Replace 1000 with the desired number of blocks to retain +``` + +To preserve the complete history of entity states: + +``` +indexerHints: + prune: never +``` diff --git a/website/pages/de/developing/developer-faqs.mdx b/website/pages/de/developing/developer-faqs.mdx index b4af2c711bc8..244aae52d0f4 100644 --- a/website/pages/de/developing/developer-faqs.mdx +++ b/website/pages/de/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Developer FAQs --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -It is not possible to delete subgraphs once they are created. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. 
Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. How are templates different from data sources? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) 
upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -You can run the following command: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. How do I call a contract function or access a public state variable from my subgraph mappings? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +You can run the following command: -## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. 
You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Yes, you can [delete](/managing/delete-a-subgraph/) and [transfer](/managing/transfer-a-subgraph/) your subgraph. -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Yes. You can do this by importing `graph-ts` as per the example below: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -102,19 +121,7 @@ Yes! 
Try the following command, substituting "organization/subgraphName" with th curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. 
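For the client-side route, a rough sketch using the `@graphql-tools` packages might look like the following. The package names, helper signatures, and endpoint URLs here are assumptions for illustration and may differ between versions:

```typescript
import { print } from "graphql"
import { stitchSchemas } from "@graphql-tools/stitch"
import { introspectSchema } from "@graphql-tools/wrap"

// Minimal executor that forwards GraphQL operations to an HTTP endpoint
const makeExecutor = (url: string) =>
  async ({ document, variables }: { document: any; variables?: any }) => {
    const response = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query: print(document), variables }),
    })
    return response.json()
  }

export async function buildStitchedSchema() {
  // Placeholder endpoints: one subgraph plus any other GraphQL API you want to merge
  const subgraphExecutor = makeExecutor("https://example.com/subgraphs/name/org/my-subgraph")
  const otherExecutor = makeExecutor("https://example.com/other-graphql-api")

  // Combine both schemas so the dapp can query them as one
  return stitchSchemas({
    subschemas: [
      { schema: await introspectSchema(subgraphExecutor), executor: subgraphExecutor },
      { schema: await introspectSchema(otherExecutor), executor: otherExecutor },
    ],
  })
}
```

The same idea works behind a small proxy service: stitch the schemas on a server and expose a single endpoint to the dapp.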
-Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/de/developing/graph-ts/api.mdx b/website/pages/de/developing/graph-ts/api.mdx index c2f994f31006..649dc4d73cd5 100644 --- a/website/pages/de/developing/graph-ts/api.mdx +++ b/website/pages/de/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API Reference @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters.
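Building on the handler above, a sketch like the following shows the kind of access the generated type provides. The entity fields, the `from`/`to`/`amount` parameter names, and the import paths are assumptions for this sketch, not taken from the generated code:

```typescript
import { Transfer as TransferEvent } from "../generated/Token/Token" // hypothetical codegen path
import { Transfer } from "../generated/schema" // hypothetical generated entity

export function handleTransfer(event: TransferEvent): void {
  // "<tx hash>-<log index>" stays unique even with several Transfer events per transaction
  let entity = new Transfer(event.transaction.hash.toHex() + "-" + event.logIndex.toString())

  // Event parameters, typed by `graph codegen`
  entity.from = event.params.from
  entity.to = event.params.to
  entity.amount = event.params.amount

  // Data from the parent block and transaction
  entity.blockTimestamp = event.block.timestamp
  entity.txHash = event.transaction.hash

  entity.save()
}
```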
-Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Looking up entities created withing a block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. 
This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph Node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Encoding/Decoding ABI diff --git a/website/pages/de/developing/supported-networks.mdx b/website/pages/de/developing/supported-networks.mdx index 7c2d8d858261..797202065e99 100644 --- a/website/pages/de/developing/supported-networks.mdx +++ b/website/pages/de/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/de/developing/unit-testing-framework.mdx b/website/pages/de/developing/unit-testing-framework.mdx index 0898dbdd5638..553eec2157b3 100644 --- a/website/pages/de/developing/unit-testing-framework.mdx +++ b/website/pages/de/developing/unit-testing-framework.mdx @@ -2,23 +2,32 @@ title: Unit Testing Framework --- -Matchstick is a unit testing framework, developed by [LimeChain](https://limechain.tech/), that enables subgraph developers to test their mapping logic in a sandboxed environment and deploy their subgraphs with confidence! +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. + +## Benefits of Using Matchstick + +- It's written in Rust and optimized for high performance. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and more.
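To give a feel for what a Matchstick test looks like before diving into setup, here is a minimal sketch. The `Gravatar` entity and its `displayName` field are placeholders from a hypothetical schema generated by `graph codegen`:

```typescript
import { assert, clearStore, test } from "matchstick-as/assembly/index"
import { Gravatar } from "../generated/schema" // hypothetical generated entity

test("stores a Gravatar with the expected display name", () => {
  // Arrange: create and save an entity, just as a mapping handler would
  let gravatar = new Gravatar("0x1")
  gravatar.displayName = "Alice"
  gravatar.save()

  // Assert against the sandboxed store
  assert.fieldEquals("Gravatar", "0x1", "displayName", "Alice")

  // Keep tests isolated from one another
  clearStore()
})
```

Tests like this are run with `graph test`, as described in the sections below.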
## Getting Started -### Install dependencies +### Install Dependencies -In order to use the test helper methods and run the tests, you will need to install the following dependencies: +In order to use the test helper methods and run tests, you need to install the following dependencies: ```sh yarn add --dev matchstick-as ``` -❗ `graph-node` depends on PostgreSQL, so if you don't already have it, you will need to install it. We highly advise using the commands below as adding it in any other way may cause unexpected errors! +### Install PostgreSQL + +`graph-node` depends on PostgreSQL, so if you don't already have it, then you will need to install it. + +> Note: It's highly recommended to use the commands below to avoid unexpected errors. -#### MacOS +#### Using MacOS -Postgres installation command: +Installation command: ```sh brew install postgresql @@ -30,15 +39,15 @@ Create a symlink to the latest libpq.5.lib _You may need to create this dir firs ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib ``` -#### Linux +#### Using Linux -Postgres installation command (depends on your distro): +Installation command (depends on your distro): ```sh sudo apt install postgresql ``` -### WSL (Windows Subsystem for Linux) +### Using WSL (Windows Subsystem for Linux) You can use Matchstick on WSL both using the Docker approach and the binary approach. As WSL can be a bit tricky, here's a few tips in case you encounter issues like @@ -76,7 +85,7 @@ And finally, do not use `graph test` (which uses your global installation of gra } ``` -### Usage +### Using Matchstick To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). @@ -1384,6 +1393,10 @@ This means you have used `console.log` in your code, which is not supported by A The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. +## Additional Resources + +For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). + ## Feedback If you have any questions, feedback, feature requests or just want to reach out, the best place would be The Graph Discord where we have a dedicated channel for Matchstick, called 🔥| unit-testing. 
diff --git a/website/pages/de/docsearch.json b/website/pages/de/docsearch.json index 9f300c69acb0..366e6903069d 100644 --- a/website/pages/de/docsearch.json +++ b/website/pages/de/docsearch.json @@ -7,36 +7,36 @@ "searchBox": { "resetButtonTitle": "Die Abfrage löschen", "resetButtonAriaLabel": "Die Abfrage löschen", - "cancelButtonText": "Cancel", + "cancelButtonText": "Abbrechen", "cancelButtonAriaLabel": "Anuluj" }, "startScreen": { - "recentSearchesTitle": "Recent", - "noRecentSearchesText": "No recent searches", - "saveRecentSearchButtonTitle": "Save this search", - "removeRecentSearchButtonTitle": "Remove this search from history", - "favoriteSearchesTitle": "Favorite", - "removeFavoriteSearchButtonTitle": "Remove this search from favorites" + "recentSearchesTitle": "Aktuelle", + "noRecentSearchesText": "Keine aktuellen Suchanfragen", + "saveRecentSearchButtonTitle": "Diese Suche speichern", + "removeRecentSearchButtonTitle": "Diese Suche aus dem Verlauf entfernen", + "favoriteSearchesTitle": "Favorit", + "removeFavoriteSearchButtonTitle": "Die Suche aus Favoriten entfernen" }, "errorScreen": { - "titleText": "Unable to fetch results", - "helpText": "You might want to check your network connection." + "titleText": "Ergebnis kann nicht abgerufen werden", + "helpText": "Sie sollten Ihre Netzwerkverbindung überprüfen." }, "footer": { - "selectText": "to select", - "selectKeyAriaLabel": "Enter key", - "navigateText": "to navigate", - "navigateUpKeyAriaLabel": "Arrow up", - "navigateDownKeyAriaLabel": "Arrow down", - "closeText": "to close", - "closeKeyAriaLabel": "Escape key", - "searchByText": "Search by" + "selectText": "zur Auswahl", + "selectKeyAriaLabel": "Enter-Taste", + "navigateText": "zum Navigieren", + "navigateUpKeyAriaLabel": "Pfeil nach oben", + "navigateDownKeyAriaLabel": "Pfeil nach unten", + "closeText": "schließen", + "closeKeyAriaLabel": "Escape-Taste", + "searchByText": "Suche nach" }, "noResultsScreen": { - "noResultsText": "No results for", - "suggestedQueryText": "Try searching for", - "reportMissingResultsText": "Believe this query should return results?", - "reportMissingResultsLinkText": "Let us know." + "noResultsText": "Kein Ergebnis für", + "suggestedQueryText": "Suchen Sie nach", + "reportMissingResultsText": "Glauben Sie, dass diese Abfrage Ergebnisse liefern sollte?", + "reportMissingResultsLinkText": "Informieren Sie uns." } } } diff --git a/website/pages/de/glossary.mdx b/website/pages/de/glossary.mdx index cd24a22fd4d5..bdeff7281023 100644 --- a/website/pages/de/glossary.mdx +++ b/website/pages/de/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. 
-- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -22,19 +20,19 @@ title: Glossary 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. -- **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. +- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **Subgraph Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. @@ -46,15 +44,15 @@ title: Glossary 1. 
**Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. 
+- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. - **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. @@ -76,12 +74,8 @@ title: Glossary - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. - -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. 
- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/de/index.json b/website/pages/de/index.json index 79b15c8d3ca4..a3d38b854804 100644 --- a/website/pages/de/index.json +++ b/website/pages/de/index.json @@ -1,5 +1,5 @@ { - "title": "Get Started", + "title": "Los geht’s", "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", "shortcuts": { "aboutTheGraph": { @@ -56,16 +56,12 @@ "graphExplorer": { "title": "Graph Explorer", "description": "Explore subgraphs and interact with the protocol" - }, - "hostedService": { - "title": "Hosted Service", - "description": "Create and explore subgraphs on the hosted service" } } }, "supportedNetworks": { "title": "Supported Networks", - "description": "The Graph supports the following networks.", - "footer": "For more details, see the {0} page." + "description": "The Graph unterstützt folgende Netzwerke.", + "footer": "Weitere Einzelheiten finden Sie auf der Seite {0}." } } diff --git a/website/pages/de/managing/delete-a-subgraph.mdx b/website/pages/de/managing/delete-a-subgraph.mdx index 68ef0a37da75..1807741026ae 100644 --- a/website/pages/de/managing/delete-a-subgraph.mdx +++ b/website/pages/de/managing/delete-a-subgraph.mdx @@ -9,7 +9,9 @@ Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). ## Step-by-Step 1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). + 2. Click on the three-dots to the right of the "publish" button. + 3. Click on the option to "delete this subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) diff --git a/website/pages/de/managing/transfer-a-subgraph.mdx b/website/pages/de/managing/transfer-a-subgraph.mdx index c4060284d5d9..ed29ea904e5b 100644 --- a/website/pages/de/managing/transfer-a-subgraph.mdx +++ b/website/pages/de/managing/transfer-a-subgraph.mdx @@ -1,65 +1,42 @@ --- -title: Transfer and Deprecate a Subgraph +title: Einen Subgraph übertragen --- -## Transferring ownership of a subgraph +Subgraphs, die im dezentralen Netzwerk veröffentlicht werden, haben eine NFT, die auf die Adresse geprägt wird, die den Subgraph veröffentlicht hat. Die NFT basiert auf dem Standard ERC721, der Überweisungen zwischen Konten im The Graph Network erleichtert. -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +## Erinnerungshilfen -**Please note the following:** +- Wer im Besitz der NFT ist, kontrolliert den Subgraph. +- Wenn der Eigentümer beschließt, das NFT zu verkaufen oder zu übertragen, kann er diesen Subgraph im Netz nicht mehr bearbeiten oder aktualisieren. +- Sie können die Kontrolle über einen Subgraph leicht an eine Multisig übertragen. +- Ein Community-Mitglied kann einen Subgraph im Namen einer DAO erstellen. -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. 
+## Betrachten Sie Ihren Subgraph als NFT -### View your subgraph as an NFT - -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +Um Ihren Subgraph als NFT zu betrachten, können Sie einen NFT-Marktplatz wie **OpenSea** besuchen: ``` https://opensea.io/your-wallet-address ``` -Or a wallet explorer like **Rainbow.me**: +Oder ein Wallet-Explorer wie **Rainbow.me**: ``` https://rainbow.me/your-wallet-addres ``` -### Step-by-Step - -To transfer ownership of a subgraph, do the following: - -1. Use the UI built into Subgraph Studio: - - ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) - -2. Choose the address that you would like to transfer the subgraph to: - - ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) - -Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: - -![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) - -## Deprecating a subgraph +## Schritt für Schritt -Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. +Um das Eigentum an einem Subgraph zu übertragen, gehen Sie wie folgt vor: -### Step-by-Step +1. Verwenden Sie die in Subgraph Studio integrierte Benutzeroberfläche: -To deprecate your subgraph, do the following: + ![Subgraph-Besitzübertragung](/img/subgraph-ownership-transfer-1.png) -1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). -2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. -3. Your subgraph will no longer appear in searches on Graph Explorer. +2. Wählen Sie die Adresse, an die Sie den Subgraph übertragen möchten: -**Please note the following:** + ![Subgraph-Besitzübertragung](/img/subgraph-ownership-transfer-2.png) -- The owner's wallet should call the `deprecateSubgraph` function. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deprecated subgraphs will show an error message. +Optional können Sie auch die integrierte Benutzeroberfläche von NFT-Marktplätzen wie OpenSea verwenden: -> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. +![Subgraph-Eigentumsübertragung vom NFT-Marktplatz](/img/subgraph-ownership-transfer-nft-marketplace.png) diff --git a/website/pages/de/network/benefits.mdx b/website/pages/de/network/benefits.mdx index e80dd34993af..f9aec6eee091 100644 --- a/website/pages/de/network/benefits.mdx +++ b/website/pages/de/network/benefits.mdx @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [upgrade your subgraph to The Graph's decentralized network](/cookbook/upgrading-a-subgraph). +Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/quick-start). 
diff --git a/website/pages/de/network/curating.mdx b/website/pages/de/network/curating.mdx index fb2107c53884..a5a63f3e2751 100644 --- a/website/pages/de/network/curating.mdx +++ b/website/pages/de/network/curating.mdx @@ -8,9 +8,7 @@ Curators are critical to The Graph's decentralized economy. They use their knowl Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. -Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. - -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +16,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). 
@@ -30,11 +28,11 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dApps. One dApp might need to regularly update the subgraph with new features. Another dApp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,8 +47,8 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Risks 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. 
For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. @@ -63,7 +61,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th ### 2. How do I decide which subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dApp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: - Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. @@ -78,50 +76,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. Can I sell my curation shares? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. 
-## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Price per shares](/img/price-per-share.png) - -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: - -![Bonding curve](/img/bonding-curve.png) - -Consider we have two curators that mint shares for a subgraph: - -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. 
In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. - Still confused? Check out our Curation video guide below: diff --git a/website/pages/de/network/delegating.mdx b/website/pages/de/network/delegating.mdx index 81824234e072..eda60369e00a 100644 --- a/website/pages/de/network/delegating.mdx +++ b/website/pages/de/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Delegator Guide -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly. The Ethereum community provides a comprehensive resource regarding wallets through the following link ([source](https://ethereum.org/en/wallets/)). There are three sections in this guide: @@ -24,15 +34,19 @@ Listed below are the main risks of being a Delegator in the protocol. Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. 
This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### The delegation unbonding period Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
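To make the numbers above concrete, here is a minimal sketch of the 0.5% delegation tax and a rough break-even estimate. The reward rate used is a hypothetical assumption for illustration only; actual returns depend on the Indexer's parameters and on network activity.

```typescript
// Illustrative sketch only; not part of the protocol or any official tooling.
// `estimatedAnnualRewardRate` is a hypothetical effective reward rate; real
// returns vary by Indexer and over time.

const DELEGATION_TAX = 0.005; // 0.5% burned on every delegation

function delegationAfterTax(amountGrt: number): number {
  return amountGrt * (1 - DELEGATION_TAX);
}

function daysToEarnBackTax(amountGrt: number, estimatedAnnualRewardRate: number): number {
  const taxPaid = amountGrt * DELEGATION_TAX;
  const dailyRewards = delegationAfterTax(amountGrt) * (estimatedAnnualRewardRate / 365);
  return taxPaid / dailyRewards;
}

console.log(delegationAfterTax(1_000)); // 995: delegating 1,000 GRT burns 5 GRT
console.log(daysToEarnBackTax(1_000, 0.1).toFixed(1)); // ~18.3 days at an assumed 10% annual rate
```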
![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day @@ -41,47 +55,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Choosing a trustworthy Indexer with a fair reward payout for Delegators -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. The bottom one is giving Delegators ~83%.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### Calculating Delegators expected return +## Calculating Delegators Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- A technical Delegator can also look at the Indexer's ability to use the Delegated tokens available to them. If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### Considering the query fee cut and indexing fee cut -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the Delegators are getting.
The formula is: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) ### Considering the Indexer's delegation pool -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the Delegator a share of the pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Share formula](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Considering the delegation capacity -Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+ +#### Example -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined, because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Video guide for the network UI +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/de/network/developing.mdx b/website/pages/de/network/developing.mdx index 1b76eb94ccca..6ba33f6d916c 100644 --- a/website/pages/de/network/developing.mdx +++ b/website/pages/de/network/developing.mdx @@ -2,52 +2,29 @@ title: Developing --- -Developers are the demand side of The Graph ecosystem. Developers build subgraphs and publish them to The Graph Network. Then, they query live subgraphs with GraphQL in order to power their applications. +To start coding right away, go to [Developer Quick Start](/quick-start/). -## Subgraph Lifecycle +## Overview -Subgraphs deployed to the network have a defined lifecycle. +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. -### Build locally +On The Graph, you can: -As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs. +1.
Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. -> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible. +### What is GraphQL? -### Deploy to Subgraph Studio +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. +### Developer Actions -### Publish to the Network +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. -When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information. +### What are subgraphs? -### Signal to Encourage Indexing +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. - -### Querying & Application Development - -Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT. - -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. - -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. 
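To give a feel for what such a query looks like in practice, here is a minimal sketch of sending a GraphQL query to a subgraph over HTTP from a dapp. The API key, subgraph ID, and the `tokens` entity are placeholders; substitute the query URL and schema of the subgraph you actually use.

```typescript
// Minimal sketch of querying a subgraph with GraphQL over HTTP.
// The API key, subgraph ID, and `tokens` entity below are placeholders.
const SUBGRAPH_URL =
  "https://gateway.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/<SUBGRAPH_ID>";

const query = `
  {
    tokens(first: 5, orderBy: id) {
      id
      name
    }
  }
`;

async function querySubgraph(): Promise<void> {
  const response = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await response.json();
  if (errors) throw new Error(JSON.stringify(errors));
  console.log(data.tokens); // use the indexed data in your dapp
}

querySubgraph().catch(console.error);
```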
- -### Updating Subgraphs - -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. - -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. - -### Deprecating Subgraphs - -At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators. - -### Diverse Developer Roles - -Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others. - -### Developers and Network Economics - -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +Check out the documentation on [subgraphs](/subgraphs/) to learn specifics. diff --git a/website/pages/de/network/explorer.mdx b/website/pages/de/network/explorer.mdx index bca2993eb0b3..71f5b687a15d 100644 --- a/website/pages/de/network/explorer.mdx +++ b/website/pages/de/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. 
You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +On each subgraph’s dedicated page, you can do the following: - Signal/Un-signal on subgraphs - View more details such as charts, current deployment ID, and other metadata @@ -31,26 +45,32 @@ On each subgraph’s dedicated page, several details are surfaced. These include: ## Participants -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in-depth review of what each tab means for you. +This section provides a bird’s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexers ![Explorer Image 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated +**Besonderheiten** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool.
If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking on the right-hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. To learn more about how to become an Indexer, you can take a look at the [official documentation](/network/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ To learn more about how to become an Indexer, you can take a look at the [offici ### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. 
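As a simplified illustration of the pro-rata idea behind curation shares (not the exact on-chain math, which involves bonding-curve pricing and protocol-defined fee splits), a Curator's slice of whatever query-fee royalties a subgraph generates is proportional to the shares they hold:

```typescript
// Simplified illustration with hypothetical numbers; the real accounting is
// handled on-chain and involves bonding curves and protocol fee parameters.

interface Curator {
  name: string;
  shares: number; // curation shares held on one subgraph
}

const curators: Curator[] = [
  { name: "Curator A", shares: 2_000 },
  { name: "Curator B", shares: 2_000 },
];

// Hypothetical amount of query-fee royalties available to this subgraph's Curators, in GRT.
const curatorRoyalties = 1_000;

const totalShares = curators.reduce((sum, c) => sum + c.shares, 0);

for (const c of curators) {
  const payout = curatorRoyalties * (c.shares / totalShares);
  console.log(`${c.name} receives ${payout} GRT`); // 500 GRT each: equal shares, equal royalties
}
```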
-Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +In the Curator table below, you can see: - The date the Curator started curating - The number of GRT that was deposited ![Explorer Image 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegators -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +In the Delegators table, you can see the active Delegators in the community and important metrics: - The number of Indexers a Delegator is delegating towards - A Delegator’s original delegation - The rewards they have accumulated but have not withdrawn from the protocol - The realized rewards they withdrew from the protocol - Total amount of GRT they have currently in the protocol -- The date they last delegated at +- The date they last delegated -If you want to learn more about how to become a Delegator, look no further!
All you have to do is to head over to the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Network -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +In this section, you can see global KPIs, switch to a per-epoch view, and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### Overview -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has all the current network metrics and some cumulative metrics over time: - The current total network stake - The stake split between the Indexers and their Delegators @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Protocol parameters such as curation reward, inflation rate, and more - Current epoch rewards and fees -A few key details that are worth mentioning: +A few key details to note: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates.
- - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Explorer Image 9](/img/Epoch-Stats.png) ## Your User Profile -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Profile Overview -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +In this section, you can view the following: + +- Any current actions you’ve taken. +- Your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) ### Subgraphs Tab -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: @@ -158,7 +189,9 @@ This section will also include details about your net Indexer rewards and net qu ### Delegating Tab -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegating tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics.
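To connect these delegation metrics with the Delegation Parameters covered earlier, here is a simplified sketch of how an Indexer's reward cut and your share of its delegation pool determine what a Delegator receives. The numbers are hypothetical, and the real on-chain accounting (rebates, the delegation tax, pool rebalancing) is more involved.

```typescript
// Simplified sketch with hypothetical numbers; the on-chain accounting
// (rebate pools, delegation tax, rebalancing) is more involved.

interface IndexerParams {
  indexingRewardCut: number; // portion kept by the Indexer, e.g. 0.8 = 80%
  delegationPool: number;    // total GRT delegated to this Indexer
  periodRewards: number;     // indexing rewards earned over some period, in GRT
}

function delegatorRewards(indexer: IndexerParams, yourDelegation: number): number {
  const sharedWithDelegators = indexer.periodRewards * (1 - indexer.indexingRewardCut);
  return sharedWithDelegators * (yourDelegation / indexer.delegationPool);
}

// An Indexer keeping 80% of rewards but with a small pool can still pay a
// Delegator more than one keeping only 10% but with a very large pool.
const smallPool = { indexingRewardCut: 0.8, delegationPool: 100_000, periodRewards: 10_000 };
const largePool = { indexingRewardCut: 0.1, delegationPool: 5_000_000, periodRewards: 10_000 };

console.log(delegatorRewards(smallPool, 10_000)); // 200 GRT
console.log(delegatorRewards(largePool, 10_000)); // 18 GRT
```

This mirrors the point made in the delegating guide that an Indexer passing on a smaller percentage can still offer the better return once the size of its delegation pool is taken into account.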
diff --git a/website/pages/de/network/indexing.mdx b/website/pages/de/network/indexing.mdx index 68a96556ac68..c620276a90e7 100644 --- a/website/pages/de/network/indexing.mdx +++ b/website/pages/de/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -477,7 +477,7 @@ graph-indexer-agent start \ --index-node-ids default \ --indexer-management-port 18000 \ --metrics-port 7040 \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ --default-allocation-amount 100 \ --register true \ --inject-dai true \ @@ -512,7 +512,7 @@ graph-indexer-service start \ --postgres-username \ --postgres-password \ --postgres-database is_staging \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ | pino-pretty ``` @@ -545,7 +545,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additonal argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Queue allocation action @@ -810,7 +810,7 @@ To set the delegation parameters using Graph Explorer interface, follow these st ### The life of an allocation -After being created by an Indexer a healthy allocation goes through four states. +After being created by an Indexer a healthy allocation goes through two states. - **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. diff --git a/website/pages/de/network/overview.mdx b/website/pages/de/network/overview.mdx index 16214028dbc9..e89a12cde9bd 100644 --- a/website/pages/de/network/overview.mdx +++ b/website/pages/de/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. 
Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Overview +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Besonderheiten + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. diff --git a/website/pages/de/new-chain-integration.mdx b/website/pages/de/new-chain-integration.mdx index 35b2bc7c8b4a..3bb6c774f961 100644 --- a/website/pages/de/new-chain-integration.mdx +++ b/website/pages/de/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. 
Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` *(optionally required for Graph Node to support call handlers)* -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). 
New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. 
Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. 
[Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/de/operating-graph-node.mdx b/website/pages/de/operating-graph-node.mdx index 1db929271a01..0a2dda70086d 100644 --- a/website/pages/de/operating-graph-node.mdx +++ b/website/pages/de/operating-graph-node.mdx @@ -97,9 +97,9 @@ Dieses Setup kann horizontal skaliert werden, indem mehrere Graph-Knoten und meh Eine [TOML](https://toml.io/en/)-Konfigurationsdatei kann verwendet werden, um komplexere Konfigurationen als die in der CLI bereitgestellten festzulegen. Der Speicherort der Datei wird mit dem Befehlszeilenschalter --config übergeben. -> When using a configuration file, it is not possible to use the options --postgres-url, --postgres-secondary-hosts, and --postgres-host-weights. +> Bei Verwendung einer Konfigurationsdatei ist es nicht möglich, die Optionen --postgres-url, --postgres-secondary-hosts und --postgres-host-weights zu verwenden. -A minimal `config.toml` file can be provided; the following file is equivalent to using the --postgres-url command line option: +Eine minimale `config.toml`-Datei kann bereitgestellt werden; Die folgende Datei entspricht der Verwendung der Befehlszeilenoption --postgres-url: ```toml [store] @@ -110,19 +110,19 @@ connection="<.. postgres-url argument ..>" indexers = [ "<.. list of all indexing nodes ..>" ] ``` -Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). +Die vollständige Dokumentation von `config.toml` finden Sie in den [Graph Node Dokumenten](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). -#### Multiple Graph Nodes +#### Mehrere Graph-Knoten Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). -> Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. +> Beachten Sie darauf, dass mehrere Graph-Knoten so konfiguriert werden können, dass sie dieselbe Datenbank verwenden, die ihrerseits durch Sharding horizontal skaliert werden kann. -#### Deployment rules +#### Bereitstellungsregeln Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. 
This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. -Example deployment rule configuration: +Beispielkonfiguration für Bereitstellungsregeln: ```toml [deployment] @@ -150,49 +150,49 @@ indexers = [ ] ``` -Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). +Weitere Informationen zu Bereitstellungsregeln finden Sie [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). -#### Dedicated query nodes +#### Dedizierte Abfrageknoten -Nodes can be configured to explicitly be query nodes by including the following in the configuration file: +Knoten können explizit als Abfrageknoten konfiguriert werden, indem Sie Folgendes in die Konfigurationsdatei aufnehmen: ```toml [general] query = "" ``` -Any node whose --node-id matches the regular expression will be set up to only respond to queries. +Jeder Knoten, dessen --node-id mit dem regulären Ausdruck übereinstimmt, wird so eingerichtet, dass er nur auf Abfragen antwortet. -#### Database scaling via sharding +#### Datenbankskalierung durch Sharding -For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. +Für die meisten Anwendungsfälle reicht eine einzelne Postgres-Datenbank aus, um eine Graph-Node-Instanz zu unterstützen. Wenn eine Graph-Node-Instanz aus einer einzelnen Postgres-Datenbank herauswächst, ist es möglich, die Speicherung der Daten des Graph-Nodes auf mehrere Postgres-Datenbanken aufzuteilen. Alle Datenbanken zusammen bilden den Speicher der Graph-Node-Instanz. Jede einzelne Datenbank wird als Shard bezeichnet. Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. -Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. +Sharding wird nützlich, wenn Ihre vorhandene Datenbank nicht mit der Last Schritt halten kann, die Graph Node ihr auferlegt, und wenn es nicht mehr möglich ist, die Datenbankgröße zu erhöhen. > It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. 
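As an illustration of that setup, here is a minimal sketch of a two-shard store with a deployment rule that routes high-volume subgraphs to their own shard. The shard names, connection strings, name pattern, and indexer node IDs are placeholders, loosely following the `config.toml` format described above:

```toml
[store.primary]
connection = "postgresql://graph:<password>@primary-db:5432/graph"
pool_size = 10

[store.vip]
connection = "postgresql://graph:<password>@vip-db:5432/graph"
pool_size = 10

[deployment]
# Subgraphs whose name matches the pattern are stored in the dedicated shard
[[deployment.rule]]
match = { name = "(vip|high-volume)/.*" }
shard = "vip"
indexers = [ "index_node_vip_0" ]

# Catch-all rule: everything else stays in the primary shard
[[deployment.rule]]
shard = "primary"
indexers = [ "index_node_0" ]
```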
-In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. +Was das Konfigurieren von Verbindungen betrifft, beginnen Sie mit max_connections in postgresql.conf, das auf 400 (oder vielleicht sogar 200) eingestellt ist, und sehen Sie sich die Prometheus-Metriken store_connection_wait_time_ms und store_connection_checkout_count an. Spürbare Wartezeiten (alles über 5 ms) sind ein Hinweis darauf, dass zu wenige Verbindungen verfügbar sind; hohe Wartezeiten werden auch dadurch verursacht, dass die Datenbank sehr ausgelastet ist (z. B. hohe CPU-Last). Wenn die Datenbank jedoch ansonsten stabil erscheint, weisen hohe Wartezeiten darauf hin, dass die Anzahl der Verbindungen erhöht werden muss. In der Konfiguration ist die Anzahl der Verbindungen, die jede Graph-Knoten-Instanz verwenden kann, eine Obergrenze, und der Graph-Knoten hält Verbindungen nicht offen, wenn er sie nicht benötigt. -Read more about store configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases). +Weitere Informationen zur Speicherkonfiguration finden Sie [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases). -#### Dedicated block ingestion +#### Dedizierte Blockaufnahme -If there are multiple nodes configured, it will be necessary to specify one node which is responsible for ingestion of new blocks, so that all configured index nodes aren't polling the chain head. This is done as part of the `chains` namespace, specifying the `node_id` to be used for block ingestion: +Wenn mehrere Knoten konfiguriert sind, muss ein Knoten angegeben werden, der für die Aufnahme neuer Blöcke verantwortlich ist, damit nicht alle konfigurierten Indexknoten den Kettenkopf abfragen. Dies geschieht als Teil des `chains`-Namespace, der die `node_id` angibt, die für die Blockaufnahme verwendet werden soll: ```toml [chains] ingestor = "block_ingestor_node" ``` -#### Supporting multiple networks +#### Unterstützung mehrerer Netzwerke -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +Das Graph-Protokoll erhöht die Anzahl der Netzwerke, die für die Indizierung von Belohnungen unterstützt werden, und es gibt viele Subgraphen, die nicht unterstützte Netzwerke indizieren, die ein Indexer verarbeiten möchte. Die Datei `config.toml` ermöglicht eine ausdrucksstarke und flexible Konfiguration von: -- Multiple networks -- Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). 
-- Additional provider details, such as features, authentication and the type of provider (for experimental Firehose support) +- Mehrere Netzwerke +- Mehrere Anbieter pro Netzwerk (dies kann eine Aufteilung der Last auf Anbieter ermöglichen und kann auch die Konfiguration von vollständigen Knoten sowie Archivknoten ermöglichen, wobei Graph Node günstigere Anbieter bevorzugt, wenn eine bestimmte Arbeitslast dies zulässt). +- Zusätzliche Anbieterdetails, wie Funktionen, Authentifizierung und Anbietertyp (für experimentelle Firehose-Unterstützung) The `[chains]` section controls the ethereum providers that graph-node connects to, and where blocks and other metadata for each chain are stored. The following example configures two chains, mainnet and kovan, where blocks for mainnet are stored in the vip shard and blocks for kovan are stored in the primary shard. The mainnet chain can use two different providers, whereas kovan only has one provider. @@ -210,17 +210,17 @@ shard = "primary" provider = [ { label = "kovan", url = "http://..", features = [] } ] ``` -Read more about provider configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers). +Weitere Informationen zur Anbieterkonfiguration finden Sie [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers). -### Environment variables +### Umgebungsvariablen -Graph Node supports a range of environment variables which can enable features, or change Graph Node behaviour. These are documented [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md). +Graph Node unterstützt eine Reihe von Umgebungsvariablen, die Funktionen aktivieren oder das Verhalten von Graph Node ändern können. Diese sind [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md) dokumentiert. -### Continuous deployment +### Kontinuierlicher Einsatz -Users who are operating a scaled indexing setup with advanced configuration may benefit from managing their Graph Nodes with Kubernetes. +Benutzer, die ein skaliertes Indizierungs-Setup mit erweiterter Konfiguration betreiben, können von der Verwaltung ihrer Graph-Knoten mit Kubernetes profitieren. -- The indexer repository has an [example Kubernetes reference](https://github.com/graphprotocol/indexer/tree/main/k8s) +- Das Indexer-Repository enthält eine [Beispielreferenz für Kubernetes](https://github.com/graphprotocol/indexer/tree/main/k8s) - [Launchpad](https://docs.graphops.xyz/launchpad/intro) is a toolkit for running a Graph Protocol Indexer on Kubernetes maintained by GraphOps. It provides a set of Helm charts and a CLI to manage a Graph Node deployment. ### Managing Graph Node @@ -231,25 +231,25 @@ Given a running Graph Node (or Graph Nodes!), the challenge is then to manage de Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. -In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). +Außerdem bietet das Festlegen von `GRAPH_LOG_QUERY_TIMING` auf `gql` weitere Details darüber, wie GraphQL-Abfragen ausgeführt werden (obwohl dies eine große Menge an Protokollen generieren wird). 
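For example, a minimal sketch of enabling these settings when starting Graph Node; only `GRAPH_LOG` and `GRAPH_LOG_QUERY_TIMING` are taken from the text above, while the config path and flags are illustrative placeholders:

```sh
# Verbose logging plus per-query GraphQL timing details
# (expect a large volume of log output with these settings)
export GRAPH_LOG=debug
export GRAPH_LOG_QUERY_TIMING=gql

graph-node --config config.toml --ipfs 127.0.0.1:5001
```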
-#### Monitoring & alerting +#### Überwachung & Warnungen -Graph Node provides the metrics via Prometheus endpoint on 8040 port by default. Grafana can then be used to visualise these metrics. +Graph Node stellt die Metriken standardmäßig durch den Prometheus-Endpunkt am Port 8040 bereit. Grafana kann dann zur Visualisierung dieser Metriken verwendet werden. -The indexer repository provides an [example Grafana configuration](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml). +Das Indexer-Repository bietet eine [Beispielkonfiguration für Grafana](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml). #### Graphman -`graphman` is a maintenance tool for Graph Node, helping with diagnosis and resolution of different day-to-day and exceptional tasks. +`graphman` ist ein Wartungstool für Graph Node, das bei der Diagnose und Lösung verschiedener alltäglicher und außergewöhnlicher Aufgaben hilft. The graphman command is included in the official containers, and you can docker exec into your graph-node container to run it. It requires a `config.toml` file. -Full documentation of `graphman` commands is available in the Graph Node repository. See \[/docs/graphman.md\] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` +Eine vollständige Dokumentation der `graphman`-Befehle ist im Graph Node-Repository verfügbar. Siehe \[/docs/graphman.md\] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) im Graph Node `/docs` ### Working with subgraphs -#### Indexing status API +#### Indizierungsstatus-API Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. diff --git a/website/pages/de/querying/graphql-api.mdx b/website/pages/de/querying/graphql-api.mdx index c1831e3117e6..cd06c27f4d94 100644 --- a/website/pages/de/querying/graphql-api.mdx +++ b/website/pages/de/querying/graphql-api.mdx @@ -1,16 +1,24 @@ --- -title: GraphQL API +title: GraphQL-API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
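For instance, given a hypothetical `Token` entity in the schema (a sketch, not necessarily the schema used in the examples below), Graph Node generates both a singular and a plural query field:

```graphql
type Token @entity {
  id: ID!
  owner: Bytes!
}

# Generated top-level fields (conceptually):
#   token(id: ID!): Token      – fetch a single entity by id
#   tokens(...): [Token!]!     – fetch a filtered, sorted, paginated list
```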
### Beispiele -Query for a single `Token` entity defined in your schema: +Die Abfrage für eine einzelne `Token`-Entität, die in Ihrem Schema definiert ist: ```graphql { @@ -21,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be written as a string. -Query all `Token` entities: +Die Abfrage für alle `Token`-Entitäten: ```graphql { @@ -34,11 +42,14 @@ Query all `Token` entities: } ``` -### Sorting +### Sortierung -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: -#### Example +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. + +#### Beispiel ```graphql { @@ -49,11 +60,11 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe } ``` -#### Example for nested entity sorting +#### Beispiel für die Sortierung verschachtelter Entitäten -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +Ab Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) können Entitäten auf der Basis von verschachtelten Entitäten sortiert werden. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -66,19 +77,20 @@ In the following example, we sort the tokens by the name of their owner: } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> Derzeit können Sie in den Feldern `@entity` und `@derivedFrom` nach einstufig tiefen `String`- oder `ID`-Typen sortieren. Leider werden das [Sortieren nach Schnittstellen auf Entitäten mit einer Tiefe von einer Ebene](https://github.com/graphprotocol/graph-node/pull/4058), das Sortieren nach Feldern, die Arrays sind, sowie das Sortieren nach verschachtelten Entitäten noch nicht unterstützt. ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate.
For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using very large `skip` values in queries, since they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute, as shown in the `first` and `id_ge` example below. -#### Example using `first` +#### Ein Beispiel für die Verwendung von `first` -Query the first 10 tokens: +Die Abfrage für die ersten 10 Token: ```graphql { @@ -89,7 +101,7 @@ Query the first 10 tokens: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +Um Gruppen von Entitäten in der Mitte einer Sammlung abzufragen, kann der Parameter `skip` in Verbindung mit dem Parameter `first` verwendet werden, um eine bestimmte Anzahl von Entitäten beginnend am Anfang der Sammlung zu überspringen. #### Example using `first` and `skip` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you want to fetch only entities that have changed, for example, since the last time you polled. Alternatively, it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
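As a sketch of such a filter (the entity and fields are illustrative), the following would return only entities touched at or after the given block:

```graphql
{
  tokens(where: { _change_block: { number_gte: 14711547 } }) {
    id
    owner
  }
}
```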
@@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release ##### `AND`-Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ##### `OR` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
#### Example @@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 ## Schema -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, is defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/de/querying/managing-api-keys.mdx b/website/pages/de/querying/managing-api-keys.mdx index 155710dd6849..f7aff33ea926 100644 --- a/website/pages/de/querying/managing-api-keys.mdx +++ b/website/pages/de/querying/managing-api-keys.mdx @@ -2,23 +2,33 @@ title: Managing your API keys --- -Regardless of whether you’re a dapp developer or a subgraph developer, you’ll need to manage your API keys. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application. +## Overview -The "API keys" table lists out existing API keys, which will give you the ability to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, as well as total query numbers. You can click the "three dots" menu to edit a given API key: +API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. + +### Create and Manage API Keys + +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. + +The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. + +You can click the "three dots" menu to the right of a given API key to: - Rename API key - Regenerate API key - Delete API key - Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +### API Key Details + You can click on an individual API key to view the Details page: -1.
The **Overview** section will allow you to: +1. Under the **Overview** section, you can: - Edit your key name - Regenerate API keys - View the current usage of the API key with stats: - Number of queries - Amount of GRT spent -2. Under **Security**, you’ll be able to opt into security settings depending on the level of control you’d like to have over your API keys. In this section, you can: +2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - Assign subgraphs that can be queried with your API key diff --git a/website/pages/de/querying/querying-best-practices.mdx b/website/pages/de/querying/querying-best-practices.mdx index 32d1415b20fa..6e085cfe7bf1 100644 --- a/website/pages/de/querying/querying-best-practices.mdx +++ b/website/pages/de/querying/querying-best-practices.mdx @@ -2,17 +2,15 @@ title: Querying Best Practices --- -The Graph provides a decentralized way to query data from blockchains. +The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -The Graph network's data is exposed through a GraphQL API, making it easier to query data with the GraphQL language. - -This page will guide you through the essential GraphQL language rules and GraphQL queries best practices. +Learn the essential GraphQL language rules and best practices to optimize your subgraph. --- ## Querying a GraphQL API -### The anatomy of a GraphQL query +### The Anatomy of a GraphQL Query Unlike REST API, a GraphQL API is built upon a Schema that defines which queries can be performed. @@ -52,7 +50,7 @@ query [operationName]([variableName]: [variableType]) { } ``` -While the list of syntactic do's and don'ts is long, here are the essential rules to keep in mind when it comes to writing GraphQL queries: +## Rules for Writing GraphQL Queries - Each `queryName` must only be used once per operation. - Each `field` must be used only once in a selection (we cannot query `id` twice under `token`) @@ -61,9 +59,9 @@ While the list of syntactic do's and don'ts is long, here are the essential rule - In a given list of variables, each of them must be unique. - All defined variables must be used. -Failing to follow the above rules will end with an error from the Graph API. +> Note: Failing to follow these rules will result in an error from The Graph API. -For a complete list of rules with code examples, please look at our [GraphQL Validations guide](/release-notes/graphql-validations-migration-guide/). +For a complete list of rules with code examples, check out [GraphQL Validations guide](/release-notes/graphql-validations-migration-guide/). ### Sending a query to a GraphQL API @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). 
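As a minimal sketch of that approach (the endpoint and query below are placeholders), a subgraph can be queried with nothing but `fetch`:

```javascript
// Placeholder endpoint – use the query URL of your own subgraph
const endpoint = 'https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_NAME>/<VERSION>'

async function fetchTokens() {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ tokens(first: 5) { id } }' }),
  })
  const { data, errors } = await response.json()
  if (errors) throw new Error(JSON.stringify(errors))
  return data.tokens
}

fetchTokens().then(console.log).catch(console.error)
```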
-However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as mentioned in ["Querying from an Application"](/querying/querying-from-an-application), it's recommended to use `graph-client`, which supports the following unique features: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Variables can be cached** at server-level - **Queries can be statically analyzed by tools** (more on this in the following sections) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- when using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. 
A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### GraphQL Fragment do's and don'ts -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- when fields of the same type are repeated in a query, group them in a Fragment -- when similar but not the same fields are repeated, create multiple fragments, ex: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## The essential tools +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets -- go to definition for fragments and input types +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. @@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. 
+For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/de/querying/querying-from-an-application.mdx b/website/pages/de/querying/querying-from-an-application.mdx index 84c489360087..510accf3c883 100644 --- a/website/pages/de/querying/querying-from-an-application.mdx +++ b/website/pages/de/querying/querying-from-an-application.mdx @@ -2,42 +2,46 @@ title: Querying from an Application --- -Once a subgraph is deployed to Subgraph Studio or to Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this: +Learn how to query The Graph from your application. -**Subgraph Studio (testing endpoint)** +## Getting GraphQL Endpoint -```sh -Queries (HTTP) +Once a subgraph is deployed to [Subgraph Studio](https://thegraph.com/studio/) or [Graph Explorer](https://thegraph.com/explorer), you will be given the endpoint for your GraphQL API that should look something like this: + +### Subgraph Studio + +``` https://api.studio.thegraph.com/query/// ``` -**Graph Explorer** +### Graph Explorer -```sh -Queries (HTTP) +``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -Using the GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with the data indexed by the subgraph. - -Here are a couple of the more popular GraphQL clients in the ecosystem and how to use them: +With your GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with data indexed by the subgraph. -## GraphQL clients +## Using Popular GraphQL Clients -### Graph client +### Graph Client -The Graph is providing it own GraphQL client, `graph-client` that supports unique features such as: +The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result -Also integrated with popular GraphQL clients such as Apollo and URQL and compatible with all environments (React, Angular, Node.js, React Native), using `graph-client` will give you the best experience for interacting with The Graph. +> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native. As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph. + +### Fetch Data with Graph Client + +Let's look at how to fetch data from a subgraph with `graph-client`: -Let's look at how to fetch data from a subgraph with `graphql-client`. 
+#### Schritt 1 -To get started, make sure to install The Graph Client CLI in your project: +Install The Graph Client CLI in your project: ```sh yarn add -D @graphprotocol/client-cli @@ -45,6 +49,8 @@ yarn add -D @graphprotocol/client-cli npm install --save-dev @graphprotocol/client-cli ``` +#### Schritt 2 + Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file): ```graphql @@ -72,7 +78,9 @@ query ExampleQuery { } ``` -Then, create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +#### Schritt 3 + +Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: ```yaml # .graphclientrc.yml @@ -90,13 +98,17 @@ documents: - ./src/example-query.graphql ``` -Running the following The Graph Client CLI command will generate typed and ready to use JavaScript code: +#### Step 4 + +Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: ```sh graphclient build ``` -Finally, update your `.ts` file to use the generated typed GraphQL documents: +#### Step 5 + +Update your `.ts` file to use the generated typed GraphQL documents: ```tsx import React, { useEffect } from 'react' @@ -134,33 +146,35 @@ function App() { export default App ``` -**⚠️ Important notice** +> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. -`graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you will [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). +### Apollo Client -However, if you choose to go with another client, keep in mind that **you won't be able to get to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android. -### Apollo client +Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: -[Apollo client](https://www.apollographql.com/docs/) is the ubiquitous GraphQL client on the front-end ecosystem. +- Advanced error handling +- Pagination +- Data prefetching +- Optimistic UI +- Local state management -Available for React, Angular, Vue, Ember, iOS, and Android, Apollo Client, although the heaviest client, brings many features to build advanced UI on top of GraphQL: +### Fetch Data with Apollo Client -- advanced error handling -- pagination -- data prefetching -- optimistic UI -- local state management +Let's look at how to fetch data from a subgraph with Apollo client: -Let's look at how to fetch data from a subgraph with Apollo client in a web project. 
+#### Schritt 1 -First, install `@apollo/client` and `graphql`: +Install `@apollo/client` and `graphql`: ```sh npm install @apollo/client graphql ``` -Then you can query the API with the following code: +#### Schritt 2 + +Query the API with the following code: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -193,6 +207,8 @@ client }) ``` +#### Schritt 3 + To use variables, you can pass in a `variables` argument to the query: ```javascript @@ -224,24 +240,30 @@ client }) ``` -### URQL +### URQL Overview -Another option is [URQL](https://formidable.com/open-source/urql/) which is available within Node.js, React/Preact, Vue, and Svelte environments, with more advanced features: +[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: - Flexible cache system - Extensible design (easing adding new capabilities on top of it) - Lightweight bundle (~5x lighter than Apollo Client) - Support for file uploads and offline mode -Let's look at how to fetch data from a subgraph with URQL in a web project. +### Fetch data with URQL + +Let's look at how to fetch data from a subgraph with URQL: -First, install `urql` and `graphql`: +#### Schritt 1 + +Install `urql` and `graphql`: ```sh npm install urql graphql ``` -Then you can query the API with the following code: +#### Schritt 2 + +Query the API with the following code: ```javascript import { createClient } from 'urql' diff --git a/website/pages/de/querying/querying-the-graph.mdx b/website/pages/de/querying/querying-the-graph.mdx index a573683573c5..1255e0e88a51 100644 --- a/website/pages/de/querying/querying-the-graph.mdx +++ b/website/pages/de/querying/querying-the-graph.mdx @@ -2,7 +2,7 @@ title: Querying The Graph --- -When a subgraph is published to The Graph Network, you can visit its subgraph details page on [Graph Explorer](https://thegraph.com/explorer) and use the "Playground" tab to explore the deployed GraphQL API for the subgraph, issuing queries and viewing the schema. +When a subgraph is published to The Graph Network, you can visit its subgraph details page on [Graph Explorer](https://thegraph.com/explorer) and use the "query" tab to explore the deployed GraphQL API for the subgraph, issuing queries and viewing the schema. > Please see the [Query API](/querying/graphql-api) for a complete reference on how to query the subgraph's entities. You can learn about GraphQL querying best practices [here](/querying/querying-best-practices) @@ -10,7 +10,9 @@ When a subgraph is published to The Graph Network, you can visit its subgraph de Each subgraph published to The Graph Network has a unique query URL in Graph Explorer for making direct queries that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. -![Query Subgraph Pane](/img/query-subgraph-pane.png) +![Query Subgraph Button](/img/query-button-screenshot.png) + +![Query Subgraph URL](/img/query-url-screenshot.png) Learn more about querying from an application [here](/querying/querying-from-an-application). diff --git a/website/pages/de/quick-start.mdx b/website/pages/de/quick-start.mdx index 1a3c915185de..10201769e249 100644 --- a/website/pages/de/quick-start.mdx +++ b/website/pages/de/quick-start.mdx @@ -2,166 +2,183 @@ title: Schnellstart --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. 
+Learn how to easily build, publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Stellen Sie sicher, dass Ihr Subgraph Daten aus einem [unterstützten Netzwerk] (/developing/supported-networks) indiziert. - -Bei der Erstellung dieses Leitfadens wird davon ausgegangen, dass Sie über die entsprechenden Kenntnisse verfügen: +## Prerequisites - Eine Krypto-Wallet -- Eine Smart-Contract-Adresse im Netzwerk Ihrer Wahl nach +- A smart contract address on a [supported network](/developing/supported-networks/) +- [Node.js](https://nodejs.org/) installed +- A package manager of your choice (`npm`, `yarn` or `pnpm`) + +## How to Build a Subgraph -## 1. Erstellen Sie einen Untergraphen in Subgraph Studio +### 1. Create a subgraph in Subgraph Studio -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +Gehen Sie zu [Subgraph Studio] (https://thegraph.com/studio/) und verbinden Sie Ihre Wallet. -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +Mit Subgraph Studio können Sie Subgraphen erstellen, verwalten, bereitstellen und veröffentlichen sowie API-Schlüssel erstellen und verwalten. -## 2. Installieren der Graph-CLI +Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +### 2. Installieren der Graph-CLI Führen Sie einen der folgenden Befehle auf Ihrem lokalen Computer aus: -Using [npm](https://www.npmjs.com/): +Verwendung von [npm](https://www.npmjs.com/): ```sh npm install -g @graphprotocol/graph-cli@latest ``` -Using [yarn](https://yarnpkg.com/): +Verwendung von [yarn] (https://yarnpkg.com/): ```sh yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 3. Initialize your subgraph + +> Die Befehle für Ihren spezifischen Subgraphen finden Sie auf der Subgraphen-Seite in [Subgraph Studio](https://thegraph.com/studio/). -Initialize your subgraph from an existing contract by running the initialize command: +The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. + +Mit dem folgenden Befehl wird Ihr Subgraph aus einem bestehenden Vertrag initialisiert: ```sh -graph init --studio +graph init ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. -Wenn Sie Ihren Untergraphen initialisieren, fragt das CLI-Tool Sie nach den folgenden Informationen: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protokoll: Wählen Sie das Protokoll aus, von dem Ihr Untergraph ( Subgraph ) Daten indizieren soll. -- Subgraph slug: Erstellen Sie einen Namen für Ihren Subgraphen. Ihr Subgraph-Slug ist ein Identifikationsmerkmal für Ihren Subgraphen. -- Verzeichnis zur Erstellung des Subgraphen: Wählen Sie Ihr lokales Verzeichnis -- Ethereum-Netzwerk (optional): Sie müssen ggf. angeben, von welchem EVM-kompatiblen Netzwerk Ihr Subgraph Daten indizieren soll. 
-- Vertragsadresse: Suchen Sie die Smart-Contract-Adresse, von der Sie Daten abfragen möchten -- ABI: Wenn die ABI nicht automatisch ausgefüllt wird, müssen Sie sie manuell in Form einer JSON-Datei eingeben. -- Startblock: Es wird empfohlen, den Startblock einzugeben, um Zeit zu sparen, während Ihr Subgraph die Blockchain-Daten indiziert. Sie können den Startblock finden, indem Sie den Block suchen, in dem Ihr Vertrag bereitgestellt wurde. -- Vertragsname: Geben Sie den Namen Ihres Vertrags ein -- Index contract events as entities (Vertragsereignisse als Entitäten): Es wird empfohlen, dies auf true (wahr) zu setzen, da es automatisch Zuordnungen zu Ihrem Subgraph für jedes emittierte Ereignis hinzufügt -- Einen weiteren Vertrag hinzufügen (optional): Sie können einen weiteren Vertrag hinzufügen +- **Protocol**: Choose the protocol your subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- **Directory**: Choose a directory to create your subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Contract address**: Locate the smart contract address you’d like to query data from. +- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Contract Name**: Input the name of your contract. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Add another contract** (optional): You can add another contract. Der folgende Screenshot zeigt ein Beispiel dafür, was Sie bei der Initialisierung Ihres Untergraphen ( Subgraph ) erwarten können: -![Subgraph command](/img/subgraph-init-example.png) +![Subgraph command](/img/CLI-Example.png) + +### 4. Edit your subgraph -## 4. Write your subgraph +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -Die vorangegangenen Befehle erstellen einen gerüstartigen Subgraphen, den Sie als Ausgangspunkt für den Aufbau Ihres Subgraphen verwenden können. Wenn Sie Änderungen an dem Subgraphen vornehmen, werden Sie hauptsächlich mit +When making changes to the subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +- Manifest (`subgraph.yaml`) - definiert, welche Datenquellen Ihr Subgraph indizieren wird. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (mapping.ts) - Dies ist der Code, der die Daten aus Ihren Datenquellen in die im Schema definierten Entitäten übersetzt. -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -## 5. Deploy to Subgraph Studio +### 5. 
Deploy your subgraph + +Denken Sie daran, dass die Bereitstellung nicht dasselbe ist wie die Veröffentlichung. + +When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. + +When you publish a subgraph, you are publishing it onchain to the decentralized network. Sobald Ihr Subgraph geschrieben ist, führen Sie die folgenden Befehle aus: +```` ```sh -$ graph codegen -$ graph build +graph codegen && graph build ``` +```` + +Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. + +![Deploy key](/img/subgraph-studio-deploy-key.jpg) + +```` +```sh + +graph auth + +graph deploy +``` +```` + +The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. + +### 6. Review your subgraph -- Authentifizieren Sie Ihren Subgraphen und stellen Sie ihn bereit. Den Bereitstellungsschlüssel finden Sie auf der Seite "Subgraph" in Subgraph Studio. +If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +- Führen Sie eine Testabfrage durch. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: + + ![Subgraph logs](/img/subgraph-logs-image.png) + +### 7. Publish your subgraph to The Graph Network + +Publishing a subgraph to the decentralized network is an onchain action that makes your subgraph available for [Curators](/network/curating/) to curate it and [Indexers](/network/indexing/) to index it. + +#### Veröffentlichung mit Subgraph Studio + +To publish your subgraph, click the Publish button in the dashboard. + +![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) + +Select the network to which you would like to publish your subgraph. + +#### Veröffentlichen über die CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +Öffnen Sie den `graph-cli`. + +Verwenden Sie die folgenden Befehle: + +```` ```sh -$ graph auth --studio -$ graph deploy --studio +graph codegen && graph build ``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Testen Sie Ihren Untergraphen ( Subgraphen ) - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -In den Protokollen können Sie sehen, ob es Fehler in Ihrem Subgraphen gibt. Die Protokolle eines funktionierenden Subgraphen sehen wie folgt aus: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). 
The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +Then, + +```sh +graph publish ``` +```` + +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +Wie Sie Ihre Bereitstellung anpassen können, erfahren Sie unter [Veröffentlichen eines Subgraphen](/publishing/publishing-a-subgraph/). -## 7. Publish your subgraph to The Graph’s Decentralized Network +#### Adding signal to your subgraph -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +2. Indexer erhalten GRT Rewards auf der Grundlage des signalisierten Betrags, wenn sie für Indexing Rewards in Frage kommen. -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + - Es wird empfohlen, mindestens 3.000 GRT zu kuratieren, um 3 Indexer anzuziehen. Prüfen Sie die Berechtigung zum Reward anhand der Nutzung der Subgraph-Funktionen und der unterstützten Netzwerke. -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +Um mehr über Kuratierung zu erfahren, lesen Sie [Kuratieren](/network/curating/). -Um Gaskosten zu sparen, können Sie Ihren Subgraphen in der gleichen Transaktion kuratieren, in der Sie ihn veröffentlicht haben, indem Sie diese Schaltfläche auswählen, wenn Sie Ihren Subgraphen im dezentralen Netzwerk von The Graph veröffentlichen: +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: -![Subgraph publish](/img/publish-and-signal-tx.png) +![Subgraph veröffentlichen](/img/studio-publish-modal.png) -## 8. Query your subgraph +### 8. Query your subgraph -Jetzt können Sie Ihren Subgraphen abfragen, indem Sie GraphQL-Abfragen an die Abfrage-URL Ihres Subgraphen senden, die Sie durch Klicken auf die Abfrage-Schaltfläche finden können. +You now have access to 100,000 free queries per month with your subgraph on The Graph Network! -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. 
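For example, a request to your subgraph's Query URL is just an HTTP POST carrying a GraphQL document. The sketch below is illustrative only: the endpoint, the API key, and the `tokens` entity are placeholders that depend on your own subgraph and schema, and it assumes a runtime with a built-in `fetch` (such as Node.js 18+).

```typescript
// Minimal sketch: POST a GraphQL query to a subgraph's Query URL.
// The URL, API key, and entity/field names below are placeholders.
const QUERY_URL =
  "https://gateway.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/<SUBGRAPH_ID>"

const query = `
  {
    tokens(first: 5) {
      id
      owner
    }
  }
`

async function main(): Promise<void> {
  const response = await fetch(QUERY_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  })

  const { data, errors } = await response.json()
  if (errors) {
    console.error("Query returned errors:", errors)
    return
  }
  console.log(data)
}

main().catch(console.error)
```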
+You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +Weitere Informationen zur Abfrage von Daten aus Ihrem Subgraphen finden Sie unter [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/de/sps/introduction.mdx b/website/pages/de/sps/introduction.mdx index 3e50521589af..12e3f81c6d53 100644 --- a/website/pages/de/sps/introduction.mdx +++ b/website/pages/de/sps/introduction.mdx @@ -14,6 +14,6 @@ It is really a matter of where you put your logic, in the subgraph or the Substr Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: -- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/solana) -- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/evm) -- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/injective) +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/de/sps/triggers-example.mdx b/website/pages/de/sps/triggers-example.mdx index d8d61566295e..bb278b7f0eee 100644 --- a/website/pages/de/sps/triggers-example.mdx +++ b/website/pages/de/sps/triggers-example.mdx @@ -11,6 +11,8 @@ Before starting, make sure to: ## Step 1: Initialize Your Project + + 1. Open your Dev Container and run the following command to initialize your project: ```bash @@ -18,6 +20,7 @@ Before starting, make sure to: ``` 2. Select the "minimal" project option. + 3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: ```yaml @@ -87,17 +90,7 @@ type MyTransfer @entity { This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. -## Step 4: Generate Protobuf Files - -To generate Protobuf objects in AssemblyScript, run the following command: - -```bash -npm run protogen -``` - -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. - -## Step 5: Handle Substreams Data in `mappings.ts` +## Step 4: Handle Substreams Data in `mappings.ts` With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:

@@ -120,7 +113,7 @@ export function handleTriggers(bytes: Uint8Array): void {
         entity.designation = event.transfer!.accounts!.destination

         if (event.transfer!.accounts!.signer!.single != null) {
-          entity.signers = [event.transfer!.accounts!.signer!.single.signer]
+          entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
         } else if (event.transfer!.accounts!.signer!.multisig != null) {
           entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
         }
@@ -130,6 +123,16 @@ export function handleTriggers(bytes: Uint8Array): void {
   }
 }
 ```

+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
 ## Conclusion

 You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
diff --git a/website/pages/de/subgraphs.mdx b/website/pages/de/subgraphs.mdx
index 27b452211477..58cf1e9f6498 100644
--- a/website/pages/de/subgraphs.mdx
+++ b/website/pages/de/subgraphs.mdx
@@ -1,41 +1,86 @@
 ---
-title: Subgraphs
+title: Subgraphen
 ---

-## What is a Subgraph?
+## Was ist ein Subgraph?

-A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
+Ein Subgraph ist eine benutzerdefinierte, offene API, die Daten aus einer Blockchain extrahiert, verarbeitet und so speichert, dass sie einfach über GraphQL abgefragt werden können.

-### Subgraph Capabilities
+### Subgraph-Fähigkeiten

-- **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3.
-- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/).
-- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
+- **Datenzugriff:** Subgraphs ermöglichen die Abfrage und Indizierung von Blockchain-Daten für web3.
+- **Build:** Entwickler können Subgraphs für The Graph Network erstellen, bereitstellen und veröffentlichen. Um loszulegen, schauen Sie sich den Subgraph Entwickler [Quick Start](quick-start/) an.
+- **Index & Abfrage:** Sobald ein Subgraph indiziert ist, kann jeder ihn abfragen. Alle im Netzwerk veröffentlichten Subgraphen können im [Graph Explorer](https://thegraph.com/explorer) untersucht und abgefragt werden.

-## Inside a Subgraph
+## Innerhalb eines Subgraph

-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+Das Subgraph-Manifest, `subgraph.yaml`, definiert die Smart Contracts und das Netzwerk, die Ihr Subgraph indizieren wird, die Ereignisse aus diesen Verträgen, auf die geachtet werden soll, und wie die Ereignisdaten auf Entitäten abgebildet werden, die Graph Node speichert und abfragen kann.
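To make that event-to-entity mapping concrete, here is a minimal sketch of the kind of AssemblyScript handler the manifest points to. The `Transfer` event, the `Token` entity, and the import paths are hypothetical; in a real subgraph they are generated by `graph codegen` from your contract ABI and your `schema.graphql`.

```typescript
// Hypothetical mapping handler (mapping.ts) - names are placeholders.
import { Transfer } from "../generated/MyContract/MyContract"
import { Token } from "../generated/schema"

export function handleTransfer(event: Transfer): void {
  // Load the entity if it already exists, otherwise create it.
  let token = Token.load(event.params.tokenId.toString())
  if (token == null) {
    token = new Token(event.params.tokenId.toString())
  }

  // Copy event data onto the fields defined in schema.graphql.
  token.owner = event.params.to
  token.updatedAtBlock = event.block.number

  // Persist the entity so Graph Node can serve it via GraphQL.
  token.save()
}
```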
-The **subgraph definition** consists of the following files:
+Die **Subgraph-Definition** besteht aus den folgenden Dateien:

-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Enthält das Manifest des Subgraphen

-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: Ein GraphQL-Schema, das die für Ihren Subgraph gespeicherten Daten definiert und festlegt, wie sie über GraphQL abgefragt werden können

-- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema
+- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) Code, der die Ereignisdaten in die in Ihrem Schema definierten Entitäten übersetzt

-To learn more about each of subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+Um mehr über die einzelnen Komponenten eines Subgraphs zu erfahren, lesen Sie bitte [Erstellen eines Subgraphs](/developing/creating-a-subgraph/).

-## Subgraph Development
+## Subgraph Lebenszyklus

-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+Hier ist ein allgemeiner Überblick über den Lebenszyklus eines Subgraphs:

-## Subgraph Lifecycle
+![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

-Here is a general overview of a subgraph’s lifecycle:
+## Subgraph Entwicklung

-![Subgraph Lifecycle](/img/subgraph-lifecycle.png)
+1. [Einen Subgraph erstellen](/developing/creating-a-subgraph/)
+2. [Einen Subgraph bereitstellen](/deploying/deploying-a-subgraph-to-studio/)
+3. [Testen eines Subgraphen](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
+4. [Einen Subgraph veröffentlichen](/publishing/publishing-a-subgraph/)
+5. [Signal auf einem Subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+
+### Build locally
+
+Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs.
+
+### Deploy to Subgraph Studio
+
+Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
+
+- Use its staging environment to index the deployed subgraph and make it available for review.
+- Verify that your subgraph doesn't have any indexing errors and works as expected.
+
+### Publish to the Network
+
+When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network.
+
+- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers.
+- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-a-subgraph/) by sending the NFT.
+- Published subgraphs have associated metadata, which provides other network participants with useful context and information. + +### Add Curation Signal for Indexing + +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. + +#### What is signal? + +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. + +### Querying & Application Development + +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). + +Learn more about [querying subgraphs](/querying/querying-the-graph/). + +### Updating Subgraphs + +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. + +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. + +### Deleting & Transferring Subgraphs + +If you no longer need a published subgraph, you can [delete](/managing/delete-a-subgraph/) or [transfer](/managing/transfer-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/de/substreams.mdx b/website/pages/de/substreams.mdx index 710e110012cc..6385439f89f0 100644 --- a/website/pages/de/substreams.mdx +++ b/website/pages/de/substreams.mdx @@ -4,25 +4,27 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps 1. 
**You write a Rust program, which defines the transformations that you want to apply to the blockchain data.** For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash). -```rust -fn get_my_block(blk: Block) -> Result { - let header = blk.header.as_ref().unwrap(); + ```rust + fn get_my_block(blk: Block) -> Result { + let header = blk.header.as_ref().unwrap(); - Ok(MyBlock { - number: blk.number, - hash: Hex::encode(&blk.hash), - parent_hash: Hex::encode(&header.parent_hash), - }) -} -``` + Ok(MyBlock { + number: blk.number, + hash: Hex::encode(&blk.hash), + parent_hash: Hex::encode(&header.parent_hash), + }) + } + ``` 2. **You wrap up your Rust program into a WASM module just by running a single CLI command.** @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/de/sunrise.mdx b/website/pages/de/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/de/sunrise.mdx +++ b/website/pages/de/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. 
PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). 
- -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. 
- -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? - -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. 
As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. 
-The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? - -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). 
- -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). 
+The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/de/tap.mdx b/website/pages/de/tap.mdx index 0a41faab9c11..c259f0dbbf32 100644 --- a/website/pages/de/tap.mdx +++ b/website/pages/de/tap.mdx @@ -13,7 +13,7 @@ Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, T - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +## Besonderheiten TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. 
@@ -45,15 +45,15 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed ### Contracts -| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| Contract | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | | ------------------- | -------------------------------------------- | -------------------------------------------- | -| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | -| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | -| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | +| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | +| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | ### Gateway -| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| Component | Edge and Node Mainnet (Aribtrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | | ---------- | --------------------------------------------- | --------------------------------------------- | | Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | @@ -190,4 +190,4 @@ You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs ### Launchpad -Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/main/charts/graph-network-indexer) diff --git a/website/pages/de/tokenomics.mdx b/website/pages/de/tokenomics.mdx index 2359a03db100..135ec1c56fb8 100644 --- a/website/pages/de/tokenomics.mdx +++ b/website/pages/de/tokenomics.mdx @@ -1,25 +1,25 @@ --- title: Tokenomics des The Graph Netzwerks -description: The Graph Network wird durch leistungsstarke Tokenomics unterstützt. Hier ist, wie GRT, The Graph's eigener Work Utility Token funktioniert. +description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. --- -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +## Overview -- GRT-Token-Adresse auf Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. -The Graph ist ein dezentrales Protokoll, das einen einfachen Zugang zu Blockchain-Daten ermöglicht. +## Besonderheiten -Es ähnelt einem B2B2C-Modell, nur dass es von einem dezentralen Netzwerk von Teilnehmern betrieben wird. 
Die Netzwerkteilnehmer arbeiten zusammen, um den Endnutzern Daten im Austausch für GRT-Belohnungen zur Verfügung zu stellen. GRT ist der Arbeits-Utility-Token, der Datenanbieter und -nutzer koordiniert. GRT dient als Dienstprogramm zur Koordinierung von Datenanbietern und -nachfragern innerhalb des Netzwerks und schafft Anreize für die Protokollteilnehmer, Daten effektiv zu organisieren. +The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. -Durch die Verwendung von The Graph können Nutzer einfach auf Daten aus der Blockchain zugreifen und zahlen nur für die spezifischen Informationen, die sie benötigen. The Graph wird heute von vielen [populären Dapps](https://thegraph.com/explorer) im web3-Ökosystem verwendet. +The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/billing/). -The Graph indexiert Blockchain-Daten ähnlich wie Google das Web. Es kann sogar sein, dass Sie The Graph bereits nutzen, ohne es zu merken. Wenn Sie sich das Frontend einer Dapp angesehen haben, die ihre Daten aus einem Subgraph bezieht, haben Sie Daten aus einem Subgraph abgefragt! +- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -The Graph spielt eine entscheidende Rolle, wenn es darum geht, Blockchain-Daten besser zugänglich zu machen und einen Marktplatz für deren Austausch zu schaffen. +- GRT-Token-Adresse auf Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## Die Rollen der Teilnehmer des Netzwerks +## The Roles of Network Participants -Es gibt vier primäre Netzwerkteilnehmer: +There are four primary network participants: 1. Delegatoren - Delegieren Sie GRT an Indexer & sichern Sie das Netzwerk @@ -29,82 +29,74 @@ Es gibt vier primäre Netzwerkteilnehmer: 4. Indexer - Das Rückgrat der Blockchain-Daten -Fischer und Schiedsrichter tragen auch durch andere Beiträge zum Erfolg des Netzwerks bei und unterstützen die Arbeit der anderen Hauptakteure. Weitere Informationen über die Rollen im Netzwerk finden Sie in diesem Artikel[Lesen Sie diesen Artikel](https://thegraph.com/blog/the-graph-grt-token-economics/). +Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). -![Diagramm zur Tokenomik](/img/updated-tokenomics-image.png) +![Tokenomics diagram](/img/updated-tokenomics-image.png) -## Delegatoren (verdienen passiv GRT) +## Delegators (Passively earn GRT) -Indexer werden von Delegatoren mit GRT betraut, die den Anteil des Indexers an den Subgraphen im Netzwerk erhöhen. Im Gegenzug verdienen die Delegatoren einen Prozentsatz aller Abfragegebühren und Indexierungsbelohnungen von den Indexern. Jeder Indexer legt den Anteil, den er an die Delegatoren vergütet, selbständig fest, wodurch ein Wettbewerb zwischen den Indexern entsteht, um Delegatoren anzuziehen. 
Die meisten Indexierer bieten zwischen 9-12% jährlich. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. -Wenn zum Beispiel ein Delegator 15.000 BRT an einen Indexer delegiert, der 10 % anbietet, würde der Delegator jährlich ~1500 GRT an Belohnungen erhalten. +For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. -Es gibt eine Delegationssteuer von 0,5 %, die jedes Mal erhoben wird, wenn ein Delegator GRT an das Netzwerk delegiert. Wenn ein Delegator beschließt, sein delegiertes GRT zurückzuziehen, muss er die 28-Epochen-Frist abwarten, in der die Bindung aufgehoben wird. Jede Epoche besteht aus 6.646 Blöcken, was bedeutet, dass 28 Epochen ungefähr 26 Tagen entsprechen. +There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. -Wenn Sie dies lesen, können Sie sofort Delegator werden, indem Sie auf die [Netzwerkteilnehmerseite](https://thegraph.com/explorer/participants/indexers) gehen und GRT an einen Indexer Ihrer Wahl delegieren. +If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. -## Kuratoren (verdienen GRT) +## Curators (Earn GRT) -Kuratoren identifizieren qualitativ hochwertige Untergraphen und "kuratieren" sie (d.h. signalisieren GRT auf ihnen), um Kurationsanteile zu verdienen, die einen Prozentsatz aller zukünftigen Abfragegebühren garantieren, die durch den Untergraphen generiert werden. Obwohl jeder unabhängige Netzwerkteilnehmer ein Kurator sein kann, gehören die Entwickler von Subgraphen in der Regel zu den ersten Kuratoren für ihre eigenen Subgraphen, da sie sicherstellen wollen, dass ihr Subgraph indiziert wird. +Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. -As of April 11th, 2024, subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Kuratoren zahlen eine Kurationssteuer von 1%, wenn sie einen neuen Untergraphen kuratieren. Diese Kuratierungssteuer wird verbrannt, wodurch das Angebot an GRT sinkt. +Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. -## Entwickler +## Developers -Entwickler erstellen Subgraphen und fragen sie ab, um Blockchain-Daten abzurufen. 
Da Subgraphen quelloffen sind, können Entwickler bestehende Subgraphen abfragen, um Blockchain-Daten in ihre Dapps zu laden. Entwickler zahlen für Abfragen, die sie in GRT machen, das an die Netzwerkteilnehmer verteilt wird. +Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. ### Erstellung eines Untergraphen -Entwickler können [einen Subgraph](/developing/creating-a-subgraph/) erstellen, um Daten auf der Blockchain zu indizieren. Subgraphen sind Anweisungen für Indexer darüber, welche Daten an Verbraucher geliefert werden sollen. +Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Sobald Entwickler ihren Subgraphen erstellt und getestet haben, können sie [ihren Subgraphen](/publishing/publishing-a-subgraph/) im dezentralen Netzwerk von The Graph veröffentlichen. +Once developers have built and tested their subgraph, they can [publish their subgraph](/publishing/publishing-a-subgraph/) on The Graph's decentralized network. ### Abfrage eines vorhandenen Untergraphen Once a subgraph is [published](/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. -Subgraphen werden [mit GraphQL](/querying/querying-the-graph/) abgefragt, und die Abfragegebühren werden mit GRT in [Subgraph Studio](https://thegraph.com/studio/) bezahlt. Die Abfragegebühren werden an die Netzwerkteilnehmer auf der Grundlage ihrer Beiträge zum Protokoll verteilt. - -1 % der an das Netzwerk gezahlten Abfragegebühren werden verbrannt. - -## Indexierer (verdienen GRT) - -Indexer sind das Rückgrat von The Graph. Sie betreiben unabhängige Hardware und Software, die das dezentrale Netzwerk von The Graph antreiben. Indexer liefern Daten an Verbraucher auf der Grundlage von Anweisungen von Untergraphen. - -Indexierer können auf zwei Arten GRT-Belohnungen verdienen: +Subgraphs are [queried using GraphQL](/querying/querying-the-graph/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. -1. Abfragegebühren: GRT, die von Entwicklern oder Nutzern für Abfragen von Subgraphen-Daten gezahlt werden. Die Abfragegebühren werden gemäß der exponentiellen Rabattfunktion (siehe GIP [hier](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)) direkt an die Indexierer verteilt. +1% of the query fees paid to the network are burned. -2. Indexierungsprämien: Die jährliche Ausgabe von 3 % wird an die Indexierer auf der Grundlage der Anzahl der von ihnen indexierten Untergraphen verteilt. Diese Belohnungen sind ein Anreiz für Indexer, Untergraphen zu indexieren, gelegentlich bevor die Abfragegebühren beginnen, um Proofs of Indexing (POIs) zu sammeln und einzureichen, die bestätigen, dass sie Daten korrekt indexiert haben. +## Indexers (Earn GRT) -Jedem Untergraphen wird ein Teil der gesamten Netzwerk-Token-Ausgabe zugeteilt, basierend auf der Höhe des Kurationssignals des Untergraphen. Dieser Betrag wird dann an die Indexer auf der Grundlage ihres zugewiesenen Anteils an dem Subgraphen vergütet. 
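As a back-of-the-envelope illustration of that two-step split (issuance apportioned to a subgraph in proportion to its curation signal, then to each Indexer in proportion to their allocated stake), consider the sketch below. The numbers and the simple pro-rata formula are illustrative only; the protocol's actual reward calculation has more moving parts.

```typescript
// Illustrative pro-rata split, not the protocol's exact reward formula.
function indexerRewardShare(
  issuanceForPeriod: number, // total new GRT issued in the period
  subgraphSignal: number, // GRT signaled on this subgraph
  totalSignal: number, // GRT signaled across all subgraphs
  indexerAllocation: number, // this Indexer's stake allocated to the subgraph
  totalAllocation: number // all stake allocated to the subgraph
): number {
  const subgraphRewards = issuanceForPeriod * (subgraphSignal / totalSignal)
  return subgraphRewards * (indexerAllocation / totalAllocation)
}

// Example: 1,000 GRT issued, the subgraph holds 5% of all signal,
// and the Indexer provides 20% of the stake allocated to it: ~10 GRT.
console.log(indexerRewardShare(1_000, 5_000, 100_000, 20_000, 100_000))
```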
+Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. -Um einen Indexierungs-Knoten zu betreiben, müssen Indexer 100.000 GRT oder mehr in das Netzwerk einbringen. Für Indexer besteht ein Anreiz, GRT im Verhältnis zur Anzahl der von ihnen bearbeiteten Abfragen einzusetzen. +Indexers can earn GRT rewards in two ways: -Indexer können ihre GRT-Zuteilungen auf Untergraphen erhöhen, indem sie GRT-Delegierung von Delegatoren akzeptieren, und sie können bis zum 16-fachen ihres ursprünglichen Einsatzes akzeptieren. Wenn ein Indexer "überdelegiert" wird (d.h. mehr als das 16-fache seines ursprünglichen Einsatzes), kann er die zusätzlichen GRT von Delegatoren nicht nutzen, bis er seinen Einsatz im Netzwerk erhöht. +1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -Die Höhe der Belohnungen, die ein Indexer erhält, kann je nach anfänglichem Einsatz, akzeptierter Delegation, Qualität des Dienstes und vielen weiteren Faktoren variieren. Das folgende Diagramm ist ein öffentlich zugängliches Diagramm eines aktiven Indexers im dezentralen Netzwerk von The Graph. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -### Der Indexer Einsatz & Belohnung von allnodes-com.eth +Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. -![Indexierung von Einsatz und Belohnungen](/img/indexing-stake-and-income.png) +In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Diese Daten beziehen sich auf den Zeitraum von Februar 2021 bis September 2022. +Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. -> Bitte beachten Sie, dass sich diese Situation verbessern wird, wenn die [Arbitrum-Migration](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551) abgeschlossen ist, so dass die Gaskosten für die Teilnehmer des Netzes eine deutlich geringere Belastung darstellen. +The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. -## Token-Versorgung: Burning & Ausgabe +## Token Supply: Burning & Issuance -Das anfängliche Token-Angebot beträgt 10 Milliarden GRT, mit einem Ziel von 3 % Neuemissionen pro Jahr, um Indexer für die Zuweisung von Anteilen an Subgraphen zu belohnen. 
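The self-stake and delegation rules above (a 100,000 GRT minimum self-stake and a delegation ceiling of 16 times that self-stake) reduce to a small amount of arithmetic. Here is a sketch of that check, using illustrative numbers rather than protocol code:

```typescript
// Sketch of the over-delegation rule described above.
const MIN_SELF_STAKE = 100_000;   // GRT minimum to run an indexing node
const MAX_DELEGATION_RATIO = 16;  // delegation usable up to 16x self-stake

interface IndexerStake {
  selfStake: number; // GRT staked by the Indexer themselves
  delegated: number; // GRT delegated to them by Delegators
}

function delegationStatus({ selfStake, delegated }: IndexerStake) {
  const meetsMinimum = selfStake >= MIN_SELF_STAKE;
  const capacity = selfStake * MAX_DELEGATION_RATIO;
  return {
    meetsMinimum,
    capacity,
    usableDelegation: Math.min(delegated, capacity),
    overDelegated: delegated > capacity, // extra GRT sits idle until self-stake grows
  };
}

// Example: 150k GRT of self-stake can put at most 2.4M GRT of delegation to work.
console.log(delegationStatus({ selfStake: 150_000, delegated: 3_000_000 }));
```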
Das bedeutet, dass das Gesamtangebot an GRT-Token jedes Jahr um 3 % steigen wird, da neue Token an Indexer für ihren Beitrag zum Netzwerk ausgegeben werden. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph ist mit mehreren Brennmechanismen ausgestattet, um die Ausgabe neuer Token auszugleichen. Ungefähr 1 % des GRT-Angebots wird jährlich durch verschiedene Aktivitäten im Netzwerk verbrannt, und diese Zahl steigt, da die Netzwerkaktivität weiter zunimmt. Zu diesen Burning-Aktivitäten gehören eine Delegationssteuer von 0,5 %, wenn ein Delegator GRT an einen Indexer delegiert, eine Kurationssteuer von 1 %, wenn Kuratoren ein Signal auf einem Untergraphen geben, und 1 % der Abfragegebühren für Blockchain-Daten. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and 1% of query fees for blockchain data. -![Verbrannte GRT insgesamt](/img/total-burned-grt.jpeg) +![Total burned GRT](/img/total-burned-grt.jpeg) -Zusätzlich zu diesen regelmäßig stattfindenden Burning-Aktivitäten verfügt der GRT-Token auch über einen Slashing-Mechanismus, um böswilliges oder unverantwortliches Verhalten von Indexern zu bestrafen. Wenn ein Indexer geslashed wird, werden 50% seiner Indexierungsbelohnungen für die Epoche verbrannt (während die andere Hälfte an den Fischer geht), und sein Eigenanteil wird um 2,5% gekürzt, wobei die Hälfte dieses Betrags verbrannt wird. Dies trägt dazu bei, dass Indexer einen starken Anreiz haben, im besten Interesse des Netzwerks zu handeln und zu dessen Sicherheit und Stabilität beizutragen. +In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. -## Verbesserung des Protokolls +## Improving the Protocol -Das The Graph Network entwickelt sich ständig weiter, und es werden laufend Verbesserungen an der wirtschaftlichen Gestaltung des Protokolls vorgenommen, um allen Netzwerkteilnehmern die bestmögliche Erfahrung zu bieten. DerThe Graph-Rat überwacht die Protokolländerungen, und die Mitglieder der Community sind aufgerufen, sich daran zu beteiligen. Beteiligen Sie sich an der Verbesserung des Protokolls im [Das Graph Forum](https://forum.thegraph.com/). +The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate.
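Putting the issuance and burn figures above side by side shows the net effect on supply. The rough sketch below treats the 3% issuance target and the ~1% annual burn as constants, even though both actually vary with network activity:

```typescript
// Rough supply sketch: 10B GRT initial supply, ~3% issued and ~1% burned per year.
// Both rates vary with network activity; constants are used here only to
// illustrate the net ~2% drift the section describes.

const initialSupply = 10_000_000_000; // GRT
const issuanceRate = 0.03;
const burnRate = 0.01;

let supply = initialSupply;
for (let year = 1; year <= 3; year++) {
  supply += supply * issuanceRate - supply * burnRate;
  console.log(`Year ${year}: ~${(supply / 1e9).toFixed(2)}B GRT`);
}
```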
Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). diff --git a/website/pages/es/about.mdx b/website/pages/es/about.mdx index c745dcedd131..8b1a092a77b5 100644 --- a/website/pages/es/about.mdx +++ b/website/pages/es/about.mdx @@ -2,46 +2,66 @@ title: Acerca de The Graph --- -En esta página se explica qué es The Graph y cómo puedes empezar a utilizarlo. - ## Que es The Graph? -The Graph es un protocolo descentralizado que permite indexar y consultar datos de la blockchain. The Graph permite consultar datos los cuales son difíciles de consultar directamente. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. + +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Los proyectos con contratos inteligentes complejos como [Uniswap](https://uniswap.org/) y las iniciativas de NFTs como [Bored Ape Yacht Club](https://boredapeyachtclub.com/) almacenan los datos en la blockchain de Ethereum, lo que hace realmente difícil leer algo más que los datos básicos directamente desde la blockchain. 
+## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -También podrías crear tu propio servidor, procesar las transacciones allí, guardarlas en una base de datos y construir un punto de conexión de API encima de todo eso para consultar los datos. Sin embargo, esta opción [requiere muchos recursos](/network/benefits/), necesita mantenimiento, presenta un único punto de fallo y compromete las propiedades de seguridad importantes necesarias para la descentralización. +### How The Graph Functions -**Indexar los datos de la blockchain es muy, muy difícil.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## ¿Cómo funciona The Graph? +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. 
-The Graph aprende, qué y cómo indexar los datos de Ethereum, basándose en las descripciones de los subgrafos, conocidas como el manifiesto de los subgrafos. La descripción del subgrafo define los contratos inteligentes de interés para este subgrafo, los eventos en esos contratos a los que prestar atención, y cómo mapear los datos de los eventos a los datos que The Graph almacenará en su base de datos. +- When creating a subgraph, you need to write a subgraph manifest. -Una vez que has escrito el `subgraph manifest`, utilizas el CLI de The Graph para almacenar la definición en IPFS y decirle al indexador que empiece a indexar los datos de ese subgrafo. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -Este diagrama ofrece más detalles sobre el flujo de datos una vez que se ha deployado en el manifiesto para un subgrafo, que trata de las transacciones en Ethereum: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![Un gráfico explicando como The Graph usa Graph Node para servir consultas a los consumidores de datos](/img/graph-dataflow.png) El flujo sigue estos pasos: -1. Una aplicación descentralizada (dapp) añade datos a Ethereum a través de una transacción en un contrato inteligente. -2. El contrato inteligente emite uno o más eventos mientras procesa la transacción. -3. Graph Node escanea continuamente la red de Ethereum en busca de nuevos bloques y los datos de tu subgrafo que puedan contener. -4. Graph Node encuentra los eventos de la red Ethereum, a fin de proveerlos en tu subgrafo mediante estos bloques y ejecuta los mapping handlers que proporcionaste. El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. -5. La dapp consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. La dapp muestra estos datos en una interfaz muy completa para el usuario, a fin de que los end users que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. El ciclo se repite. +1. Una aplicación descentralizada (dapp) añade datos a Ethereum a través de una transacción en un contrato inteligente. +2. El contrato inteligente emite uno o más eventos mientras procesa la transacción. +3. Graph Node escanea continuamente la red de Ethereum en busca de nuevos bloques y los datos de tu subgrafo que puedan contener. +4. Graph Node encuentra los eventos de la red Ethereum, a fin de proveerlos en tu subgrafo mediante estos bloques y ejecuta los mapping handlers que proporcionaste. El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. +5. La dapp consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. 
La dapp muestra estos datos en una interfaz muy completa para el usuario, a fin de que los end users que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. El ciclo se repite. ## Próximos puntos -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/es/arbitrum/arbitrum-faq.mdx b/website/pages/es/arbitrum/arbitrum-faq.mdx index 2b8812590990..b418ae4af15c 100644 --- a/website/pages/es/arbitrum/arbitrum-faq.mdx +++ b/website/pages/es/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Preguntas frecuentes sobre Arbitrum Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. -## ¿Por qué The Graph está implementando una solución L2? +## Why did The Graph implement an L2 Solution? -Al escalar The Graph en L2, los participantes de la red pueden esperar: +By scaling The Graph on L2, network participants can now benefit from: - Upwards of 26x savings on gas fees @@ -14,7 +14,7 @@ Al escalar The Graph en L2, los participantes de la red pueden esperar: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -41,27 +41,21 @@ Para aprovechar el uso de The Graph en L2, usa este conmutador desplegable para ## Como developer de subgrafos, consumidor de datos, Indexador, Curador o Delegador, ¿qué debo hacer ahora? -There is no immediate action required, however, network participants are encouraged to begin moving to Arbitrum to take advantage of the benefits of L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. 
Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Core developer teams are working to create L2 transfer tools that will make it significantly easier to move delegation, curation, and subgraphs to Arbitrum. Network participants can expect L2 transfer tools to be available by summer of 2023. +All indexing rewards are now entirely on Arbitrum. -A partir del 10 de abril de 2023, el 5% de todas las recompensas de indexación se están generando en Arbitrum. A medida que aumenta la participación en la red, y según lo apruebe el Council, las recompensas de indexación se desplazarán gradualmente de Ethereum a Arbitrum, moviéndose eventualmente por completo a Arbitrum. - -## Si me gustaría participar en la red en L2, ¿qué debo hacer? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## ¿Existe algún riesgo asociado con escalar la red a L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## ¿Seguirán funcionando los subgrafos existentes en Ethereum? +## Are existing subgraphs on Ethereum working? -Sí, los contratos de The Graph Network operarán en paralelo tanto en Ethereum como en Arbitrum hasta que pasen completamente a Arbitrum en una fecha posterior. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## ¿GRT tendrá un nuevo contrato inteligente implementado en Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. diff --git a/website/pages/es/billing.mdx b/website/pages/es/billing.mdx index e73e074a8b44..604244a22148 100644 --- a/website/pages/es/billing.mdx +++ b/website/pages/es/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Haz clic en el botón "Conectar wallet" en la esquina superior derecha de la página. Serás redirigido a la página de selección de wallet. Selecciona tu wallet y haz clic en "Conectar". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. 
Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. 
A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/es/chain-integration-overview.mdx b/website/pages/es/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/es/chain-integration-overview.mdx +++ b/website/pages/es/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/es/cookbook/arweave.mdx b/website/pages/es/cookbook/arweave.mdx index 5c09b46e3e9a..7c07a486f887 100644 --- a/website/pages/es/cookbook/arweave.mdx +++ b/website/pages/es/cookbook/arweave.mdx @@ -105,7 +105,7 @@ La definición de esquema describe la estructura de la base de datos de subgrafo Los handlers para procesar eventos están escritos en [AssemblyScript](https://www.assemblyscript.org/). -La indexación de Arweave introduce tipos de datos específicos de Arweave en la [API de AssemblyScript](/developing/graph-ts/api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -155,7 +155,7 @@ Escribir los mappings de un subgrafo de Arweave es muy similar a escribir los ma Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash -graph deploy --studio --access-token +graph deploy --access-token ``` ## Consultando un subgrafo de Arweave diff --git a/website/pages/es/cookbook/avoid-eth-calls.mdx b/website/pages/es/cookbook/avoid-eth-calls.mdx index 446b0e8ecd17..8897ecdbfdc7 100644 --- a/website/pages/es/cookbook/avoid-eth-calls.mdx +++ b/website/pages/es/cookbook/avoid-eth-calls.mdx @@ -99,4 +99,18 @@ Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0 ## Conclusion -We can significantly improve indexing performance by minimizing or eliminating `eth_calls` in our subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/es/cookbook/cosmos.mdx b/website/pages/es/cookbook/cosmos.mdx index 5d931ce535aa..fe36514f6de9 100644 --- a/website/pages/es/cookbook/cosmos.mdx +++ b/website/pages/es/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and Los controladores para procesar eventos están escritos en [AssemblyScript](https://www.assemblyscript.org/). -La indexación de Cosmos introduce tipos de datos específicos de Cosmos en la [AssemblyScript API](/developing/graph-ts/api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -203,7 +203,7 @@ Una vez que se haya creado su subgrafo, puede implementar su subgrafo usando el Visit the Subgraph Studio to create a new subgraph. ```bash -graph deploy --studio subgraph-name +graph deploy subgraph-name ``` **Local Graph Node (based on default configuration):** diff --git a/website/pages/es/cookbook/derivedfrom.mdx b/website/pages/es/cookbook/derivedfrom.mdx index 69dd48047744..09ba62abde3f 100644 --- a/website/pages/es/cookbook/derivedfrom.mdx +++ b/website/pages/es/cookbook/derivedfrom.mdx @@ -69,6 +69,20 @@ This will not only make our subgraph more efficient, but it will also unlock thr ## Conclusion -Adopting the `@derivedFrom` directive in subgraphs effectively handles dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. -To learn more detailed strategies to avoid large arrays, read this blog from Kevin Jones: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/es/cookbook/enums.mdx b/website/pages/es/cookbook/enums.mdx index a10970c1539f..29b5b2d0bf38 100644 --- a/website/pages/es/cookbook/enums.mdx +++ b/website/pages/es/cookbook/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## Recursos Adicionales For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
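As a companion to the `@derivedFrom` guidance above, the fragment below sketches what the mapping side looks like when a parent's array is derived instead of stored: the child entity records a reference to its parent and no array is ever appended to. This is a sketch only, assuming a schema in which `Account` declares `transfers: [Transfer!]! @derivedFrom(field: "account")`; the `Account`/`Transfer` entities and the generated imports are stand-ins.

```typescript
// Illustrative AssemblyScript mapping for the @derivedFrom practice above.
// Assumes generated classes for stand-in `Account` and `Transfer` entities
// keyed by Bytes IDs; paths and names are hypothetical.
import { Account, Transfer } from "../generated/schema";
import { Transfer as TransferEvent } from "../generated/Token/Token";

export function handleTransfer(event: TransferEvent): void {
  const accountId = event.params.to;
  let account = Account.load(accountId);
  if (account == null) {
    account = new Account(accountId);
    account.save();
  }

  // The child stores a reference to its parent; the parent's `transfers`
  // array is derived at query time, so no array field is ever mutated.
  const transfer = new Transfer(
    event.transaction.hash.concatI32(event.logIndex.toI32())
  );
  transfer.account = accountId;
  transfer.value = event.params.value;
  transfer.save();
}
```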
diff --git a/website/pages/es/cookbook/grafting-hotfix.mdx b/website/pages/es/cookbook/grafting-hotfix.mdx index 4be0a0b07790..61da49e08f7b 100644 --- a/website/pages/es/cookbook/grafting-hotfix.mdx +++ b/website/pages/es/cookbook/grafting-hotfix.mdx @@ -1,12 +1,12 @@ --- -Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment --- ## TLDR Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. -### Overview +### Descripción This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. @@ -164,7 +164,7 @@ Grafting is an effective strategy for deploying hotfixes in subgraph development However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. -## Additional Resources +## Recursos Adicionales - **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. @@ -173,14 +173,14 @@ By incorporating grafting into your subgraph development workflow, you can enhan ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/es/cookbook/grafting.mdx b/website/pages/es/cookbook/grafting.mdx index 30df42fca0a1..212250c06428 100644 --- a/website/pages/es/cookbook/grafting.mdx +++ b/website/pages/es/cookbook/grafting.mdx @@ -22,7 +22,7 @@ Para más información, puedes consultar: - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -En este tutorial vamos a cubrir un caso de uso básico. Reemplazaremos un contrato existente con un contrato idéntico (con una nueva dirección, pero el mismo código). Luego, haremos grafting del subgrafo existente en el subgrafo "base" que rastrea el nuevo contrato. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). 
Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ En este tutorial vamos a cubrir un caso de uso básico. Reemplazaremos un contra ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - La fuente de datos de `Lock` es el ABI y la dirección del contrato que obtendremos cuando compilemos y realicemos el deploy del contrato -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - La sección de `mapeo` define los disparadores de interés y las funciones que deben ejecutarse en respuesta a esos disparadores. En este caso, estamos escuchando el evento `Withdrawal` y llamando a la función `handleWithdrawal` cuando se emite. ## Definición del manifiesto de grafting @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. ## Recursos Adicionales -Si quieres tener más experiencia con el grafting, aquí tienes algunos ejemplos de contratos populares: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/es/cookbook/immutable-entities-bytes-as-ids.mdx b/website/pages/es/cookbook/immutable-entities-bytes-as-ids.mdx index f38c33385604..541212617f9f 100644 --- a/website/pages/es/cookbook/immutable-entities-bytes-as-ids.mdx +++ b/website/pages/es/cookbook/immutable-entities-bytes-as-ids.mdx @@ -174,3 +174,17 @@ Query Response: Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/es/cookbook/near.mdx b/website/pages/es/cookbook/near.mdx index 2d31f4a2c31b..f915ec580057 100644 --- a/website/pages/es/cookbook/near.mdx +++ b/website/pages/es/cookbook/near.mdx @@ -37,7 +37,7 @@ Hay tres aspectos de la definición de subgrafo: **schema.graphql:** un archivo de esquema que define qué datos se almacenan para su subgrafo y cómo consultarlos a través de GraphQL. Los requisitos para los subgrafos NEAR están cubiertos por [la documentación existente](/developing/creating-a-subgraph#the-graphql-schema). -**Asignaciones de AssemblyScript:** [Código de AssemblyScript](/developing/graph-ts/api) que traduce los datos del evento a las entidades definidas en su esquema. La compatibilidad con NEAR introduce tipos de datos específicos de NEAR y una nueva funcionalidad de análisis de JSON. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. Durante el desarrollo del subgrafo hay dos comandos clave: @@ -98,7 +98,7 @@ La definición de esquema describe la estructura de la base de datos de subgrafo Los handlers para procesar eventos están escritos en [AssemblyScript](https://www.assemblyscript.org/). -La indexación NEAR introduce tipos de datos específicos de NEAR en la [API de AssemblyScript](/developing/graph-ts/api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ Estos tipos se pasan a block & handlers de recibos: - Los handlers de bloques recibirán un `Block` - Los handlers de recibos recibirán un `ReceiptWithOutcome` -De lo contrario, el resto de la [API de AssemblyScript](/developing/graph-ts/api) está disponible para los desarrolladores de subgrafos NEAR durante la ejecución del mapeo. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -Esto incluye una nueva función de análisis de JSON: los registros en NEAR se emiten con frecuencia como JSON en cadena. Una nueva función `json.fromString(...)` está disponible como parte de la [API JSON](/developing/graph-ts/api#json-api) para permitir a los desarrolladores para procesar fácilmente estos registros. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Deployando un subgrafo NEAR @@ -194,8 +194,8 @@ La configuración del nodo dependerá de dónde se implemente el subgrafo. 
### Subgraph Studio ```sh -graph auth --studio -graph deploy --studio +graph auth +graph deploy ``` ### Graph Node Local (basado en la configuración predeterminada) diff --git a/website/pages/es/cookbook/pruning.mdx b/website/pages/es/cookbook/pruning.mdx index f22a2899f1de..d86bf50edf42 100644 --- a/website/pages/es/cookbook/pruning.mdx +++ b/website/pages/es/cookbook/pruning.mdx @@ -39,3 +39,17 @@ dataSources: ## Conclusion Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/es/cookbook/subgraph-uncrashable.mdx b/website/pages/es/cookbook/subgraph-uncrashable.mdx index d6ab2b8a0878..d7a39a67df81 100644 --- a/website/pages/es/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/es/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Generador de código de subgrafo seguro - El marco también incluye una forma (a través del archivo de configuración) para crear funciones de establecimiento personalizadas, pero seguras, para grupos de variables de entidad. De esta forma, es imposible que el usuario cargue/utilice una entidad gráfica obsoleta y también es imposible olvidarse de guardar o configurar una variable requerida por la función. -- Los registros de advertencia se registran como registros que indican donde hay una infracción de la lógica del subgrafo para ayudar a solucionar el problema y garantizar la precisión de los datos. Estos registros se pueden ver en el servicio alojado de The Graph en la sección 'Registros'. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable se puede ejecutar como un indicador opcional mediante el comando codegen Graph CLI. diff --git a/website/pages/es/cookbook/timeseries.mdx b/website/pages/es/cookbook/timeseries.mdx index 88ee70005a6e..8eaed50b0ea3 100644 --- a/website/pages/es/cookbook/timeseries.mdx +++ b/website/pages/es/cookbook/timeseries.mdx @@ -6,7 +6,7 @@ title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggr Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. -## Overview +## Descripción Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. @@ -44,7 +44,7 @@ A timeseries entity represents raw data points collected over time. It is define - `id`: Must be of type `Int8!` and is auto-incremented. - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. -Example: +Ejemplo: ```graphql type Data @entity(timeseries: true) { @@ -61,7 +61,7 @@ An aggregation entity computes aggregated values from a timeseries source. 
It is - Annotation Arguments: - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). -Example: +Ejemplo: ```graphql type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { @@ -77,7 +77,7 @@ In this example, Stats aggregates the price field from Data over hourly and dail Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. -Example: +Ejemplo: ```graphql { @@ -101,7 +101,7 @@ Example: Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. -Example: +Ejemplo: ### Timeseries Entity @@ -181,14 +181,14 @@ By adopting this pattern, developers can build more efficient and scalable subgr ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/es/cookbook/transfer-to-the-graph.mdx b/website/pages/es/cookbook/transfer-to-the-graph.mdx index 287cd7d81b4b..d86f6f31fc62 100644 --- a/website/pages/es/cookbook/transfer-to-the-graph.mdx +++ b/website/pages/es/cookbook/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -48,7 +48,7 @@ graph init --product subgraph-studio In The Graph CLI, use the auth command seen in Subgraph Studio: ```sh -graph auth --studio +graph auth ``` ## 2. Deploy Your Subgraph to Studio @@ -58,7 +58,7 @@ If you have your source code, you can easily deploy it to Studio. 
If you don't h In The Graph CLI, run the following command: ```sh -graph deploy --studio --ipfs-hash +graph deploy --ipfs-hash ``` @@ -74,7 +74,7 @@ graph deploy --studio --ipfs-hash You can start [querying](/querying/querying-the-graph/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. -#### Example +#### Ejemplo [CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### Recursos Adicionales - To quickly create and publish a new subgraph, check out the [Quick Start](/quick-start/). - To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). diff --git a/website/pages/es/deploying/deploy-using-subgraph-studio.mdx b/website/pages/es/deploying/deploy-using-subgraph-studio.mdx index 502169b4ccfa..1ea90990a2d9 100644 --- a/website/pages/es/deploying/deploy-using-subgraph-studio.mdx +++ b/website/pages/es/deploying/deploy-using-subgraph-studio.mdx @@ -12,13 +12,13 @@ In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: - View a list of subgraphs you've created - Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- Crear y gestionar sus claves API para subgrafos específicos - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph through the Studio UI -- Deploy your subgraph using the The Graph CLI +- Create your subgraph +- Deploy your subgraph using The Graph CLI - Test your subgraph in the playground environment - Integrate your subgraph in staging using the development query URL -- Publish your subgraph with the Studio UI +- Publish your subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -27,21 +27,19 @@ Before deploying, you must install The Graph CLI. You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use The Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -**Install with yarn:** +### Install with yarn ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +### Install with npm ```bash npm install -g @graphprotocol/graph-cli ``` -## Create Your Subgraph - -Before deploying your subgraph you need to create an account in [Subgraph Studio](https://thegraph.com/studio/). +## Comenzar 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. @@ -55,16 +53,16 @@ Before deploying your subgraph you need to create an account in [Subgraph Studio -> For additional written detail, review the [Quick-Start](/quick-start/). +> For additional written detail, review the [Quick Start](/quick-start/). 
-### Subgraph Compatibility with The Graph Network +### Compatibilidad de los Subgrafos con The Graph Network In order to be supported by Indexers on The Graph Network, subgraphs must: - Index a [supported network](/developing/supported-networks) -- Must not use any of the following features: +- No debe utilizar ninguna de las siguientes funciones: - ipfs.cat & ipfs.map - - Non-fatal errors + - Errores no fatales - Grafting ## Initialize Your Subgraph @@ -72,7 +70,7 @@ In order to be supported by Indexers on The Graph Network, subgraphs must: Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash -graph init --studio +graph init ``` You can find the `` value on your subgraph details page in Subgraph Studio, see image below: @@ -83,24 +81,24 @@ After running `graph init`, you will be asked to input the contract address, net ## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to login into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. Then, use the following command to authenticate from the CLI: ```bash -graph auth --studio +graph auth ``` ## Deploying a Subgraph Once you are ready, you can deploy your subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. Use the following CLI command to deploy your subgraph: ```bash -graph deploy --studio +graph deploy ``` After running this command, the CLI will ask for a version label. @@ -126,11 +124,11 @@ If you want to update your subgraph, you can do the following: - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). - This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an on-chain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. > Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. 
For more information, please read more [here](/network/curating/). -## Automatic Archiving of Subgraph Versions +## Archivado Automático de Versiones de Subgrafos Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. diff --git a/website/pages/es/deploying/multiple-networks.mdx b/website/pages/es/deploying/multiple-networks.mdx index dc2b8e533430..276e10f5d0d4 100644 --- a/website/pages/es/deploying/multiple-networks.mdx +++ b/website/pages/es/deploying/multiple-networks.mdx @@ -4,9 +4,9 @@ title: Deploying a Subgraph to Multiple Networks This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). -## Deploying the subgraph to multiple networks +## Desplegando el subgráfo en múltiples redes -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +En algunos casos, querrás desplegar el mismo subgrafo en múltiples redes sin duplicar todo su código. El principal reto que conlleva esto es que las direcciones de los contratos en estas redes son diferentes. ### Using `graph-cli` @@ -69,7 +69,7 @@ dataSources: kind: ethereum/events ``` -This is what your networks config file should look like: +Este es el aspecto que debe tener el archivo de configuración de tu red: ```json { @@ -86,7 +86,7 @@ This is what your networks config file should look like: } ``` -Now we can run one of the following commands: +Ahora podemos ejecutar uno de los siguientes comandos: ```sh # Using default networks.json file @@ -123,7 +123,7 @@ yarn deploy --network sepolia yarn deploy --network sepolia --network-file path/to/config ``` -### Using subgraph.yaml template +### Usando la plantilla subgraph.yaml One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). @@ -136,7 +136,7 @@ To illustrate this approach, let's assume a subgraph should be deployed to mainn } ``` -and +y ```json { @@ -195,7 +195,7 @@ A working example of this can be found [here](https://github.com/graphprotocol/e This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Política de archivo de subgrafos en Subgraph Studio A subgraph version in Studio is archived if and only if it meets the following criteria: @@ -205,11 +205,11 @@ A subgraph version in Studio is archived if and only if it meets the following c In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. 
-Every subgraph affected with this policy has an option to bring the version in question back. +Cada subgrafo afectado por esta política tiene una opción para recuperar la versión en cuestión. -## Checking subgraph health +## Comprobando la salud del subgrafo -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +Si un subgrafo se sincroniza con éxito, es una buena señal de que seguirá funcionando bien para siempre. Sin embargo, los nuevos activadores en la red pueden hacer que tu subgrafo alcance una condición de error no probada o puede comenzar a retrasarse debido a problemas de rendimiento o problemas con los operadores de nodos. Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: diff --git a/website/pages/es/developing/creating-a-subgraph/advanced.mdx b/website/pages/es/developing/creating-a-subgraph/advanced.mdx new file mode 100644 index 000000000000..01332c56e82f --- /dev/null +++ b/website/pages/es/developing/creating-a-subgraph/advanced.mdx @@ -0,0 +1,555 @@ +--- +title: Advance Subgraph Features +--- + +## Descripción + +Add and implement advanced subgraph features to enhanced your subgraph's built. + +Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: + +| Feature | Name | +| ---------------------------------------------------- | ---------------- | +| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | +| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | + +For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - fullTextSearch + - nonFatalErrors +dataSources: ... +``` + +> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. + +## Timeseries and Aggregations + +Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, etc. + +This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the Timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. + +### Example Schema + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} + +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! 
@aggregate(fn: "sum", arg: "price") +} +``` + +### Defining Timeseries and Aggregations + +Timeseries entities are defined with `@entity(timeseries: true)` in schema.graphql. Every timeseries entity must have a unique ID of the int8 type, a timestamp of the Timestamp type, and include data that will be used for calculation by aggregation entities. These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the Aggregation entities. + +Aggregation entities are defined with `@aggregation` in schema.graphql. Every aggregation entity defines the source from which it will gather data (which must be a Timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. + +#### Available Aggregation Intervals + +- `hour`: sets the timeseries period every hour, on the hour. +- `day`: sets the timeseries period every day, starting and ending at 00:00. + +#### Available Aggregation Functions + +- `sum`: Total of all values. +- `count`: Number of values. +- `min`: Minimum value. +- `max`: Maximum value. +- `first`: First value in the period. +- `last`: Last value in the period. + +#### Example Aggregations Query + +```graphql +{ + stats(interval: "hour", where: { timestamp_gt: 1704085200 }) { + id + timestamp + sum + } +} +``` + +Note: + +To use Timeseries and Aggregations, a subgraph must have a spec version ≥1.1.0. Note that this feature might undergo significant changes that could affect backward compatibility. + +[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. + +## Errores no fatales + +Los errores de indexación en subgrafos ya sincronizados provocarán, por defecto, que el subgrafo falle y deje de sincronizarse. Los subgrafos pueden ser configurados de manera alternativa para continuar la sincronización en presencia de errores, ignorando los cambios realizados por el handler que provocó el error. Esto da a los autores de los subgrafos tiempo para corregir sus subgrafos mientras las consultas continúan siendo servidas contra el último bloque, aunque los resultados serán posiblemente inconsistentes debido al bug que provocó el error. Nótese que algunos errores siguen siendo siempre fatales, para que el error no sea fatal debe saberse que es deterministico. + +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. + +Para activar los errores no fatales es necesario establecer el siguiente indicador en el manifiesto del subgrafo: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - nonFatalErrors + ... +``` + +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. 
It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: + +```graphql +foos(first: 100, subgraphError: allow) { + id +} + +_meta { + hasIndexingErrors +} +``` + +If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: + +```graphql +"data": { + "foos": [ + { + "id": "0xdead" + } + ], + "_meta": { + "hasIndexingErrors": true + } +}, +"errors": [ + { + "message": "indexing_error" + } +] +``` + +## IPFS/Arweave File Data Sources + +File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. + +> Esto también establece las bases para la indexación determinista de datos off-chain, así como la posible introducción de datos arbitrarios procedentes de HTTP. + +### Descripción + +Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found. + +This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources. + +> This replaces the existing `ipfs.cat` API + +### Upgrade guide + +#### Update `graph-ts` and `graph-cli` + +File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1 + +#### Añadir un nuevo tipo de entidad que se actualizará cuando se encuentren archivos + +Las fuentes de datos de archivos no pueden acceder a entidades basadas en cadenas ni actualizarlas, pero deben actualizar entidades específicas de archivos. + +Esto puede significar dividir campos de entidades existentes en entidades separadas, vinculadas entre sí. + +Entidad combinada original: + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + externalURL: String! + ipfsURI: String! + image: String! + name: String! + description: String! + type: String! + updatedAtTimestamp: BigInt + owner: User! +} +``` + +Nueva, entidad dividida: + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + ipfsURI: TokenMetadata + updatedAtTimestamp: BigInt + owner: String! +} + +type TokenMetadata @entity { + id: ID! + image: String! + externalURL: String! + name: String! + description: String! +} +``` + +Si la relación es 1:1 entre la entidad padre y la entidad fuente de datos de archivo resultante, el patrón más sencillo es vincular la entidad padre a una entidad de archivo resultante utilizando el CID IPFS como búsqueda. Pónte en contacto con nosotros en Discord si tienes dificultades para modelar tus nuevas entidades basadas en archivos! + +> You can use [nested filters](/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities. + +#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave` + +Esta es la fuente de datos que se generará cuando se identifique un archivo de interés. 
+ +```yaml +templates: + - name: TokenMetadata + kind: file/ipfs + mapping: + apiVersion: 0.0.7 + language: wasm/assemblyscript + file: ./src/mapping.ts + handler: handleMetadata + entities: + - TokenMetadata + abis: + - name: Token + file: ./abis/Token.json +``` + +> Currently `abis` are required, though it is not possible to call contracts from within file data sources + +The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details. + +#### Crear un nuevo handler para procesar archivos + +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). + +The CID of the file as a readable string can be accessed via the `dataSource` as follows: + +```typescript +const cid = dataSource.stringParam() +``` + +Ejemplo de handler: + +```typescript +import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' +import { TokenMetadata } from '../generated/schema' + +export function handleMetadata(content: Bytes): void { + let tokenMetadata = new TokenMetadata(dataSource.stringParam()) + const value = json.fromBytes(content).toObject() + if (value) { + const image = value.get('image') + const name = value.get('name') + const description = value.get('description') + const externalURL = value.get('external_url') + + if (name && image && description && externalURL) { + tokenMetadata.name = name.toString() + tokenMetadata.image = image.toString() + tokenMetadata.externalURL = externalURL.toString() + tokenMetadata.description = description.toString() + } + + tokenMetadata.save() + } +} +``` + +#### Generar fuentes de datos de archivos cuando sea necesario + +Ahora puedes crear fuentes de datos de archivos durante la ejecución de handlers basados en cadenas: + +- Import the template from the auto-generated `templates` +- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave + +For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). + +For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing). + +Ejemplo: + +```typescript +import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' + +const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' +//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. 
+ +export function handleTransfer(event: TransferEvent): void { + let token = Token.load(event.params.tokenId.toString()) + if (!token) { + token = new Token(event.params.tokenId.toString()) + token.tokenID = event.params.tokenId + + token.tokenURI = '/' + event.params.tokenId.toString() + '.json' + const tokenIpfsHash = ipfshash + token.tokenURI + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" + + token.ipfsURI = tokenIpfsHash + + TokenMetadataTemplate.create(tokenIpfsHash) + } + + token.updatedAtTimestamp = event.block.timestamp + token.owner = event.params.to.toHexString() + token.save() +} +``` + +This will create a new file data source, which will poll Graph Node's configured IPFS or Arweave endpoint, retrying if it is not found. When the file is found, the file data source handler will be executed. + +This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. + +> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file + +¡Felicitaciones, estás utilizando fuentes de datos de archivos! + +#### Deploy de tus subgrafos + +You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. + +#### Limitaciones + +Los handlers y entidades de fuentes de datos de archivos están aislados de otras entidades del subgrafo, asegurando que son deterministas cuando se ejecutan, y asegurando que no se contaminan las fuentes de datos basadas en cadenas. En concreto: + +- Las entidades creadas por File Data Sources son inmutables y no pueden actualizarse +- Los handlers de File Data Source no pueden acceder a entidades de otras fuentes de datos de archivos +- Los handlers basados en cadenas no pueden acceder a las entidades asociadas a File Data Sources + +> Aunque esta restricción no debería ser problemática para la mayoría de los casos de uso, puede introducir complejidad para algunos. Si tienes problemas para modelar tus datos basados en archivos en un subgrafo, ponte en contacto con nosotros a través de Discord! + +Además, no es posible crear fuentes de datos a partir de una File Data Source, ya sea una fuente de datos on-chain u otra File Data Source. Es posible que esta restricción se elimine en el futuro. + +#### Mejores Prácticas + +Si estás vinculando metadatos NFT a los tokens correspondientes, utiliza el hash IPFS de los metadatos para hacer referencia a una entidad Metadata desde la entidad Token. Guarda la entidad de metadatos utilizando el hash IPFS como ID. + +You can use [DataSource context](/developing/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. + +If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity. + +> Estamos trabajando para mejorar la recomendación anterior, de modo que las consultas sólo devuelvan la versión "más reciente" + +#### Problemas conocidos + +File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI. 
+ +Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file. + +#### Ejemplos + +[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) + +#### Referencias + +[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) + +## Indexed Argument Filters / Topic Filters + +> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` + +Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. + +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. + +- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. + +### How Topic Filters Work + +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. + +- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. + +```solidity +// SPDX-License-Identifier: MIT +pragma solidity ^0.8.0; + +contract Token { + // Event declaration with indexed parameters for addresses + event Transfer(address indexed from, address indexed to, uint256 value); + + // Function to simulate transferring tokens + function transfer(address to, uint256 value) public { + // Emitting the Transfer event with from, to, and value + emit Transfer(msg.sender, to, value); + } +} +``` + +In this example: + +- The `Transfer` event is used to log transactions of tokens between addresses. +- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. +- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. + +#### Configuration in Subgraphs + +Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: + +```yaml +eventHandlers: + - event: SomeEvent(indexed uint256, indexed address, indexed uint256) + handler: handleSomeEvent + topic1: ['0xValue1', '0xValue2'] + topic2: ['0xAddress1', '0xAddress2'] + topic3: ['0xValue3'] +``` + +In this setup: + +- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. +- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic. + +#### Filter Logic + +- Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic. +- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler. 
+ +#### Example 1: Tracking Direct Transfers from Address A to Address B + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleDirectedTransfer + topic1: ['0xAddressA'] # Sender Address + topic2: ['0xAddressB'] # Receiver Address +``` + +In this configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. +- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. + +#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleTransferToOrFrom + topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address + topic2: ['0xAddressB', '0xAddressC'] # Receiver Address +``` + +In this configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. +- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. + +## Declared eth_call + +> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. + +Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. + +This feature does the following: + +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Allows faster data fetching, resulting in quicker query responses and a better user experience. +- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. + +### Key Concepts + +- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. +- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously. +- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel). + +#### Scenario without Declarative `eth_calls` + +Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. + +Traditionally, these calls might be made sequentially: + +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds + +Total time taken = 3 + 2 + 4 = 9 seconds + +#### Scenario with Declarative `eth_calls` + +With this feature, you can declare these calls to be executed in parallel: + +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds + +Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. + +Total time taken = max (3, 2, 4) = 4 seconds + +#### How it Works + +1. 
Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. + +#### Example Configuration in Subgraph Manifest + +Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. + +`Subgraph.yaml` using `event.address`: + +```yaml +eventHandlers: +event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24) +handler: handleSwap +calls: + global0X128: Pool[event.address].feeGrowthGlobal0X128() + global1X128: Pool[event.address].feeGrowthGlobal1X128() +``` + +Details for the example above: + +- `global0X128` is the declared `eth_call`. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` +- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. + +`Subgraph.yaml` using `event.params` + +```yaml +calls: + - ERC20DecimalsToken0: ERC20[event.params.token0].decimals() +``` + +### Grafting sobre subgrafos existentes + +> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). + +When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. + +A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: + +```yaml +description: ... +graft: + base: Qm... # Subgraph ID of base subgraph + block: 7345624 # Block number +``` + +When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. + +Debido a que el grafting copia en lugar de indexar los datos base, es mucho más rápido llevar el subgrafo al bloque deseado que indexar desde cero, aunque la copia inicial de los datos aún puede llevar varias horas para subgrafos muy grandes. Mientras se inicializa el subgrafo grafted, Graph Node registrará información sobre los tipos de entidad que ya han sido copiados. + +El subgrafo grafteado puede utilizar un esquema GraphQL que no es idéntico al del subgrafo base, sino simplemente compatible con él. 
Tiene que ser un esquema de subgrafo válido por sí mismo, pero puede diferir del esquema del subgrafo base de las siguientes maneras: + +- Agrega o elimina tipos de entidades +- Elimina los atributos de los tipos de entidad +- Agrega atributos anulables a los tipos de entidad +- Convierte los atributos no anulables en atributos anulables +- Añade valores a los enums +- Agrega o elimina interfaces +- Cambia para qué tipos de entidades se implementa una interfaz + +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. diff --git a/website/pages/es/developing/creating-a-subgraph/assemblyscript-mappings.mdx b/website/pages/es/developing/creating-a-subgraph/assemblyscript-mappings.mdx new file mode 100644 index 000000000000..792a6521f82d --- /dev/null +++ b/website/pages/es/developing/creating-a-subgraph/assemblyscript-mappings.mdx @@ -0,0 +1,113 @@ +--- +title: Writing AssemblyScript Mappings +--- + +## Descripción + +The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. + +## Escribir Mappings + +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. + +In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: + +```javascript +import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' +import { Gravatar } from '../generated/schema' + +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let id = event.params.id + let gravatar = Gravatar.load(id) + if (gravatar == null) { + gravatar = new Gravatar(id) + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. + +The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`. + +### ID recomendados para la creación de nuevas entidades + +It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities. 
+ +- `transfer.id = event.transaction.hash` + +- `let id = event.transaction.hash.concatI32(event.logIndex.toI32())` + +- For entities that store aggregated data, for e.g, daily trade volumes, the `id` usually contains the day number. Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like + +```typescript +let dayID = event.block.timestamp.toI32() / 86400 +let id = Bytes.fromI32(dayID) +``` + +- Convert constant addresses to `Bytes`. + +`const id = Bytes.fromHexString('0xdead...beef')` + +There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`. + +### Handling of entities with identical IDs + +When creating and saving a new entity, if an entity with the same ID already exists, the properties of the new entity are always preferred during the merge process. This means that the existing entity will be updated with the values from the new entity. + +If a null value is intentionally set for a field in the new entity with the same ID, the existing entity will be updated with the null value. + +If no value is set for a field in the new entity with the same ID, the field will result in null as well. + +## Generación de código + +Para que trabajar con contratos inteligentes, eventos y entidades sea fácil y seguro desde el punto de vista de los tipos, Graph CLI puede generar tipos AssemblyScript a partir del esquema GraphQL del subgrafo y de las ABIs de los contratos incluidas en las fuentes de datos. + +Esto se hace con + +```sh +graph codegen [--output-dir ] [] +``` + +but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: + +```sh +# Yarn +yarn codegen + +# NPM +npm run codegen +``` + +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. + +```javascript +import { + // The contract class: + Gravity, + // The events classes: + NewGravatar, + UpdatedGravatar, +} from '../generated/Gravity/Gravity' +``` + +In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with + +```javascript +import { Gravatar } from '../generated/schema' +``` + +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. + +Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/pages/es/developing/creating-a-subgraph/install-the-cli.mdx b/website/pages/es/developing/creating-a-subgraph/install-the-cli.mdx new file mode 100644 index 000000000000..b70948811960 --- /dev/null +++ b/website/pages/es/developing/creating-a-subgraph/install-the-cli.mdx @@ -0,0 +1,119 @@ +--- +title: Instalar The Graph CLI +--- + +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/network/curating/). + +## Descripción + +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/creating-a-subgraph/subgraph-manifest/) and compiles the [mappings](/creating-a-subgraph/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. + +## Empezando + +### Instalar The Graph CLI + +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +En tu dispositivo, ejecuta alguno de los siguientes comandos: + +#### Using [npm](https://www.npmjs.com/) + +```bash +npm install -g @graphprotocol/graph-cli@latest +``` + +#### Using [yarn](https://yarnpkg.com/) + +```bash +yarn global add @graphprotocol/graph-cli +``` + +The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. + +## Crear un Subgrafo + +### Desde un Contrato Existente + +The following command creates a subgraph that indexes all events of an existing contract: + +```sh +graph init \ + --product subgraph-studio + --from-contract \ + [--network ] \ + [--abi ] \ + [] +``` + +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. + +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. + +### De un Subgrafo de Ejemplo + +The following command initializes a new project from an example subgraph: + +```sh +graph init --from-example=example-subgraph +``` + +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. 
- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. + +### Add New `dataSources` to an Existing Subgraph + +`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. + +Recent versions of the Graph CLI support adding new `dataSources` to an existing subgraph through the `graph add` command: + +```sh +graph add <address>
[] + +Options: + + --abi Path to the contract ABI (default: download from Etherscan) + --contract-name Name of the contract (default: Contract) + --merge-entities Whether to merge entities with the same name (default: false) + --network-file Networks config file path (default: "./networks.json") +``` + +#### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. + +### Getting The ABIs + +Los archivos ABI deben coincidir con tu(s) contrato(s). Hay varias formas de obtener archivos ABI: + +- Si estás construyendo tu propio proyecto, es probable que tengas acceso a tus ABIs más actuales. +- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. + +## SpecVersion Releases + +| Version | Notas del lanzamiento | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/pages/es/developing/creating-a-subgraph/ql-schema.mdx b/website/pages/es/developing/creating-a-subgraph/ql-schema.mdx new file mode 100644 index 000000000000..4ecbc03b60a8 --- /dev/null +++ b/website/pages/es/developing/creating-a-subgraph/ql-schema.mdx @@ -0,0 +1,312 @@ +--- +title: The Graph QL Schema +--- + +## Descripción + +The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. 
+ +> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/querying/graphql-api/) section. + +### Defining Entities + +Before defining entities, it is important to take a step back and think about how your data is structured and linked. + +- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- It may be useful to imagine entities as "objects containing data", rather than as events or functions. +- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. +- Each type that should be an entity is required to be annotated with an `@entity` directive. +- By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity. + - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. + - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. Immutable entities are much faster to write and to query so they should be used whenever possible. + +#### Un buen ejemplo + +The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. + +```graphql +type Gravatar @entity(immutable: true) { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String + accepted: Boolean +} +``` + +#### Un mal ejemplo + +The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. + +```graphql +type GravatarAccepted @entity { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String +} + +type GravatarDeclined @entity { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String +} +``` + +#### Campos opcionales y obligatorios + +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error: + +``` +Null value resolved for non-null field 'name' +``` + +Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` id's will be faster to write and query as those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`. + +For some entity types the `id` for `Bytes!` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id) ` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. 
The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`. + +### Tipos de Scalars incorporados + +#### GraphQL admite Scalars + +The following scalars are supported in the GraphQL API: + +| Tipo | Descripción | +| --- | --- | +| `Bytes` | Byte array, representado como un string hexadecimal. Comúnmente utilizado para los hashes y direcciones de Ethereum. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | + +### Enums + +También puedes crear enums dentro de un esquema. Los enums tienen la siguiente sintaxis: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field: + +More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). + +### Relaciones entre Entidades + +Una entidad puede tener una relación con otra u otras entidades de su esquema. Estas relaciones pueden ser recorridas en sus consultas. Las relaciones en The Graph son unidireccionales. Es posible simular relaciones bidireccionales definiendo una relación unidireccional en cada "extremo" de la relación. + +Las relaciones se definen en las entidades como cualquier otro campo, salvo que el tipo especificado es el de otra entidad. + +#### Relaciones Uno a Uno + +Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: + +```graphql +type Transaction @entity(immutable: true) { + id: Bytes! + transactionReceipt: TransactionReceipt +} + +type TransactionReceipt @entity(immutable: true) { + id: Bytes! + transaction: Transaction +} +``` + +#### Relaciones one-to-many + +Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +### Búsquedas Inversas + +Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. 
For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. + +En el caso de las relaciones one-to-many, la relación debe almacenarse siempre en el lado "one", y el lado "many" debe derivarse siempre. Almacenar la relación de esta manera, en lugar de almacenar una array de entidades en el lado "many", resultará en un rendimiento dramáticamente mejor tanto para la indexación como para la consulta del subgrafo. En general, debe evitarse, en la medida de lo posible, el almacenamiento de arrays de entidades. + +#### Ejemplo + +We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! + tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +#### Relaciones de many-to-many + +Para las relaciones de many-to-many, como los usuarios pueden pertenecer a cualquier número de organizaciones, la forma más directa, pero generalmente no la más eficaz, de modelar la relación es en un array en cada una de las dos entidades implicadas. Si la relación es simétrica, sólo es necesario almacenar un lado de la relación y el otro puede derivarse. + +#### Ejemplo + +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [User!]! +} + +type User @entity { + id: Bytes! + name: String! + organizations: [Organization!]! @derivedFrom(field: "members") +} +``` + +A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [UserOrganization!]! @derivedFrom(field: "organization") +} + +type User @entity { + id: Bytes! + name: String! + organizations: [UserOrganization!] @derivedFrom(field: "user") +} + +type UserOrganization @entity { + id: Bytes! # Set to `user.id.concat(organization.id)` + user: User! + organization: Organization! +} +``` + +Este enfoque requiere que las consultas desciendan a un nivel adicional para recuperar, por ejemplo, las organizaciones para los usuarios: + +```graphql +query usersWithOrganizations { + users { + organizations { + # this is a UserOrganization entity + organization { + name + } + } + } +} +``` + +Esta forma más elaborada de almacenar las relaciones many-to-many se traducirá en menos datos almacenados para el subgrafo y, por tanto, en un subgrafo que suele ser mucho más rápido de indexar y consultar. + +### Agregar comentarios al esquema + +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: + +```graphql +type MyFirstEntity @entity { + # unique identifier and primary key of the entity + id: Bytes! + address: Bytes! +} +``` + +## Definición de campos de búsqueda de texto completo + +Las consultas de búsqueda de texto completo filtran y clasifican las entidades basándose en una entrada de búsqueda de texto. 
Las consultas de texto completo pueden devolver coincidencias de palabras similares procesando el texto de la consulta en stems antes de compararlo con los datos del texto indexado. + +La definición de una consulta de texto completo incluye el nombre de la consulta, el diccionario lingüístico utilizado para procesar los campos de texto, el algoritmo de clasificación utilizado para ordenar los resultados y los campos incluidos en la búsqueda. Cada consulta de texto completo puede abarcar varios campos, pero todos los campos incluidos deben ser de un solo tipo de entidad. + +To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. + +```graphql +type _Schema_ + @fulltext( + name: "bandSearch" + language: en + algorithm: rank + include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }] + ) + +type Band @entity { + id: Bytes! + name: String! + description: String! + bio: String + wallet: Address + labels: [Label!]! + discography: [Album!]! + members: [Musician!]! +} +``` + +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/querying/graphql-api#queries) for a description of the fulltext search API and more example usage. + +```graphql +query { + bandSearch(text: "breaks & electro & detroit") { + id + name + description + wallet + } +} +``` + +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. + +## Idiomas admitidos + +La elección de un idioma diferente tendrá un efecto definitivo, aunque a veces sutil, en la API de búsqueda de texto completo. Los campos cubiertos por un campo de consulta de texto completo se examinan en el contexto de la lengua elegida, por lo que los lexemas producidos por las consultas de análisis y búsqueda varían de un idioma a otro. Por ejemplo: al utilizar el diccionario turco compatible, "token" se convierte en "toke", mientras que el diccionario inglés lo convierte en "token". + +Diccionarios de idiomas admitidos: + +| Code | Diccionario | +| ------ | ----------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portugués | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | + +### Algoritmos de Clasificación + +Algoritmos admitidos para ordenar los resultados: + +| Algorithm | Description | +| --- | --- | +| rank | Usa la calidad de coincidencia (0-1) de la consulta de texto completo para ordenar los resultados. | +| rango de proximidad | Similar to rank but also includes the proximity of the matches. | diff --git a/website/pages/es/developing/creating-a-subgraph/starting-your-subgraph.mdx b/website/pages/es/developing/creating-a-subgraph/starting-your-subgraph.mdx new file mode 100644 index 000000000000..316da18524ef --- /dev/null +++ b/website/pages/es/developing/creating-a-subgraph/starting-your-subgraph.mdx @@ -0,0 +1,21 @@ +--- +title: Starting Your Subgraph +--- + +## Descripción + +The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
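Published subgraphs can be queried over plain HTTP with a standard GraphQL request. The sketch below is illustrative only: `<API_KEY>` and `<SUBGRAPH_ID>` are placeholders you would take from Graph Explorer, and the `_meta` query is just a minimal check that the endpoint responds.

```sh
# Minimal sketch: query a published subgraph through the gateway.
# <API_KEY> and <SUBGRAPH_ID> are placeholders from Graph Explorer, not real values.
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{ "query": "{ _meta { block { number } } }" }' \
  "https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>"
```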
+ +When you create a [subgraph](/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. + +Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. + +### Start Building + +Start the process and build a subgraph that matches your needs: + +1. [Install the CLI](/developing/creating-a-subgraph/install-the-cli/) - Set up your infrastructure +2. [Subgraph Manifest](/developing/creating-a-subgraph/subgraph-manifest/) - Understand a subgraph's key component +3. [The Graph Ql Schema](/developing/creating-a-subgraph/ql-schema/) - Write your schema +4. [Writing AssemblyScript Mappings](/developing/creating-a-subgraph/assemblyscript-mappings/) - Write your mappings +5. [Advanced Features](/developing/creating-a-subgraph/advanced/) - Customize your subgraph with advanced features diff --git a/website/pages/es/developing/creating-a-subgraph/subgraph-manifest.mdx b/website/pages/es/developing/creating-a-subgraph/subgraph-manifest.mdx new file mode 100644 index 000000000000..d8d4f07fd43a --- /dev/null +++ b/website/pages/es/developing/creating-a-subgraph/subgraph-manifest.mdx @@ -0,0 +1,534 @@ +--- +title: Subgraph Manifest +--- + +## Descripción + +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. + +The **subgraph definition** consists of the following files: + +- `subgraph.yaml`: Contains the subgraph manifest + +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL + +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +### Subgraph Capabilities + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
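As a rough illustration of the multi-contract point above, each contract that needs indexing gets its own entry in the `dataSources` array. The snippet below is only a sketch with hypothetical contract names and zeroed-out addresses; the complete annotated manifest for the example subgraph follows below.

```yaml
# Hypothetical sketch: one dataSources entry per indexed contract.
dataSources:
  - kind: ethereum/contract
    name: ContractA
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000001'
      abi: ContractA
    mapping:
      # ... entities, abis, and event handlers for ContractA ...
  - kind: ethereum/contract
    name: ContractB
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000002'
      abi: ContractB
    mapping:
      # ... entities, abis, and event handlers for ContractB ...
```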
+ +For the example subgraph listed above, `subgraph.yaml` is: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +repository: https://github.com/graphprotocol/graph-tooling +schema: + file: ./schema.graphql +indexerHints: + prune: auto +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + abi: Gravity + startBlock: 6175244 + endBlock: 7175245 + context: + foo: + type: Bool + data: true + bar: + type: String + data: 'bar' + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + - event: UpdatedGravatar(uint256,address,string,string) + handler: handleUpdatedGravatar + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCall + filter: + kind: call + file: ./src/mapping.ts +``` + +## Subgraph Entries + +> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/developing/creating-a-subgraph/ql-schema/). + +Las entradas importantes a actualizar para el manifiesto son: + +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. + +- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. + +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. + +- `features`: a list of all used [feature](#experimental-features) names. + +- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. + +- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. + +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. + +- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. + +- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. + +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. + +- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. 
+ +- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. + +- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. + +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. + +A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. + +## Event Handlers + +Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. + +### Defining an Event Handler + +An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: Approval(address,address,uint256) + handler: handleApproval + - event: Transfer(address,address,uint256) + handler: handleTransfer + topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic. +``` + +## Call Handlers + +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. + +Los call handlers solo se activarán en uno de estos dos casos: cuando la función especificada sea llamada por una cuenta distinta del propio contrato o cuando esté marcada como externa en Solidity y sea llamada como parte de otra función en el mismo contrato. + +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. 
These are far more performant than call handlers, and are supported on every evm network. + +### Definición de un Call Handler + +To define a call handler in your manifest, simply add a `callHandlers` array under the data source you would like to subscribe to. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar +``` + +The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. + +### Función mapeo + +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: + +```typescript +import { CreateGravatarCall } from '../generated/Gravity/Gravity' +import { Transaction } from '../generated/schema' + +export function handleCreateGravatar(call: CreateGravatarCall): void { + let id = call.transaction.hash + let transaction = new Transaction(id) + transaction.displayName = call.inputs._displayName + transaction.imageUrl = call.inputs._imageUrl + transaction.save() +} +``` + +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. + +## Handlers de bloques + +Además de suscribirse a eventos del contracto o calls de funciones, un subgrafo puede querer actualizar sus datos a medida que se añaden nuevos bloques a la cadena. Para ello, un subgrafo puede ejecutar una función después de cada bloque o después de los bloques que coincidan con un filtro predefinido. + +### Filtros admitidos + +#### Call Filter + +```yaml +filter: + kind: call +``` + +_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ + +> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. + +La ausencia de un filtro para un handler de bloque asegurará que el handler sea llamado en cada bloque. Una fuente de datos solo puede contener un handler de bloque para cada tipo de filtro. 
+ +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCallToContract + filter: + kind: call +``` + +#### Polling Filter + +> **Requires `specVersion` >= 0.0.8** +> +> **Note:** Polling filters are only available on dataSources of `kind: ethereum`. + +```yaml +blockHandlers: + - handler: handleBlock + filter: + kind: polling + every: 10 +``` + +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. + +#### Once Filter + +> **Requires `specVersion` >= 0.0.8** +> +> **Note:** Once filters are only available on dataSources of `kind: ethereum`. + +```yaml +blockHandlers: + - handler: handleOnce + filter: + kind: once +``` + +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. + +```ts +export function handleOnce(block: ethereum.Block): void { + let data = new InitialData(Bytes.fromUTF8('initial')) + data.data = 'Setup data here' + data.save() +} +``` + +### Función mapeo + +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. + +```typescript +import { ethereum } from '@graphprotocol/graph-ts' + +export function handleBlock(block: ethereum.Block): void { + let id = block.hash + let entity = new Block(id) + entity.save() +} +``` + +## Eventos anónimos + +Si necesitas procesar eventos anónimos en Solidity, puedes hacerlo proporcionando el tema 0 del evento, como en el ejemplo: + +```yaml +eventHandlers: + - event: LogNote(bytes4,address,bytes32,bytes32,uint256,bytes) + topic0: '0x644843f351d3fba4abcd60109eaff9f54bac8fb8ccf0bab941009c21df21cf31' + handler: handleGive +``` + +An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. + +## Recepción de transacciones en Event Handlers + +Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. + +To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. + +```yaml +eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + receipt: true +``` + +Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead. + +## Order of Triggering Handlers + +Las triggers de una fuente de datos dentro de un bloque se ordenan mediante el siguiente proceso: + +1. Las triggers de eventos y calls se ordenan primero por el índice de la transacción dentro del bloque. +2. 
Los triggers de eventos y calls dentro de la misma transacción se ordenan siguiendo una convención: primero los triggers de eventos y luego los de calls, respetando cada tipo el orden en que se definen en el manifiesto. +3. Las triggers de bloques se ejecutan después de las triggers de eventos y calls, en el orden en que están definidos en el manifiesto. + +Estas normas de orden están sujetas a cambios. + +> **Note:** When new [dynamic data source](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. + +## Plantillas para fuentes de datos + +Un patrón común en los contratos inteligentes de Ethereum es el uso de contratos de registro o fábrica, donde un contrato crea, gestiona o hace referencia a un número arbitrario de otros contratos que tienen cada uno su propio estado y eventos. + +The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. + +### Fuente de Datos para el Contrato Principal + +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.org) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on-chain by the factory contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: Factory + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - Directory + abis: + - name: Factory + file: ./abis/factory.json + eventHandlers: + - event: NewExchange(address,address) + handler: handleNewExchange +``` + +### Plantillas de fuentes de datos para contratos creados dinámicamente + +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a pre-defined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + # ... other source fields for the main contract ... +templates: + - name: Exchange + kind: ethereum/contract + network: mainnet + source: + abi: Exchange + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/exchange.ts + entities: + - Exchange + abis: + - name: Exchange + file: ./abis/exchange.json + eventHandlers: + - event: TokenPurchase(address,uint256,uint256) + handler: handleTokenPurchase + - event: EthPurchase(address,uint256,uint256) + handler: handleEthPurchase + - event: AddLiquidity(address,uint256,uint256) + handler: handleAddLiquidity + - event: RemoveLiquidity(address,uint256,uint256) + handler: handleRemoveLiquidity +``` + +### Instanciación de una plantilla de fuente de datos + +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. 
In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + // Start indexing the exchange; `event.params.exchange` is the + // address of the new exchange contract + Exchange.create(event.params.exchange) +} +``` + +> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> +> Si los bloques anteriores contienen datos relevantes para la nueva fuente de datos, lo mejor es indexar esos datos leyendo el estado actual del contrato y creando entidades que representen ese estado en el momento de crear la nueva fuente de datos. + +### Contexto de la fuente de datos + +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + let context = new DataSourceContext() + context.setString('tradingPair', event.params.tradingPair) + Exchange.createWithContext(event.params.exchange, context) +} +``` + +Inside a mapping of the `Exchange` template, the context can then be accessed: + +```typescript +import { dataSource } from '@graphprotocol/graph-ts' + +let context = dataSource.context() +let tradingPair = context.getString('tradingPair') +``` + +There are setters and getters like `setString` and `getString` for all value types. + +## Bloques iniciales + +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. + +```yaml +dataSources: + - kind: ethereum/contract + name: ExampleSource + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: ExampleContract + startBlock: 6627917 + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - User + abis: + - name: ExampleContract + file: ./abis/ExampleContract.json + eventHandlers: + - event: NewEvent(address,address) + handler: handleNewEvent +``` + +> **Note:** The contract creation block can be quickly looked up on Etherscan: +> +> 1. Busca el contrato introduciendo su dirección en la barra de búsqueda. +> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 3. Carga la página de detalles de la transacción, donde encontrarás el bloque inicial de ese contrato. + +## Indexer Hints + +The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. 
+ +> This feature is available from `specVersion: 1.0.0` + +### Prune + +`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: + +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. + +``` + indexerHints: + prune: auto +``` + +> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. + +History as of a given block is required for: + +- [Time travel queries](/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history +- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block +- Rewinding the subgraph back to that block + +If historical data as of the block has been pruned, the above capabilities will not be available. + +> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. + +For subgraphs leveraging [time travel queries](/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: + +To retain a specific amount of historical data: + +``` + indexerHints: + prune: 1000 # Replace 1000 with the desired number of blocks to retain +``` + +To preserve the complete history of entity states: + +``` +indexerHints: + prune: never +``` diff --git a/website/pages/es/developing/developer-faqs.mdx b/website/pages/es/developing/developer-faqs.mdx index 55357e42a4ef..3cf0a885e79c 100644 --- a/website/pages/es/developing/developer-faqs.mdx +++ b/website/pages/es/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Preguntas Frecuentes de los Desarrolladores --- -## 1. ¿Qué es un subgrafo? +This page summarizes some of the most common questions for developers building on The Graph. -Un subgrafo es una API personalizada construida sobre datos de blockchain. Los subgrafos se consultan mediante el lenguaje de consulta GraphQL y son deployados en un Graph Node usando Graph CLI. Una vez deployados y publicados en la red descentralizada de The Graph, los indexadores procesan los subgrafos y los ponen a disposición de los consumidores de subgrafos para que los consulten. +## Subgraph Related -## 2. ¿Puedo eliminar mi subgrafo? +### 1. ¿Qué es un subgrafo? -No es posible eliminar los subgrafos una vez creados. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. ¿Puedo cambiar el nombre de mi subgrafo? +### 2. What is the first step to create a subgraph? -No. Una vez que se crea un subgrafo, no se puede cambiar el nombre. Asegúrate de pensar en esto cuidadosamente antes de crear tu subgrafo para que sea fácil de buscar e identificar por otras dApps. +To successfully create a subgraph, you will need to install The Graph CLI. 
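As a minimal sketch (assuming a Node.js environment with npm or yarn available), the CLI is typically installed globally:

```sh
# Install The Graph CLI globally with npm
npm install -g @graphprotocol/graph-cli

# or with yarn
yarn global add @graphprotocol/graph-cli
```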
Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. ¿Puedo cambiar la cuenta de GitHub asociada con mi subgrafo? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. Una vez que se crea un subgrafo, la cuenta de GitHub asociada no puede ser modificada. Asegúrate de pensarlo bien antes de crear tu subgrafo. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. ¿Todavía puedo crear un subgrafo si mis contratos inteligentes no tienen eventos? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -Es muy recomendable que estructures tus contratos inteligentes para tener eventos asociados a los datos que te interesa consultar. Los handlers de eventos en el subgrafo son activados por los eventos del contrato y son, con mucho, la forma más rápida de recuperar datos útiles. +### 4. ¿Puedo cambiar la cuenta de GitHub asociada con mi subgrafo? -Si los contratos con los que estás trabajando no contienen eventos, tu subgrafo puede utilizar handlers de llamadas y bloques para activar la indexación. Aunque esto no se recomienda, ya que el rendimiento será significativamente más lento. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. ¿Es posible deployar un subgrafo con el mismo nombre para varias redes? +### 5. How do I update a subgraph on mainnet? -Necesitarás nombres separados para varias redes. Si bien no puedes tener diferentes subgrafos con el mismo nombre, existen formas convenientes de tener una base de código única para varias redes. Encuentra más sobre esto en nuestra documentación: [Redeploying a Subgraph](/implementación/implementación-de-un-subgráfico-a-alojamiento#reimplementación-de-un-subgráfico) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. ¿En qué se diferencian las plantillas de las fuentes de datos? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Las plantillas te permiten crear fuentes de datos sobre la marcha, mientras tu subgrafo está indexando. Puede darse el caso de que tu contrato genere nuevos contratos a medida que la gente interactúe con él, y dado que conoces la forma de esos contratos (ABI, eventos, etc.) por adelantado puedes definir cómo quieres indexarlos en una plantilla y cuando se generen tu subgrafo creará una fuente de datos dinámica proporcionando la dirección del contrato. +Tienes que volver a realizar el deploy del subgrafo, pero si el ID del subgrafo (hash IPFS) no cambia, no tendrá que sincronizarse desde el principio. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. 
Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Dentro de un subgrafo, los eventos se procesan siempre en el orden en que aparecen en los bloques, independientemente de que sea a través de múltiples contratos o no. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Consulta la sección "Instantiating a data source template" en: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. ¿Cómo puedo asegurarme de que estoy utilizando la última versión de graph-node para mis deploys locales? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Puedes ejecutar el siguiente comando: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTA:** docker/docker-compose siempre utilizará la versión de graph-node que se sacó la primera vez que se ejecutó, por lo que es importante hacer esto para asegurarse de que estás al día con la última versión de graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. ¿Cómo llamo a una función de contrato o accedo a una variable de estado pública desde mis mapeos de subgrafos? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. ¿Es posible configurar un subgrafo usando `graph init` de `graph-cli` con dos contratos? ¿O debo agregar manualmente otra fuente de datos en `subgraph.yaml` después de ejecutar `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +Puedes ejecutar el siguiente comando: -## 11. Quiero contribuir o agregar un problema de GitHub. 
¿Dónde puedo encontrar los repositorios de código abierto? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. ¿Cuál es la forma recomendada para crear ids "autogeneradas" para una entidad al manejar eventos? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? Si sólo se crea una entidad durante el evento y si no hay nada mejor disponible, entonces el hash de la transacción + el índice del registro serían únicos. Puedes ofuscar esto convirtiendo eso en Bytes y luego pasándolo por `crypto.keccak256` pero esto no lo hará más único. -## Cuando se escuchan varios contratos, ¿es posible seleccionar el orden de los contratos para escuchar los eventos? +### 15. Can I delete my subgraph? -Dentro de un subgrafo, los eventos se procesan siempre en el orden en que aparecen en los bloques, independientemente de que sea a través de múltiples contratos o no. +Yes, you can [delete](/managing/delete-a-subgraph/) and [transfer](/managing/transfer-a-subgraph/) your subgraph. -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +Puedes encontrar la lista de redes admitidas [aquí](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Sí. Puedes hacerlo importando `graph-ts` como en el ejemplo siguiente: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. ¿Puedo importar ethers.js u otras bibliotecas JS en mis mappings de subgrafos? - -Actualmente no, ya que los mapeos se escriben en AssemblyScript. Una posible solución alternativa a esto es almacenar los datos en bruto en entidades y realizar la lógica que requiere las bibliotecas JS en el cliente. +## Indexing & Querying Related -## 17. ¿Es posible especificar en qué bloque comenzar a indexar? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. ¿Hay algunos consejos para aumentar el rendimiento de la indexación? Mi subgrafo está tardando mucho en sincronizarse +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -Sí, debes echar un vistazo a la función de bloque de inicio opcional para comenzar a indexar desde el bloque en el que se implementó el contrato: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. ¿Hay alguna forma de consultar el subgrafo directamente para determinar el último número de bloque que ha indexado? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? ¡Sí es posible! Prueba el siguiente comando, sustituyendo "organization/subgraphName" por la organización bajo la que se publica y el nombre de tu subgrafo: @@ -102,44 +121,27 @@ Sí, debes echar un vistazo a la función de bloque de inicio opcional para come curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. ¿Qué redes son compatibles con The Graph? - -Puedes encontrar la lista de redes admitidas [aquí](/developing/supported-networks). - -## 21. ¿Es posible duplicar un subgrafo en otra cuenta o endpoint sin volver a realizar el deploy? - -Tienes que volver a realizar el deploy del subgrafo, pero si el ID del subgrafo (hash IPFS) no cambia, no tendrá que sincronizarse desde el principio. - -## 22. ¿Es posible usar Apollo Federation encima de graph-node? +### 22. Is there a limit to how many objects The Graph can return per query? -Federation aún no es compatible, aunque queremos apoyarla en el futuro. Por el momento, algo que se puede hacer es utilizar el stitching de esquemas, ya sea en el cliente o a través de un servicio proxy. - -## 23. ¿Existe un límite en el número de objetos que The Graph puede devolver por consulta? - -Por defecto, las respuestas a las consultas están limitadas a 100 elementos por colección. Si quieres recibir más, puedes llegar hasta 1000 elementos por colección y más allá, puedes paginar con: +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. Si mi interfaz de dapp usa The Graph para realizar consultas, ¿debo escribir mi clave de consulta directamente en la interfaz? ¿Qué pasa si pagamos tarifas de consulta para los usuarios? ¿Los usuarios malintencionados harán que nuestras tarifas de consulta sean muy altas? - -Actualmente, el enfoque recomendado para una dapp es añadir la clave al frontend y exponerla a los usuarios finales. Dicho esto, puedes limitar esa clave a un nombre de host, como _yourdapp.io_ y subgrafo. La gateway se ejecuta actualmente por Edge & Node. Parte de la responsabilidad de un gateway es monitorear el comportamiento abusivo y bloquear el tráfico de clientes maliciosos. - -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? 
What if we pay query fees for users – will malicious users cause our query fees to be very high? -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/es/developing/graph-ts/api.mdx b/website/pages/es/developing/graph-ts/api.mdx index b2309f29cc83..5399f3838722 100644 --- a/website/pages/es/developing/graph-ts/api.mdx +++ b/website/pages/es/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -Esta página documenta qué API integradas se pueden usar al escribir mappings de subgrafos. Hay dos tipos de API disponibles listas para usar: +Learn what built-in APIs can be used when writing subgraph mappings. 
There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## Referencias de API @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Cada entidad debe tener un identificador único para evitar colisiones con otras entidades. Es bastante común que los parámetros de los eventos incluyan un identificador único que pueda ser utilizado. Nota: El uso del hash de la transacción como ID asume que ningún otro evento en la misma transacción crea entidades con este hash como ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Carga de entidades desde el store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Buscando entidades creadas dentro de un bloque As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -La API de almacenamiento facilita la recuperación de entidades que se crearon o actualizaron en el bloque actual. 
Una situación típica para esto es cuando un handler crea una Transacción a partir de algún evento en la cadena, y un handler posterior quiere acceder a esta transacción si existe. En el caso de que la transacción no exista, el subgrafo tendrá que ir a la base de datos solo para averiguar que la entidad no existe; si el autor del subgrafo ya sabe que la entidad debe haber sido creada en el mismo bloque, el uso de loadInBlock evita este viaje de ida y vuelta a la base de datos. Para algunos subgrafos, estas búsquedas perdidas pueden contribuir significativamente al tiempo de indexación. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Cualquier otro contrato que forme parte del subgrafo puede ser importado desde e #### Tratamiento de las Llamadas Revertidas -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Ten en cuenta que un nodo Graph conectado a un cliente Geth o Infura puede no detectar todas las reversiones, si confías en esto te recomendamos que utilices un nodo Graph conectado a un cliente Parity. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Codificación/Descodificación ABI diff --git a/website/pages/es/developing/supported-networks.mdx b/website/pages/es/developing/supported-networks.mdx index dc663b17b3f1..86c379d637f5 100644 --- a/website/pages/es/developing/supported-networks.mdx +++ b/website/pages/es/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. 
-- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/es/developing/unit-testing-framework.mdx b/website/pages/es/developing/unit-testing-framework.mdx index 6522e0531914..63f6babad3d9 100644 --- a/website/pages/es/developing/unit-testing-framework.mdx +++ b/website/pages/es/developing/unit-testing-framework.mdx @@ -2,23 +2,32 @@ title: Marco de Unit Testing --- -¡Matchstick es un marco de unit testing, desarrollado por [LimeChain](https://limechain.tech/), que permite a los developers de subgrafos probar su lógica de mapeo en un entorno sandbox y deployar sus subgrafos con confianza! +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and sucessfully deploy their subgraphs. + +## Benefits of Using Matchstick + +- It's written in Rust and optimized for high performance. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. ## Empezando -### Instalar dependencias +### Install Dependencies -Para utilizar los métodos auxiliares de prueba y ejecutar las pruebas, deberás instalar las siguientes dependencias: +In order to use the test helper methods and run tests, you need to install the following dependencies: ```sh yarn add --dev matchstick-as ``` -❗ `graph-node` depende de PostgreSQL, por lo que si aún no lo tienes, deberás instalarlo. ¡Recomendamos ampliamente usar los comandos a continuación, ya que agregarlos de otra manera puede causar errores inesperados! +### Install PostgreSQL + +`graph-node` depends on PostgreSQL, so if you don't already have it, then you will need to install it. + +> Note: It's highly recommended to use the commands below to avoid unexpected errors. -#### MacOS +#### Using MacOS -Comando de instalación de Postgres: +Installation command: ```sh brew install postgresql @@ -30,15 +39,15 @@ Crea un symlynk al último libpq.5.lib _Es posible que primero debas crear este ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib ``` -#### Linux +#### Using Linux -Comando de instalación de Postgres (depende de tu distribución): +Installation command (depends on your distro): ```sh sudo apt install postgresql ``` -### WSL (Subsistema de Windows para Linux) +### Using WSL (Windows Subsystem for Linux) Puedes usar Matchstick en WSL tanto con el enfoque de Docker como con el enfoque binario. 
Ya que WSL puede ser un poco complicado, aquí hay algunos consejos en caso de que encuentres problemas como @@ -76,7 +85,7 @@ Y finalmente, no uses `graph test` (que usa tu instalación global de graph-cli } ``` -### Uso +### Using Matchstick Para usar **Matchstick** en tu proyecto de subgrafo simplemente abre una terminal, navega a la carpeta raíz de tu proyecto y simplemente ejecuta `graph test [options] `: descarga el binario **Matchstick** más reciente y ejecuta la prueba especificada o todas las pruebas en una carpeta de prueba (o todas las pruebas existentes si no se especifica un indicador de fuente de datos). @@ -1368,7 +1377,7 @@ La salida del log incluye la duración de la ejecución de la prueba. Aquí hay > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -Esto significa que has utilizado `console.log` en tu código, que no es compatible con AssemblyScript. Considera usar la [API de registro](/developing/graph-ts/api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. > @@ -1384,6 +1393,10 @@ Esto significa que has utilizado `console.log` en tu código, que no es compatib La falta de coincidencia en los argumentos se debe a la falta de coincidencia en `graph-ts` y `matchstick-as`. La mejor manera de solucionar problemas como este es actualizar todo a la última versión publicada. +## Recursos Adicionales + +For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). + ## Comentario Si tiene preguntas, comentarios, solicitudes de funciones o simplemente deseas comunicarte, el mejor lugar sería The Graph Discord, donde tenemos un canal dedicado para Matchstick, llamado 🔥| unit-testing. diff --git a/website/pages/es/glossary.mdx b/website/pages/es/glossary.mdx index adabc2b4b467..b2b41524b887 100644 --- a/website/pages/es/glossary.mdx +++ b/website/pages/es/glossary.mdx @@ -10,11 +10,9 @@ title: Glosario - **Endpoint**: Una URL que se puede utilizar para consultar un subgrafo. El endpoint de prueba para Subgraph Studio es `https://api.studio.thegraph.com/query///` y el endpoint de Graph Explorer es `https://gateway.thegraph.com/api//subgraphs/id/`. El endpoint de Graph Explorer se utiliza para consultar subgrafos en la red descentralizada de The Graph. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexadores (Indexers)**: Participantes de la red que ejecutan nodos de indexación para indexar datos de la blockchain y servir consultas GraphQL. 
+- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Flujos de ingresos de los indexadores (Indexer Revenue Streams)**: Los Indexadores son recompensados en GRT con dos componentes: reembolsos de tarifas de consulta y recompensas de indexación. @@ -22,19 +20,19 @@ title: Glosario 2. **Recompensas de Indexación (Indexing Rewards)**: Las recompensas que reciben los Indexadores por indexar subgrafos. Las recompensas de indexación se generan mediante una nueva emisión anual del 3% de GRT. -- **Stake propio del Indexador (Indexer's Self Stake)**: La cantidad de GRT que los Indexadores depositan en stake para participar en la red descentralizada. El mínimo es de 100.000 GRT, y no hay límite superior. +- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegadores (Delegators)**: Participantes de la red que poseen GRT y delegan su GRT en Indexadores. Esto permite a los Indexadores aumentar su stake en los subgrafos de la red. A cambio, los Delegadores reciben una parte de las recompensas de indexación que reciben los Indexadores por procesar los subgrafos. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Impuesto a la Delegación (Delegation Tax)**: Una tasa del 0,5% que pagan los Delegadores cuando delegan GRT en los Indexadores. El GRT utilizado para pagar la tasa se quema. -- **Curadores (Curators)**: Participantes de la red que identifican subgrafos de alta calidad y los "curan" (es decir, señalan GRT sobre ellos) a cambio de cuotas de curación. Cuando los Indexadores reclaman tarifas de consulta sobre un subgrafo, el 10% se distribuye entre los Curadores de ese subgrafo. Los Indexadores obtienen recompensas de indexación proporcionales a la señal en un subgrafo. Vemos una correlación entre la cantidad de GRT señalada y el número de Indexadores que indexan un subgrafo. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Impuesto a la Curación (Curation Tax)**: Una tasa del 1% pagada por los Curadores cuando señalan GRT en los subgrafos. El GRT utilizado para pagar la tasa se quema. -- **Consumidor de Subgrafos (Subgraph Consumer)**: Cualquier aplicación o usuario que consulte un subgrafo. +- **Data Consumer**: Any application or user that queries a subgraph. 
- **Developer de subgrafos (Subgraph developer)**: Developer que construye y realiza el deploy de un subgrafo en la red descentralizada de The Graph. @@ -46,15 +44,15 @@ title: Glosario 1. **Activa (Active)**: Una allocation se considera activa cuando se crea on-chain. Esto se llama abrir una allocation, e indica a la red que el Indexador está indexando activamente y sirviendo consultas para un subgrafo en particular. Las allocations activas acumulan recompensas de indexación proporcionales a la señal del subgrafo y a la cantidad de GRT asignada. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio**: Una potente aplicación para crear, deployar y publicar subgrafos. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. 
The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. +- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. - **Recompensas de Indexación (Indexing Rewards)**: Las recompensas que reciben los Indexadores por indexar subgrafos. Las recompensas de indexación se distribuyen en GRT. @@ -62,11 +60,11 @@ title: Glosario - **GRT**: El token de utilidad de trabajo de The Graph. GRT ofrece incentivos económicos a los participantes en la red por contribuir a ella. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node es el componente que indexa los subgrafos, y hace que los datos resultantes estén disponibles para su consulta a través de una API GraphQL. Como tal, es fundamental para el stack del Indexador, y el correcto funcionamiento de Graph Node es crucial para ejecutar un Indexador con éxito. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Agente indexador (Indexer Agent)**: El agente del Indexador forma parte del stack del Indexador. Facilita las interacciones on-chain del Indexador, incluido el registro en la red, la gestión de deploys de subgrafos en su(s) Graph Node y la gestión de allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **Cliente The Graph (The Graph Client)**: Una biblioteca para construir dapps basadas en GraphQL de forma descentralizada. @@ -76,12 +74,8 @@ title: Glosario - **Período de enfriamiento (Cooldown Period)**: El tiempo restante hasta que un Indexador que cambió sus parámetros de delegación pueda volver a hacerlo. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. - -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. 
Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/es/index.json b/website/pages/es/index.json index 7abab377ea71..0fd4bc82e691 100644 --- a/website/pages/es/index.json +++ b/website/pages/es/index.json @@ -56,10 +56,6 @@ "graphExplorer": { "title": "Graph Explorer", "description": "Explora los distintos subgrafos e interactua con el protocolo" - }, - "hostedService": { - "title": "Servicio Alojado", - "description": "Create and explore subgraphs on the hosted service" } } }, diff --git a/website/pages/es/managing/delete-a-subgraph.mdx b/website/pages/es/managing/delete-a-subgraph.mdx index 68ef0a37da75..0bd18777b42f 100644 --- a/website/pages/es/managing/delete-a-subgraph.mdx +++ b/website/pages/es/managing/delete-a-subgraph.mdx @@ -9,7 +9,9 @@ Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). ## Step-by-Step 1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). + 2. Click on the three-dots to the right of the "publish" button. + 3. Click on the option to "delete this subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) @@ -24,6 +26,6 @@ Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). ### Important Reminders - Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. +- Los Curadores ya no podrán señalar en el subgrafo. - Curators that already signaled on the subgraph can withdraw their signal at an average share price. - Deleted subgraphs will show an error message. diff --git a/website/pages/es/managing/transfer-a-subgraph.mdx b/website/pages/es/managing/transfer-a-subgraph.mdx index c4060284d5d9..19999c39b1e3 100644 --- a/website/pages/es/managing/transfer-a-subgraph.mdx +++ b/website/pages/es/managing/transfer-a-subgraph.mdx @@ -1,19 +1,17 @@ --- -title: Transfer and Deprecate a Subgraph +title: Transfer a Subgraph --- -## Transferring ownership of a subgraph - Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. -**Please note the following:** +## Reminders - Whoever owns the NFT controls the subgraph. - If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. - You can easily move control of a subgraph to a multi-sig. - A community member can create a subgraph on behalf of a DAO. 
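Because the subgraph is represented by a standard ERC721 NFT, its current owner can also be checked programmatically. The sketch below is only an illustration and assumes ethers v6; the contract address, token ID, and RPC URL are placeholders rather than real values:

```typescript
import { ethers } from "ethers";

// Minimal ERC721 fragment — `ownerOf` is part of the standard ERC721 interface.
const erc721Abi = ["function ownerOf(uint256 tokenId) view returns (address)"];

// Placeholder values: substitute the actual subgraph NFT contract address
// and your subgraph's token ID.
const SUBGRAPH_NFT_ADDRESS = "0x0000000000000000000000000000000000000000";
const SUBGRAPH_TOKEN_ID = 1n;

async function checkOwner(): Promise<void> {
  // Any JSON-RPC endpoint for the network the NFT lives on (placeholder URL).
  const provider = new ethers.JsonRpcProvider("https://example-rpc-endpoint.invalid");
  const nft = new ethers.Contract(SUBGRAPH_NFT_ADDRESS, erc721Abi, provider);

  const owner: string = await nft.ownerOf(SUBGRAPH_TOKEN_ID);
  console.log(`This subgraph NFT is currently owned by ${owner}`);
}

checkOwner();
```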
-### View your subgraph as an NFT +## View Your Subgraph as an NFT To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: @@ -27,39 +25,18 @@ Or a wallet explorer like **Rainbow.me**: https://rainbow.me/your-wallet-addres ``` -### Step-by-Step +## Step-by-Step To transfer ownership of a subgraph, do the following: -1. Use the UI built into Subgraph Studio: +1. Use the UI built into Subgraph Studio: - ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. Choose the address that you would like to transfer the subgraph to: - ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: ![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) - -## Deprecating a subgraph - -Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. - -### Step-by-Step - -To deprecate your subgraph, do the following: - -1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). -2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. -3. Your subgraph will no longer appear in searches on Graph Explorer. - -**Please note the following:** - -- The owner's wallet should call the `deprecateSubgraph` function. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deprecated subgraphs will show an error message. - -> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/es/network/benefits.mdx b/website/pages/es/network/benefits.mdx index f3740f62ffea..1b8a5510f076 100644 --- a/website/pages/es/network/benefits.mdx +++ b/website/pages/es/network/benefits.mdx @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy En conclusión: The Graph Network es menos costoso, más fácil de usar y produce resultados superiores en comparación con ejecutar un `graph-node` localmente. -Start using The Graph Network today, and learn how to [upgrade your subgraph to The Graph's decentralized network](/cookbook/upgrading-a-subgraph). +Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/quick-start). diff --git a/website/pages/es/network/curating.mdx b/website/pages/es/network/curating.mdx index e8cdc12ea206..a11299ddd57e 100644 --- a/website/pages/es/network/curating.mdx +++ b/website/pages/es/network/curating.mdx @@ -8,9 +8,7 @@ Curators are critical to The Graph's decentralized economy. They use their knowl Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. 
-Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. - -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +16,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -30,11 +28,11 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un Un curador puede optar por señalar una versión especifica de un subgrafo, o puede optar por que su señal migre automáticamente a la versión de producción mas reciente de ese subgrafo. Ambas son estrategias válidas y tienen sus pros y sus contras. -Señalar una versión específica es especialmente útil cuando un subgrafo es utilizado por múltiples dApps. Una dApp puede necesitar actualizar regularmente el subgrafo con nuevas características. Otra dApp puede preferir utilizar una versión del subgrafo más antigua y probada. Tras la curación inicial, se incurre en un impuesto estándar del 1%. +Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. 
Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. Hacer que tu señal migre automáticamente a la más reciente compilación de producción puede ser valioso para asegurarse de seguir acumulando tarifas de consulta. Cada vez que curas, se incurre en un impuesto de curación del 1%. También pagarás un impuesto de curación del 0,5% en cada migración. Se desaconseja a los desarrolladores de Subgrafos que publiquen con frecuencia nuevas versiones - tienen que pagar un impuesto de curación del 0,5% en todas las acciones de curación auto-migradas. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,8 +47,8 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Riesgos 1. El mercado de consultas es inherentemente joven en The Graph y existe el riesgo de que su APY (Rentabilidad anualizada) sea más bajo de lo esperado debido a la dinámica del mercado que recién está empezando. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Un subgrafo puede fallar debido a un error. Un subgrafo fallido no acumula tarifas de consulta. Como resultado, tendrás que esperar hasta que el desarrollador corrija el error e implemente una nueva versión. - Si estás suscrito a la versión más reciente de un subgrafo, tus acciones se migrarán automáticamente a esa nueva versión. Esto incurrirá un impuesto de curación del 0.5%. - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. 
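To make the tax arithmetic above concrete, here is a rough sketch using hypothetical amounts. It only models the 1% curation tax and the 0.5% auto-migration tax described above; it does not model share pricing:

```typescript
// Rough sketch of the curation taxes described above, with hypothetical amounts.
const CURATION_TAX = 0.01; // 1% burned when signaling on a subgraph
const AUTO_MIGRATE_TAX = 0.005; // 0.5% burned each time signal auto-migrates to a new version

function signal(grtAmount: number): number {
  const burned = grtAmount * CURATION_TAX;
  // The remainder is what actually goes toward minting curation shares.
  return grtAmount - burned;
}

function autoMigrate(signaledGrt: number): number {
  // Each new version published by the developer costs auto-migrated curators 0.5%.
  return signaledGrt * (1 - AUTO_MIGRATE_TAX);
}

let position = signal(10_000); // 10,000 GRT signaled -> 100 GRT burned -> 9,900 GRT
position = autoMigrate(position); // one version upgrade -> ~49.5 GRT more burned

console.log(position.toFixed(2)); // ≈ 9850.50 GRT still signaled (ignoring share price moves)
```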
@@ -63,9 +61,9 @@ By signalling on a subgraph, you will earn a share of all the query fees that th ### 2. ¿Cómo decido qué subgrafos son de alta calidad para señalar? -Encontrar subgrafos de alta calidad es una tarea compleja, pero se puede abordar de muchas formas diferentes. Como Curador, quieres buscar subgrafos confiables que impulsen el volumen de consultas. Un subgrafo confiable puede ser valioso si es completo, preciso y respalda las necesidades de dicha dApp. Es posible que un subgrafo con una arquitectura deficiente deba revisarse o volver a publicarse, y también puede terminar fallando. Es fundamental que los Curadores revisen la arquitectura o el código de un subgrafo para evaluar si un subgrafo es valioso. Como resultado: +Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: -- Los curadores pueden usar su conocimiento de una red para intentar predecir cómo un subgrafo puede generar un volumen de consultas mayor o menor a largo plazo +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. What’s the cost of updating a subgraph? @@ -78,50 +76,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. ¿Puedo vender mis acciones de curación? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. 
- -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Precio por acciones](/img/price-per-share.png) - -Como resultado, el precio aumenta linealmente, lo que significa que con el tiempo resultará más caro comprar una participación. A continuación, se muestra un ejemplo de lo que queremos decir; consulta la bonding curve a continuación: - -![Bonding curve](/img/bonding-curve.png) - -Imagina que tenemos dos curadores que acuñan acciones para un subgrafo: - -- El Curador A es el primero en señalar en el subgrafo. Al agregar 120.000 GRT en la curva, puede acuñar 2000 participaciones. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Dado que ambos curadores poseen la mitad participativa de dicha curación, recibirían una cantidad igual en las recompensas por ser curador. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- El curador restante recibiría todas las recompensas en ese subgrafo. Si quemaran sus participaciones a fin de retirar sus GRT, recibirían 120.000 GRT. -- **TLDR (en resumen):** La valoración de GRT de las acciones de curación viene determinada por la bonding curva y puede ser volátil. Existe la posibilidad de incurrir grandes pérdidas. Señalar temprano significa que pones menos GRT por cada acción. Por extensión, esto significa que se ganan más derechos de curador por GRT que los curadores posteriores por el mismo subgrafo. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. 
In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -En el caso de The Graph, se aprovecha [la implementación de una fórmula por parte de Bancor para la bonding curve](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA). - ¿Sigues confundido? Te invitamos a echarle un vistazo a nuestra guía en un vídeo que aborda todo sobre la curación: diff --git a/website/pages/es/network/delegating.mdx b/website/pages/es/network/delegating.mdx index 2d1b5ab66ee3..8b5633c5e01b 100644 --- a/website/pages/es/network/delegating.mdx +++ b/website/pages/es/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegar --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Guía del Delegador -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly. The Ethereum community provides a comprehensive resource regarding wallets through the following link ([source](https://ethereum.org/en/wallets/)). There are three sections in this guide: @@ -24,15 +34,19 @@ A continuación se enumeran los principales riesgos de ser un Delegador en el pr Los Delegadores no pueden ser recortados por mal comportamiento, pero existe un impuesto sobre los Delegadores para desincentivar la toma de malas decisiones que puedan perjudicar la integridad de la red. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. 
For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### El período de unbonding (desvinculación) de la delegación Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
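As a rough illustration of the break-even calculation mentioned above, the sketch below estimates how many days it takes to earn back the 0.5% delegation tax. The delegation amount and the assumed reward rate are hypothetical; actual returns depend on the Indexer's parameters and network conditions:

```typescript
// Hypothetical break-even sketch for the 0.5% delegation tax described above.
const DELEGATION_TAX = 0.005;

function daysToEarnBackTax(delegatedGrt: number, assumedAnnualRewardRate: number): number {
  const taxPaid = delegatedGrt * DELEGATION_TAX; // burned up front
  const stakeAfterTax = delegatedGrt - taxPaid;
  const rewardsPerDay = (stakeAfterTax * assumedAnnualRewardRate) / 365;
  return taxPaid / rewardsPerDay;
}

// Example: delegate 1,000 GRT (5 GRT burned) and assume a 10% effective annual
// reward rate after the Indexer's cuts — purely illustrative numbers.
console.log(daysToEarnBackTax(1_000, 0.1).toFixed(1)); // ≈ 18.3 days
```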
![Delegation unbonding](/img/Delegation-Unbonding.png) _Ten en cuenta la tasa del 0,5% en la UI de la Delegación, así @@ -41,9 +55,13 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Elige un Indexador fiable, que pague recompensas justas a sus Delegadores -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *El Indexador de arriba está dando a los Delegadores el 90% de @@ -51,38 +69,52 @@ Indexing Reward Cut - The indexing reward cut is the portion of the rewards that Delegadores*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### Calculando el retorno esperado para los Delegadores +## Calculating Delegators' Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- Un Delegador técnico también puede ver la capacidad de los Indexadores para usar los tokens que han sido delegados y la capacidad de disponibilidad a su favor. Si un Indexador no está asignando todos los tokens disponibles, no está obteniendo el beneficio máximo que podría obtener para sí mismo o para sus Delegadores. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### Siempre ten en cuenta la tarifa por consulta y el recorte de recompensas para el Indexador -Como se ha descrito en las secciones anteriores, debes elegir un indexador que sea transparente y honesto a la hora de establecer su corte de tarifa de consulta y cortes de tarifa de indexación. Un Delegador también debe fijarse en el tiempo de enfriamiento de los parámetros para ver de cuánto tiempo disponen.
Una vez hecho esto, es bastante sencillo calcular la cantidad de recompensas que reciben los Delegadores. La fórmula es: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegación Imagen 3](/img/Delegation-Reward-Formula.png) ### Tener en cuenta el pool de delegación de cada Indexador -Otra cosa que tiene que tener en cuenta un Delegador es qué proporción del Pool de Delegación posee. Todas las recompensas de la delegación se reparten de forma equitativa, con un simple reequilibrio del pool determinado por la cantidad que el Delegador haya depositado en el pool. De este modo, el Delegador recibe una parte del pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Fórmula para compartir](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Fórmula para compartir](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Considerar la capacidad de delegación -Otra cosa a tener en cuenta es la capacidad de delegación. Actualmente, el Ratio de Delegación está fijado en 16. Esto significa que si un Indexador ha stakeado 1.000.000 GRT, su Capacidad de Delegación es de 16.000.000 GRT de tokens delegados que puede utilizar en el protocolo. Cualquier token delegado que supere esta cantidad diluirá todas las recompensas del Delegador. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -90,16 +122,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### Error de "Transacción Pendiente" en MetaMask -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. 
When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### Ejemplo -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Video guía de la interfaz de usuario de la red +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/es/network/developing.mdx b/website/pages/es/network/developing.mdx index 223472818228..9f172dd06432 100644 --- a/website/pages/es/network/developing.mdx +++ b/website/pages/es/network/developing.mdx @@ -2,52 +2,29 @@ title: Desarrollando --- -Los desarrolladores representan el lado de la demanda del ecosistema The Graph. Los developers construyen subgrafos y los publican en The Graph Network. A continuación, consultan los subgrafos activos con GraphQL para potenciar sus aplicaciones. +To start coding right away, go to [Developer Quick Start](/quick-start/). -## Ciclo de vida de un Subgrafo +## Descripción -Los subgrafos deployados en la red tienen un ciclo de vida definido. +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
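To give a sense of what that solution looks like from the dapp side, here is a minimal sketch of sending a GraphQL query to a subgraph endpoint. The endpoint URL follows the Subgraph Studio pattern mentioned elsewhere in these docs, and the `tokens` entity and its fields are placeholders, not a real subgraph:

```typescript
// Minimal sketch of querying a subgraph over HTTP with a GraphQL query.
// The endpoint and the `tokens` entity below are placeholders for illustration.
const SUBGRAPH_ENDPOINT = "https://api.studio.thegraph.com/query/<id>/<subgraph-name>/<version>";

const query = /* GraphQL */ `
  {
    tokens(first: 5) {
      id
      owner
    }
  }
`;

async function querySubgraph(): Promise<void> {
  const response = await fetch(SUBGRAPH_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });

  const { data, errors } = await response.json();
  if (errors) {
    console.error("GraphQL errors:", errors);
    return;
  }
  console.log(data);
}

querySubgraph();
```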
-### Construir a nivel local +On The Graph, you can: -Al igual que con todo el desarrollo de subgrafos, se comienza con el desarrollo y prueba local. Los desarrolladores pueden utilizar la misma configuración local tanto si construyen para The Graph Network, el Servicio Alojado o un Graph Node local, aprovechando `graph-cli` y `graph-ts` para construir su subgrafo. Se anima a los desarrolladores a utilizar herramientas como [Matchstick](https://github.com/LimeChain/matchstick) para realizar pruebas unitarias y mejorar la solidez de sus subgrafos. +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. -> Existen ciertas limitaciones en The Graph Network, en términos de características y soporte de red. Solo los subgrafos en [redes suportadas](/developing/supported-networks) obtienen recompensas de indexación, y los subgrafos que obtienen datos de IPFS tampoco son elegibles. +### What is GraphQL? -### Deploy to Subgraph Studio +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. +### Developer Actions -### Publicar a la red +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. -Cuando el desarrollador está satisfecho con su subgrafo, puede publicarlo en The Graph Network. Esta es una acción on-chain, que registra el subgrafo para que pueda ser descubierto por los Indexadores. Los subgrafos publicados tienen su correspondiente NFT, que es fácilmente transferible. El subgrafo publicado tiene metadatos asociados, que proporcionan a otros participantes de la red un contexto e información útiles. +### What are subgraphs? -### Señalar para fomentar la indexación +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Es poco probable que los subgrafos publicados sean recogidos por los Indexadores sin la adición de la señal. La señal es GRT bloqueado asociado a un subgrafo determinado, que indica a los Indexadores que un subgrafo determinado recibirá un volumen de consultas, y también contribuye a las recompensas de indexación disponibles por procesarlo. Los desarrolladores de subgrafos generalmente añadirán una señal a su subgrafo para fomentar la indexación. Los Curadores de terceros también pueden señalar un subgrafo determinado, si consideran que el subgrafo puede generar un volumen de consultas. - -### Consultas & desarrollo de aplicaciones - -Una vez que un subgrafo ha sido procesado por los Indexadores y está disponible para su consulta, los desarrolladores pueden empezar a utilizar el subgrafo en sus aplicaciones. 
Los desarrolladores consultan los subgrafos a través de una Gateway, que reenvía sus consultas a un Indexador que haya procesado el subgrafo, pagando las tarifas de consulta en GRT. - -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. - -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. - -### Updating Subgraphs - -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. - -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. - -### Deprecar un Subgrafo - -En algún momento un developer puede decidir que ya no necesita un subgrafo publicado. En ese momento pueden deprecar el subgrafo, lo que devuelve cualquier GRT señalada a los Curadores. - -### Diversos roles de desarrollador - -Algunos desarrolladores participarán en el ciclo de vida completo de los subgrafos en la red, publicando, consultando e iterando sobre sus propios subgrafos. Algunos se centrarán en el desarrollo de subgrafos, creando APIs abiertas en las que otros puedan basarse. Otros pueden centrarse en la aplicación, consultando subgrafos deployados por otros. - -### Desarrolladores y economía de la red - -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +Check out the documentation on [subgraphs](/subgraphs/) to learn specifics. diff --git a/website/pages/es/network/explorer.mdx b/website/pages/es/network/explorer.mdx index b2f43cebf2a2..7f8dee22a2e7 100644 --- a/website/pages/es/network/explorer.mdx +++ b/website/pages/es/network/explorer.mdx @@ -2,21 +2,35 @@ title: Explorador de Graph --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. 
+ +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgrafos -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you finish deploying and publishing your subgraph in Subgraph Studio, click on the “Subgraphs” tab at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -Cuando hagas clic en un subgrafo, podrás probar consultas en el playground y podrás aprovechar los detalles de la red para tomar decisiones informadas. También podrás señalar GRT en tu propio subgrafo o en los subgrafos de otros para que los indexadores sean conscientes de su importancia y calidad. Esto es fundamental porque señalar en un subgrafo incentiva su indexación, lo que significa que saldrá a la luz en la red para eventualmente entregar consultas. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Imagen de Explorer 2](/img/Subgraph-Details.png) -En la página de cada subgrafo, aparecen varios detalles. Entre ellos se incluyen: +On each subgraph’s dedicated page, you can do the following: - Señalar/dejar de señalar un subgrafo - Ver más detalles como gráficos, ID de implementación actual y otros metadatos @@ -31,26 +45,32 @@ En la página de cada subgrafo, aparecen varios detalles. Entre ellos se incluye ## Participantes -Dentro de esta pestaña, tendras una mirada general de todas las personas que están participando en las actividades de la red, como los Indexadores, los Delegadores y los Curadores. A continuación, revisaremos en profundidad lo que significa cada pestaña para ti. +This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexadores ![Imagen de Explorer 4](/img/Indexer-Pane.png) -Comencemos con los Indexadores. Los Indexadores son la columna vertebral del protocolo, ya que son los que stakean en los subgrafos, los indexan y proveen consultas a cualquiera que consuma subgrafos. En la tabla de Indexadores, podrás ver los parámetros de delegación de un Indexador, su participación, cuánto han stakeado en cada subgrafo y cuántos ingresos han obtenido por las tarifas de consulta y las recompensas de indexación. Profundizaremos un poco más a continuación: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
-- Query Fee Cut: es el porcentaje de los reembolsos obtenidos por la tarifa de consulta que el Indexador conserva cuando se divide con los Delegadores -- Effective Reward Cut: es el recorte de recompensas por indexación que se aplica al pool de delegación. Si es negativo, significa que el Indexador está regalando parte de sus beneficios. Si es positivo, significa que el Indexador se queda con alguno de tus beneficios -- Cooldown Remaining: el tiempo restante que le permitirá al Indexador cambiar los parámetros de delegación. Los plazos de configuración son ajustados por los Indexadores cuando ellos actualizan sus parámetros de delegación -- Owned: esta es la participación (o el stake) depositado por el Indexador, la cual puede reducirse por su mal comportamiento -- Delegated: participación de los Delegadores que puede ser asignada por el Indexador, pero que no se puede recortar -- Allocated: es el stake que los indexadores están asignando activamente a los subgrafos que están indexando -- Available Delegation Capacity: la cantidad de participación delegada que los Indexadores aún pueden recibir antes de que se sobredeleguen +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity: la cantidad máxima de participación delegada que el Indexador puede aceptar de forma productiva. Un exceso de participación delegada no puede utilizarse para asignaciones o cálculos de recompensas. -- Query Fees: estas son las tarifas totales que los usuarios (clientes) han pagado por todas las consultas de un Indexador +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards: este es el total de recompensas del Indexador obtenidas por el Indexador y sus Delegadores durante todo el tiempo que trabajaron en conjunto. Las recompensas de los Indexadores se pagan mediante la emisión de GRT. -Los Indexadores pueden ganar tanto comisiones de consulta como recompensas de indexación. Funcionalmente, esto ocurre cuando los participantes de la red delegan GRT a un Indexador. Esto permite a los Indexadores recibir tarifas de consulta y recompensas en función de sus parámetros de indexación. Los parámetros de indexación se establecen haciendo clic en la parte derecha de la tabla, o entrando en el perfil de un Indexador y haciendo clic en el botón "Delegar". +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. 
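The figures in the Indexers table are themselves data indexed on the network, so they can also be fetched with a GraphQL query against The Graph Network subgraph. The sketch below is illustrative only: the entity and field names are assumptions modeled on the columns described above and may not match the network subgraph's actual schema.

```graphql
# Hypothetical query against The Graph Network subgraph:
# the ten largest Indexers, with the parameters shown in the table above.
{
  indexers(first: 10, orderBy: stakedTokens, orderDirection: desc) {
    id
    stakedTokens # Owned (self) stake
    delegatedTokens # Delegated stake
    allocatedTokens # Stake actively allocated to subgraphs
    queryFeeCut # Query Fee Cut parameter
    indexingRewardCut # Indexing reward cut parameter
  }
}
```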
+ +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. Para obtener más información sobre cómo convertirte en un Indexador, puedes consultar la [documentación oficial](/network/indexing) o [The Graph Academy Indexer Guides.](https://thegraph.academy/delegators/ eligiendo-indexadores/) @@ -58,9 +78,13 @@ Para obtener más información sobre cómo convertirte en un Indexador, puedes c ### 2. Curadores -Los Curadores analizan los subgrafos para identificar cuáles son los de mayor calidad. Una vez que un Curador ha encontrado un subgrafo potencialmente atractivo, puede curarlo señalando su bonding curve. De este modo, los Curadores hacen saber a los Indexadores qué subgrafos son de alta calidad y deben ser indexados. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Los Curadores pueden ser miembros de la comunidad, consumidores de datos o incluso developers de subgrafos que señalan en sus propios subgrafos depositando tokens GRT en una bonding curve. Al depositar GRT, los Curadores anclan sus participaciones como curadores de un subgrafo. Como resultado, los Curadores son elegibles para ganar una parte de las tarifas de consulta que genera el subgrafo que han señalado. La bonding curve incentiva a los Curadores a curar fuentes de datos de la más alta calidad. La tabla de Curador en esta sección te permitirá ver: +In the The Curator table listed below you can see: - La fecha en que el Curador comenzó a curar - El número de GRT que se depositaron @@ -68,34 +92,36 @@ Los Curadores pueden ser miembros de la comunidad, consumidores de datos o inclu ![Imagen de Explorer 6](/img/Curation-Overview.png) -Si deseas obtener más información sobre el rol de Curador, puedes hacerlo visitando los siguientes enlaces de [The Graph Academy](https://thegraph.academy/curators/) o [documentación oficial.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegadores -Los Delegadores juegan un rol esencial en la seguridad y descentralización que conforman la red de The Graph. Participan en la red delegando (es decir, "stakeado") tokens GRT a uno o varios Indexadores. Sin Delegadores, es menos probable que los Indexadores obtengan recompensas y tarifas significativas. Por lo tanto, los Indexadores buscan atraer Delegadores ofreciéndoles una parte de las recompensas de indexación y las tarifas de consulta que ganan. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. 
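Delegation positions are also indexed data, so the per-Delegator figures summarized further down can be pulled programmatically as well. The sketch below is an assumption-heavy example: the `delegator` entity and its field names may differ from the actual network subgraph schema, and the address is a placeholder.

```graphql
# Hypothetical query: one Delegator's positions across Indexers.
# "0xDELEGATOR_ADDRESS" is a placeholder, not a real address.
{
  delegator(id: "0xDELEGATOR_ADDRESS") {
    id
    totalStakedTokens # total GRT currently delegated
    stakes {
      stakedTokens # GRT delegated to this Indexer
      indexer {
        id
      }
    }
  }
}
```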
-Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![Imagen de Explorer 7](/img/Delegation-Overview.png) -La tabla de Delegadores te permitirá ver los Delegadores activos en la comunidad, así como las siguientes métricas: +In the Delegators table you can see the active Delegators in the community and important metrics: - El número de Indexadores a los que delega este Delegador - La delegación principal de un Delegador - Las recompensas que han ido acumulando, pero que aún no han retirado del protocolo - Las recompensas realizadas, es decir, las que ya retiraron del protocolo - Cantidad total de GRT que tienen actualmente dentro del protocolo -- La fecha en la que delegaron por última vez +- The date they last delegated -Si deseas obtener más información sobre cómo convertirte en Delegador, ¡no busques más! Todo lo que tienes que hacer es dirigirte a la [documentación oficial](/network/delegating) o [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Red -En la sección red, verás los KPI globales, así como la capacidad de cambiar a una base por ciclo y analizar las métricas de la red con más detalle. Estos detalles te darán una idea de cómo se está desempeñando la red a lo largo del tiempo. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### Descripción -The overview section has all the current network metrics as well as some cumulative metrics over time. 
Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - La cantidad total de stake que circula en estos momentos - La participación que se divide entre los Indexadores y sus Delegadores @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Parámetros del protocolo como las recompensas de curación, tasa de inflación y más - Recompensas y tarifas del ciclo actual -Algunos detalles clave que vale la pena mencionar: +A few key details to note: -- **Las tarifas de consulta representan las tarifas generadas por los consumidores**, y que pueden ser reclamadas (o no) por los Indexadores después de un período de al menos 7 ciclos (ver más abajo) después de que se han cerrado las asignaciones hacia los subgrafos y los datos que servían han sido validados por los consumidores. -- **Las recompensas de indexación representan la cantidad de recompensas que los Indexadores reclamaron por la emisión de la red durante el ciclo.** Aunque la emisión del protocolo es fija, las recompensas solo se anclan una vez que los Indexadores cierran sus asignaciones hacia los subgrafos que han indexado. Por lo tanto, el número de recompensas por ciclo suele variar (es decir, durante algunos ciclos, es posible que los Indexadores hayan cerrado colectivamente asignaciones que han estado abiertas durante muchos días). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Imagen de Explorer 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ En la sección de Epochs, puedes analizar, por cada epoch, métricas como: - El ciclo activo es aquel en la que los indexadores actualmente asignan su participación (staking) y cobran tarifas por consultas - Los ciclos liquidados son aquellos en los que ya se han liquidado las recompensas y demás métricas. Esto significa que los Indexadores están sujetos a recortes si los consumidores abren disputas en su contra. - Los ciclos de distribución son los ciclos en los que los canales correspondientes a los ciclos son establecidos y los Indexadores pueden reclamar sus reembolsos correspondientes a las tarifas de consulta. - - Los ciclos finalizados son los ciclos que no tienen reembolsos en cuanto a las tarifas de consulta, estos son reclamados por parte de los Indexadores, por lo que estos se consideran como finalizados. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Imagen de Explorer 9](/img/Epoch-Stats.png) ## Tu perfil de usuario -Ahora que hemos hablado de las estadísticas de la red, pasemos a tu perfil personal. Tu perfil personal es el lugar donde puedes ver tu actividad personal dentro de la red, sin importar cómo estés participando en la red. 
Tu crypto wallet actuará como tu perfil de usuario, y desde tu dashboard podrás ver lo siguiente: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Información general del perfil -Aquí es donde puedes ver las acciones actuales que realizaste. Aquí también podrás encontrar la información de tu perfil, la descripción y el sitio web (si agregaste uno). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Imagen de Explorer 10](/img/Profile-Overview.png) ### Pestaña de subgrafos -Si haces clic en la pestaña subgrafos, verás tus subgrafos publicados. Esto no incluirá ningún subgrafo implementado con la modalidad de CLI o con fines de prueba; los subgrafos solo aparecerán cuando se publiquen en la red descentralizada. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Imagen de Explorer 11](/img/Subgraphs-Overview.png) ### Pestaña de indexación -Si haces clic en la pestaña Indexación, encontrarás una tabla con todas las asignaciones activas e históricas hacia los subgrafos, así como gráficos que puedes analizar y ver tu desempeño anterior como Indexador. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Esta sección también incluirá detalles sobre las recompensas netas que obtienes como Indexador y las tarifas netas que recibes por cada consulta. Verás las siguientes métricas: @@ -158,7 +189,9 @@ Esta sección también incluirá detalles sobre las recompensas netas que obtien ### Pestaña de delegación -Los Delegadores son importantes para la red de The Graph. Un Delegador debe usar su conocimiento para elegir un Indexador que le proporcionará un retorno saludable y sostenible. Aquí puedes encontrar detalles de tus delegaciones activas e históricas, junto con las métricas de los Indexadores a los que delegaste en el pasado. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. En la primera mitad de la página, puedes ver tu gráfico de delegación, así como el gráfico de recompensas históricas. A la izquierda, puedes ver los KPI que reflejan tus métricas de delegación actuales. diff --git a/website/pages/es/network/indexing.mdx b/website/pages/es/network/indexing.mdx index a57c640869f9..e85cc01e0b86 100644 --- a/website/pages/es/network/indexing.mdx +++ b/website/pages/es/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Muchos de los paneles creados por la comunidad incluyen valores de recompensas pendientes y se pueden verificar fácilmente de forma manual siguiendo estos pasos: -1. Consulta el [ mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) para obtener los ID de todas las allocations activas: +1. 
Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -477,7 +477,7 @@ graph-indexer-agent start \ --index-node-ids default \ --indexer-management-port 18000 \ --metrics-port 7040 \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ --default-allocation-amount 100 \ --register true \ --inject-dai true \ @@ -512,7 +512,7 @@ graph-indexer-service start \ --postgres-username \ --postgres-password \ --postgres-database is_staging \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ | pino-pretty ``` @@ -545,7 +545,7 @@ La **CLI del Indexador** se conecta al agente Indexador, normalmente a través d - `graph indexer rules maybe [options] ` - Configura `thedecisionBasis` para un deploy en `rules`, de modo que el agente Indexador use las reglas de indexación para decidir si debe indexar este deploy. -- `graph indexer actions get [options] ` - Obtiene una o más acciones usando `all` o deja `action-id` vacío para obtener todas las acciones. Un argumento adicional `--status` se puede utilizar para imprimir todas las acciones de un determinado estado. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Acción de allocation en fila @@ -810,7 +810,7 @@ To set the delegation parameters using Graph Explorer interface, follow these st ### La vida de una allocation -Después de ser creada por un Indexador, una allocation saludable pasa por cuatro fases. +After being created by an Indexer a healthy allocation goes through two states. - **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. diff --git a/website/pages/es/network/overview.mdx b/website/pages/es/network/overview.mdx index bd83a749410e..79bf2e4c1921 100644 --- a/website/pages/es/network/overview.mdx +++ b/website/pages/es/network/overview.mdx @@ -2,14 +2,20 @@ title: Visión general de la red --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Descripción +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. 
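In practice, serving data to a web3 application comes down to answering ordinary GraphQL queries against a subgraph endpoint. The example below is a minimal, hypothetical sketch of such a query; the `tokens` entity is illustrative and not part of any particular subgraph.

```graphql
# A hypothetical dapp query against a subgraph endpoint
{
  tokens(first: 5) {
    id
    owner
  }
}
```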
+Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Economía de los tokens](/img/Network-roles@2x.png) -Para garantizar la seguridad económica de la red de The Graph y la integridad de los datos que se consultan, los participantes hacen stake y utilizan Graph Tokens ([GRT](/tokenomics)). GRT es un token de utilidad que se utiliza para asignar recursos en la red y es un estándar ERC-20. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. diff --git a/website/pages/es/new-chain-integration.mdx b/website/pages/es/new-chain-integration.mdx index 652d1a26d51a..6e61fadecf0c 100644 --- a/website/pages/es/new-chain-integration.mdx +++ b/website/pages/es/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integración de nuevas redes +title: New Chain Integration --- -El Graph Node actualmente puede indexar datos de los siguientes tipos de cadena: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, a través de EVM JSON-RPC y [Ethereum Firehose] (https://github.com/streamingfast/firehose-ethereum) -- NEAR, a través de [NEAR Firehose] (https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, a través de [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, a través de [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -Si estás interesado en alguna de esas cadenas, la integración es una cuestión de configuración y prueba de Graph Node. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -Si la cadena de bloques es equivalente a EVM y el cliente/nodo expone la EVM JSON-RPC API estándar, Graph Node debería poder indexar la nueva cadena. 
Para obtener más información, consulte [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Probando un EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Diferencia entre EVM JSON-RPC y Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, en una solicitud por lotes JSON-RPC +- `trace_filter` *(optionally required for Graph Node to support call handlers)* -Si bien los dos son adecuados para subgrafos, siempre se requiere un Firehose para los desarrolladores que quieran compilar con [Substreams](substreams/), como crear [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). Además, Firehose permite velocidades de indexación mejoradas en comparación con JSON-RPC. +### 2. Firehose Integration -Los nuevos integradores de cadenas EVM también pueden considerar el enfoque basado en Firehose, dados los beneficios de los substreams y sus enormes capacidades de indexación en paralelo. El soporte de ambos permite a los desarrolladores elegir entre crear substreams o subgrafos para la nueva cadena. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTA**: Una integración basada en Firehose para cadenas EVM aún requerirá que los indexadores ejecuten el nodo RPC de archivo de la cadena para indexar correctamente los subgrafos. Esto se debe a la incapacidad de Firehose para proporcionar un estado de contrato inteligente al que normalmente se puede acceder mediante el método RPC `eth_call`. (Vale la pena recordar que eth_calls [no es una buena práctica para desarrolladores] (https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. 
---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Probando un EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -Para que Graph Node pueda ingerir datos de una cadena EVM, el nodo RPC debe exponer los siguientes métodos EVM JSON RPC: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(para bloques históricos, con EIP-1898 - requiere nodo de archivo): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, en una solicitud por lotes JSON-RPC -- _`trace_filter`_ _(opcionalmente necesario para que Graph Node admita call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Configuración del Graph Node +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Empiece por preparar su entorno local** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Configuración del Graph Node + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. 
Modifique [esta línea](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) para incluir el nuevo nombre de la red y la URL compatible con EVM JSON RPC - > No cambie el nombre de la var env. Debe seguir siendo "ethereum" incluso si el nombre de la red es diferente. -3. Ejecute un nodo IPFS o use el utilizado por The Graph: https://api.thegraph.com/ipfs/ -**Prueba la integración implementando localmente un subgrafo** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Crea un subgrafo simple de prueba. Algunas opciones están a continuación: - 1. El contrato inteligente y el subgrafo [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) preempaquetados son un buen comienzo - 2. Arranca un subgrafo local desde cualquier contrato inteligente existente o entorno de desarrollo de solidity [usando Hardhat con un plugin Graph] (https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Crea tu subgrafo en Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publica tu subgrafo en Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Graph Node debería sincronizar el subgrafo implementado si no hay errores. Dale tiempo para que se sincronice y luego envíe algunas queries GraphQL al punto final de la API impreso en los registros. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integración de una nueva cadena habilitada para Firehose +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Crea un subgrafo simple de prueba. Algunas opciones están a continuación: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node debería sincronizar el subgrafo implementado si no hay errores. Dale tiempo para que se sincronice y luego envíe algunas queries GraphQL al punto final de la API impreso en los registros. -También es posible integrar una nueva cadena utilizando el enfoque Firehose. Actualmente, esta es la mejor opción para cadenas que no son EVM y un requisito para el soporte de substreams. La documentación adicional se centra en cómo funciona Firehose, agregando soporte de Firehose para una nueva cadena e integrándola con Graph Node. Documentos recomendados para integradores: +## Substreams-powered Subgraphs -1. 
[Documentos generales sobre Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integración de Graph Node con una nueva cadena a través de Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/es/querying/graphql-api.mdx b/website/pages/es/querying/graphql-api.mdx index 2086e994cd0a..0817a89f807e 100644 --- a/website/pages/es/querying/graphql-api.mdx +++ b/website/pages/es/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: API GraphQL --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Consultas +## What is GraphQL? -En tu esquema de subgrafos defines tipos llamados `Entities`. Por cada tipo de `Entity`, se generará un campo `entity` y `entities` en el nivel superior del tipo `Query`. Ten en cuenta que no es necesario incluir `query` en la parte superior de la consulta `graphql` cuando se utiliza The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Ejemplos @@ -21,7 +29,7 @@ Consulta por un solo `Token` definido en tu esquema: } ``` -> **Nota:** Cuando se consulta una sola entidad, el campo `id` es obligatorio y debe ser un string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Consulta todas las entidades `Token`: @@ -36,7 +44,10 @@ Consulta todas las entidades `Token`: ### Clasificación -Al consultar una colección, el parámetro `orderBy` puede utilizarse para ordenar por un atributo específico. Además, el `orderDirection` se puede utilizar para especificar la dirección de orden, `asc` para ascendente o `desc` para descendente. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Ejemplo @@ -53,7 +64,7 @@ Al consultar una colección, el parámetro `orderBy` puede utilizarse para orden A partir de Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), las entidades se pueden ordenar con base en entidades anidadas. 
-En el siguiente ejemplo, ordenamos los tokens por el nombre de su propietario: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ En el siguiente ejemplo, ordenamos los tokens por el nombre de su propietario: ### Paginación -Al consultar una colección, el parámetro `first` puede utilizarse para paginar desde el principio de la colección. Cabe destacar que el orden por defecto es por ID en orden alfanumérico ascendente, no por tiempo de creación. - -Además, el parámetro `skip` puede utilizarse para saltar entidades y paginar. por ejemplo, `first:100` muestra las primeras 100 entidades y `first:100, skip:100` muestra las siguientes 100 entidades. +When querying a collection, it's best to: -Las consultas deben evitar el uso de valores de `skip` muy grandes, ya que suelen tener un rendimiento deficiente. Para recuperar un gran número de elementos, es mucho mejor para paginar recorrer las entidades basándose en un atributo, como se muestra en el último ejemplo. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Ejemplo usando `first` @@ -106,7 +118,7 @@ Consulta 10 entidades `Token`, desplazadas 10 lugares desde el principio de la c #### Ejemplo usando `first` y `id_ge` -Si un cliente necesita recuperar un gran número de entidades, es mucho más eficaz basar las consultas en un atributo y filtrar por ese atributo. Por ejemplo, un cliente podría recuperar un gran número de tokens utilizando esta consulta: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -La primera vez, enviaría la consulta con `lastID = ""`, y para las siguientes peticiones establecería `lastID` al atributo `id` de la última entidad de la petición anterior. Este enfoque tendrá un rendimiento significativamente mejor que el uso de valores crecientes de `skip`. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtrado -Puedes utilizar el parámetro `where` en tus consultas para filtrar por diferentes propiedades. Puedes filtrar por múltiples valores dentro del parámetro `where`. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Ejemplo usando `where` @@ -155,7 +168,7 @@ Puedes utilizar sufijos como `_gt`, `_lte` para la comparación de valores: #### Ejemplo de filtrado de bloques -También puedes filtrar entidades por el `_change_block(number_gte: Int)`: esto filtra las entidades que se actualizaron en o después del bloque especificado. 
+You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. Esto puede ser útil si buscas obtener solo las entidades que han cambiado, por ejemplo, desde la última vez que realizaste una encuesta. O, alternativamente, puede ser útil para investigar o depurar cómo cambian las entidades en tu subgrafo (si se combina con un filtro de bloque, puedes aislar solo las entidades que cambiaron en un bloque específico). @@ -193,7 +206,7 @@ A partir de Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/r ##### Operador `AND` -En el siguiente ejemplo, estamos filtrando desafíos con `coutcome` `succeeded` y `number` mayor o igual a `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -223,7 +236,7 @@ En el siguiente ejemplo, estamos filtrando desafíos con `coutcome` `succeeded` ##### Operador `OR` -En el siguiente ejemplo, estamos filtrando desafíos con `coutcome` `succeeded` y `number` mayor o igual a `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) Puedes consultar el estado de tus entidades no solo para el último bloque, que es el predeterminado, sino también para un bloque arbitrario en el pasado. El bloque en el que debe ocurrir una consulta se puede especificar por su número de bloque o su hash de bloque al incluir un argumento `block` en los campos de nivel superior de las consultas. -El resultado de dicha consulta no cambiará con el tiempo, por ejemplo, consultar en un determinado bloque anterior devolverá el mismo resultado sin importar cuándo se ejecute, con la excepción de que si consultas en un bloque muy cerca de la cabecera de la cadena Ethereum, el resultado podría cambiar si ese bloque resulta no estar en la cadena principal y la cadena se reorganiza. Una vez que un bloque puede considerarse final, el resultado de la consulta no cambiará. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Ten en cuenta que la implementación actual todavía está sujeta a ciertas limitaciones que podrían violar estas garantías. La implementación no siempre puede demostrar que un hash de bloque dado no está en la cadena principal, o que el resultado de una consulta por hash de bloque para un bloque que no puede considerarse final aún podría estar influenciado por una reorganización de bloque que se ejecuta simultáneamente con la consulta. Esto no afecta los resultados de consultas por hash de bloque cuando el bloque es final y se sabe que está en la cadena principal. [Este problema](https://github.com/graphprotocol/graph-node/issues/1405) explica en detalle cuáles son estas limitaciones. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. 
The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### Ejemplo @@ -376,11 +389,11 @@ Graph Node implementa una validación [basada en especificaciones](https://spec. ## Esquema -El esquema de tu fuente de datos, es decir, los tipos de entidad, los valores y las relaciones que están disponibles para consultar, se definen a través de [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/# sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -Los esquemas de GraphQL generalmente definen tipos raíz para `queries`, `subscriptions` y `mutations`. The Graph solo admite `queries`. El tipo raíz `Query` para tu subgrafo se genera automáticamente a partir del esquema de GraphQL que se incluye en tu manifiesto de subgrafo. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Nota:** nuestra API no expone mutaciones porque se espera que los desarrolladores emitan transacciones directamente contra la cadena de bloques subyacente desde sus aplicaciones. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entidades diff --git a/website/pages/es/querying/managing-api-keys.mdx b/website/pages/es/querying/managing-api-keys.mdx index cdb6f9b3fb17..cdbad6cb7c81 100644 --- a/website/pages/es/querying/managing-api-keys.mdx +++ b/website/pages/es/querying/managing-api-keys.mdx @@ -2,23 +2,33 @@ title: Administración de tus claves API --- -Independientemente de si eres un developer de aplicaciones descentralizadas (apps) o un developer de subgrafos, necesitarás administrar tus claves de API. Esto es importante para que puedas consultar los subgrafos porque las claves API aseguran que las conexiones entre los servicios de la aplicación sean válidas y están autorizadas. Esto incluye la autenticación del usuario final y del dispositivo que utiliza la aplicación. +## Descripción -The "API keys" table lists out existing API keys, which will give you the ability to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, as well as total query numbers. You can click the "three dots" menu to edit a given API key: +API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. 
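Concretely, an API key is part of the gateway URL that an application sends its queries to, and the request body is an ordinary GraphQL document. The sketch below illustrates the idea; the endpoint shape mirrors the Graph Explorer gateway endpoint shown elsewhere in these docs, and the placeholder names are illustrative.

```graphql
# POST this query body to a gateway URL of roughly this shape
# (placeholders are illustrative, replace them with your own values):
#   https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>
# The <API_KEY> in the URL is what gets metered against your GRT balance.
{
  _meta {
    block {
      number
    }
  }
}
```

A cheap metadata query like this is a convenient way to confirm that a newly created key is accepted before wiring it into an application.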
+ +### Create and Manage API Keys + +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. + +The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. + +You can click the "three dots" menu to the right of a given API key to: - Rename API key - Regenerate API key - Delete API key - Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +### API Key Details + You can click on an individual API key to view the Details page: -1. La sección ** Visión General ** te permitirá: +1. Under the **Overview** section, you can: - Editar el nombre de tu clave - Regenerar las claves API - Ver el uso actual de la clave API con estadísticas: - Número de consultas - Cantidad de GRT gastado -2. En **Seguridad**, podrás optar por la configuración de seguridad en función del nivel de control que quieras tener sobre tus claves API. En esta sección, puedes: +2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Ver y administrar los nombres de dominio autorizados a utilizar tu clave API - Asignar subgrafos que puedan ser consultados con tu clave API diff --git a/website/pages/es/querying/querying-best-practices.mdx b/website/pages/es/querying/querying-best-practices.mdx index 82e8d0cb9da2..8336be7374a7 100644 --- a/website/pages/es/querying/querying-best-practices.mdx +++ b/website/pages/es/querying/querying-best-practices.mdx @@ -2,17 +2,15 @@ title: Mejores Prácticas para Consultas --- -The Graph proporciona una forma descentralizada de consultar datos de la blockchain. +The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Los datos de The Graph Network se exponen a través de una API GraphQL, lo que facilita la consulta de datos con el lenguaje GraphQL. - -Esta página te guiará a través de las reglas esenciales del lenguaje GraphQL y las mejores prácticas de consulta GraphQL. +Learn the essential GraphQL language rules and best practices to optimize your subgraph. --- ## Consulta de una API GraphQL -### Anatomía de una consulta GraphQL +### The Anatomy of a GraphQL Query A diferencia de la API REST, una API GraphQL se basa en un esquema que define las consultas que se pueden realizar. @@ -52,7 +50,7 @@ query [operationName]([variableName]: [variableType]) { } ``` -Aunque la lista de lo que se debe y no se debe hacer sintácticamente es larga, estas son las reglas esenciales que hay que tener en cuenta a la hora de escribir consultas GraphQL: +## Rules for Writing GraphQL Queries - Cada `queryName` sólo debe utilizarse una vez por operación. - Cada `field` debe utilizarse una sola vez en una selección (no podemos consultar el `id` dos veces bajo `token`) @@ -61,9 +59,9 @@ Aunque la lista de lo que se debe y no se debe hacer sintácticamente es larga, - En una lista dada de variables, cada una de ellas debe ser única. - Deben utilizarse todas las variables definidas. -Si no se siguen las reglas anteriores, se producirá un error de la API Graph. +> Note: Failing to follow these rules will result in an error from The Graph API. 
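As a quick illustration of these rules, the following query is well formed: it uses a single operation name, declares one variable, assigns it to an argument of a matching type, and uses every declared variable exactly once. The `token` entity mirrors the examples used earlier in these docs and is purely illustrative.

```graphql
# One operation name, one declared variable, every variable used exactly once,
# and no field repeated inside a selection set.
query GetToken($id: ID!) {
  token(id: $id) {
    id
    owner
  }
}
```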
-For a complete list of rules with code examples, please look at our [GraphQL Validations guide](/release-notes/graphql-validations-migration-guide/). +For a complete list of rules with code examples, check out [GraphQL Validations guide](/release-notes/graphql-validations-migration-guide/). ### Envío de una consulta a una API GraphQL @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as mentioned in ["Querying from an Application"](/querying/querying-from-an-application), it's recommended to use `graph-client`, which supports the following unique features: - Manejo de subgrafos cross-chain: Consulta de varios subgrafos en una sola consulta - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Las variables pueden almacenarse en caché** a nivel de servidor - **Las consultas pueden ser analizadas estáticamente por herramientas** (más información al respecto en las secciones siguientes) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 
For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- más difícil de leer para consultas más extensas -- cuando se utilizan herramientas que generan tipos TypeScript basados en consultas (_más sobre esto en la última sección_), `newDelegate` y `oldDelegate` darán como resultado dos interfaces en línea distintas. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### Qué hacer y qué no hacer con los GraphQL Fragments -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- cuando se repiten campos del mismo tipo en una consulta, agruparlos en un Fragment -- cuando se repiten campos similares pero no iguales, crear varios Fragments, ej: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## Las herramientas esenciales +## The Essential Tools ### Exploradores web GraphQL @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- resaltado de sintaxis -- sugerencias de autocompletar -- validación según el esquema -- fragmentos -- ir a la definición de fragments y tipos de entrada +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. 
@@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp
 
 The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
 
-- resaltado de sintaxis
-- sugerencias de autocompletar
-- validación según el esquema
-- fragmentos
+- Syntax highlighting
+- Autocomplete suggestions
+- Validation against schema
+- Snippets
 
-More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features.
+For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/), which showcases all the plugin's main features.
diff --git a/website/pages/es/querying/querying-from-an-application.mdx b/website/pages/es/querying/querying-from-an-application.mdx
index 7a23241a3451..d15fea274da7 100644
--- a/website/pages/es/querying/querying-from-an-application.mdx
+++ b/website/pages/es/querying/querying-from-an-application.mdx
@@ -2,42 +2,46 @@ title: Consultar desde una Aplicación
 ---
 
-Once a subgraph is deployed to Subgraph Studio or to Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this:
+Learn how to query The Graph from your application.
 
-**Subgraph Studio (endpoint de prueba)**
+## Getting the GraphQL Endpoint
 
-```sh
-Queries (HTTP)
+Once a subgraph is deployed to [Subgraph Studio](https://thegraph.com/studio/) or [Graph Explorer](https://thegraph.com/explorer), you will be given the endpoint for your GraphQL API that should look something like this:
+
+### Subgraph Studio
+
+```
 https://api.studio.thegraph.com/query///
 ```
 
-**Graph Explorer**
+### Graph Explorer
 
-```sh
-Queries (HTTP)
+```
 https://gateway.thegraph.com/api//subgraphs/id/
 ```
 
-Usando el endpoint de GraphQL, puedes usar varias librerías de Clientes de GraphQL para consultar el subgrafo y rellenar tu aplicación con los datos indexados por el subgrafo.
-
-A continuación se presentan un par de clientes GraphQL más populares en el ecosistema y cómo utilizarlos:
+With your GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with data indexed by the subgraph.
 
-## Clientes de GraphQL
+## Using Popular GraphQL Clients
 
-### Cliente de Graph
+### Graph Client
 
-The Graph proporciona su propio cliente GraphQL, `graph-client`, que admite características únicas como:
+The Graph provides its own GraphQL client, `graph-client`, which supports unique features such as:
 
 - Manejo de subgrafos cross-chain: consultas desde múltiples subgrafos en una sola consulta
 - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
 - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
 - Resultado completamente tipificado
 
-Also integrated with popular GraphQL clients such as Apollo and URQL and compatible with all environments (React, Angular, Node.js, React Native), using `graph-client` will give you the best experience for interacting with The Graph.
+> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native. As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph.
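+
+Before wiring up a client library, you can sanity-check the endpoint you copied above with the standard `fetch` API, since a GraphQL request is just an HTTP POST with a JSON body. The snippet below is only a sketch: the endpoint constant and the `_meta` block-number query are placeholders, so substitute your own query URL and, if you prefer, a field from your subgraph's schema.
+
+```tsx
+const SUBGRAPH_QUERY_URL = '<YOUR_SUBGRAPH_QUERY_URL>' // the Subgraph Studio or Graph Explorer endpoint shown above
+
+async function checkEndpoint(): Promise<void> {
+  const response = await fetch(SUBGRAPH_QUERY_URL, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    // A GraphQL request body is a JSON object with a `query` string (and optional `variables`).
+    body: JSON.stringify({ query: '{ _meta { block { number } } }' }),
+  })
+  const { data, errors } = await response.json()
+  console.log(data ?? errors)
+}
+
+checkEndpoint()
+```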
+ +### Fetch Data with Graph Client + +Let's look at how to fetch data from a subgraph with `graph-client`: -Let's look at how to fetch data from a subgraph with `graphql-client`. +#### Paso 1 -To get started, make sure to install The Graph Client CLI in your project: +Install The Graph Client CLI in your project: ```sh yarn add -D @graphprotocol/client-cli @@ -45,6 +49,8 @@ yarn add -D @graphprotocol/client-cli npm install --save-dev @graphprotocol/client-cli ``` +#### Paso 2 + Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file): ```graphql @@ -72,7 +78,9 @@ query ExampleQuery { } ``` -Then, create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +#### Paso 3 + +Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: ```yaml # .graphclientrc.yml @@ -90,13 +98,17 @@ documents: - ./src/example-query.graphql ``` -Running the following The Graph Client CLI command will generate typed and ready to use JavaScript code: +#### Step 4 + +Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: ```sh graphclient build ``` -Finally, update your `.ts` file to use the generated typed GraphQL documents: +#### Step 5 + +Update your `.ts` file to use the generated typed GraphQL documents: ```tsx import React, { useEffect } from 'react' @@ -134,33 +146,35 @@ function App() { export default App ``` -**⚠️ Important notice** +> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. -`graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you will [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). +### Apollo Client -However, if you choose to go with another client, keep in mind that **you won't be able to get to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android. -### Cliente Apollo +Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: -[Apollo client](https://www.apollographql.com/docs/) is the ubiquitous GraphQL client on the front-end ecosystem. +- Advanced error handling +- Paginación +- Data prefetching +- Optimistic UI +- Local state management -Available for React, Angular, Vue, Ember, iOS, and Android, Apollo Client, although the heaviest client, brings many features to build advanced UI on top of GraphQL: +### Fetch Data with Apollo Client -- advanced error handling (manejo avanzado de errores) -- pagination (paginado) -- data prefetching (captura previa de datos) -- optimistic UI (interfaz de usuario optimista) -- local state management (gestión de estado local) +Let's look at how to fetch data from a subgraph with Apollo client: -Let's look at how to fetch data from a subgraph with Apollo client in a web project. 
+#### Paso 1 -First, install `@apollo/client` and `graphql`: +Install `@apollo/client` and `graphql`: ```sh npm install @apollo/client graphql ``` -A continuación, puedes consultar la API con el siguiente código: +#### Paso 2 + +Query the API with the following code: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -193,6 +207,8 @@ client }) ``` +#### Paso 3 + To use variables, you can pass in a `variables` argument to the query: ```javascript @@ -224,24 +240,30 @@ client }) ``` -### URQL +### URQL Overview -Another option is [URQL](https://formidable.com/open-source/urql/) which is available within Node.js, React/Preact, Vue, and Svelte environments, with more advanced features: +[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: - Flexible cache system (Sistema de caché flexible) - Extensible design (Diseño extensible, que facilita agregar nuevas capacidades encima) - Lightweight bundle (Paquete ligero, ~5 veces más ligero que Apollo Client) - Soporte para carga de archivos y modo fuera de línea -Let's look at how to fetch data from a subgraph with URQL in a web project. +### Fetch data with URQL + +Let's look at how to fetch data from a subgraph with URQL: -First, install `urql` and `graphql`: +#### Paso 1 + +Install `urql` and `graphql`: ```sh npm install urql graphql ``` -A continuación, puedes consultar la API con el siguiente código: +#### Paso 2 + +Query the API with the following code: ```javascript import { createClient } from 'urql' diff --git a/website/pages/es/querying/querying-the-graph.mdx b/website/pages/es/querying/querying-the-graph.mdx index eb097ce2bdb6..fc87e026d7a1 100644 --- a/website/pages/es/querying/querying-the-graph.mdx +++ b/website/pages/es/querying/querying-the-graph.mdx @@ -2,7 +2,7 @@ title: Consultando The Graph --- -When a subgraph is published to The Graph Network, you can visit its subgraph details page on [Graph Explorer](https://thegraph.com/explorer) and use the "Playground" tab to explore the deployed GraphQL API for the subgraph, issuing queries and viewing the schema. +When a subgraph is published to The Graph Network, you can visit its subgraph details page on [Graph Explorer](https://thegraph.com/explorer) and use the "query" tab to explore the deployed GraphQL API for the subgraph, issuing queries and viewing the schema. > Please see the [Query API](/querying/graphql-api) for a complete reference on how to query the subgraph's entities. You can learn about GraphQL querying best practices [here](/querying/querying-best-practices) @@ -10,7 +10,9 @@ When a subgraph is published to The Graph Network, you can visit its subgraph de Each subgraph published to The Graph Network has a unique query URL in Graph Explorer for making direct queries that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. -![Panel de Consulta de Subgrafos](/img/query-subgraph-pane.png) +![Query Subgraph Button](/img/query-button-screenshot.png) + +![Query Subgraph URL](/img/query-url-screenshot.png) Learn more about querying from an application [here](/querying/querying-from-an-application). 
diff --git a/website/pages/es/quick-start.mdx b/website/pages/es/quick-start.mdx index b0765ca7fd36..f7c62e51c1f0 100644 --- a/website/pages/es/quick-start.mdx +++ b/website/pages/es/quick-start.mdx @@ -2,24 +2,26 @@ title: Comienzo Rapido --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily build, publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -Esta guía está escrita asumiendo que tú tienes: +## Prerrequisitos - Una wallet crypto -- Una dirección de un smart contract en la red de tu preferencia +- A smart contract address on a [supported network](/developing/supported-networks/) +- [Node.js](https://nodejs.org/) installed +- A package manager of your choice (`npm`, `yarn` or `pnpm`) + +## How to Build a Subgraph -## 1. Crea un subgrafo en el Subgraph Studio +### 1. Create a subgraph in Subgraph Studio -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -## 2. Instala the graph CLI +Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +### 2. Instala the graph CLI En tu dispositivo, ejecuta alguno de los siguientes comandos: @@ -35,133 +37,148 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 3. Initialize your subgraph + +> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). + +The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. -Initialize your subgraph from an existing contract by running the initialize command: +The following command initializes your subgraph from an existing contract: ```sh -graph init --studio +graph init ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. -Cuando inicies tu subgrafo, la herramienta CLI te preguntará por la siguiente información: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protocol: elige el protocolo desde el cual tu subgrafo indexará datos -- Subgraph slug: crea un nombre para tu subgrafo. El slug de tu subgrafo es un identificador para el mismo. 
-- Directorio para crear el subgrafo: elige el directorio local de tu elección -- Red Ethereum (opcional): Es posible que debas especificar desde qué red compatible con EVM tu subgrafo indexará datos -- Dirección del contrato: Localiza la dirección del contrato inteligente del que deseas consultar los datos -- ABI: Si el ABI no se completa automáticamente, deberás ingresar los datos manualmente en formato JSON -- Start Block: se sugiere que ingreses el bloque de inicio para ahorrar tiempo mientras tu subgrafo indexa los datos de la blockchain. Puedes ubicar el bloque de inicio encontrando el bloque en el que se deployó tu contrato. -- Nombre del contrato: introduce el nombre de tu contrato -- Indexar eventos del contrato como entidades: se sugiere que lo establezcas en "verdadero" ya que automáticamente agregará mapeos a tu subgrafo para cada evento emitido -- Añade otro contrato(opcional): puedes añadir otro contrato +- **Protocol**: Choose the protocol your subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- **Directory**: Choose a directory to create your subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Contract address**: Locate the smart contract address you’d like to query data from. +- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Contract Name**: Input the name of your contract. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Add another contract** (optional): You can add another contract. Ve la siguiente captura para un ejemplo de que debes de esperar cuando inicializes tu subgrafo: -![Subgraph command](/img/subgraph-init-example.png) +![Subgraph command](/img/CLI-Example.png) + +### 4. Edit your subgraph + +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. + +When making changes to the subgraph, you will mainly work with three files: -## 4. Write your subgraph +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -Los comandos anteriores crean un subgrafo de andamio que puedes utilizar como punto de partida para construir tu subgrafo. Al realizar cambios en el subgrafo, trabajarás principalmente con tres archivos: +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +### 5. Deploy your subgraph -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). 
+Remember, deploying is not the same as publishing.
 
-## 5. Deploy to Subgraph Studio
 
+When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it.
+
+When you publish a subgraph, you are publishing it onchain to the decentralized network.
 
 Una vez escrito tu subgrafo, ejecuta los siguientes comandos:
 
 ```sh
-$ graph codegen
-$ graph build
+graph codegen && graph build
 ```
 
+Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio.
+
+![Deploy key](/img/subgraph-studio-deploy-key.jpg)
+
+```sh
+graph auth
+
+graph deploy
+```
+
+The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
 
-- Autentica y deploya tu subgrafo. La clave para deployar se puede encontrar en la página de Subgraph en Subgraph Studio.
+### 6. Review your subgraph
 
+If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
+
+- Run a sample query.
+- Analyze your subgraph in the dashboard to check information.
+- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:
+
+  ![Subgraph logs](/img/subgraph-logs-image.png)
+
+### 7. Publish your subgraph to The Graph Network
+
+Publishing a subgraph to the decentralized network is an onchain action that makes your subgraph available for [Curators](/network/curating/) to curate it and [Indexers](/network/indexing/) to index it.
+
+#### Publishing with Subgraph Studio
+
+To publish your subgraph, click the Publish button in the dashboard.
+
+![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png)
+
+Select the network to which you would like to publish your subgraph.
+
+#### Publishing from the CLI
+
+As of version 0.73.0, you can also publish your subgraph with the Graph CLI.
+
+1. Open the `graph-cli`.
+
+2. Use the following commands:
+
 ```sh
-$ graph auth --studio
-$ graph deploy --studio
+graph codegen && graph build
 ```
 
-You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`.
-
-## 6. Prueba tu subgrafo
-
-In Subgraph Studio's playground environment, you can test your subgraph by making a sample query.
-
-Los registros te indicarán si hay algún error con tu subgrafo. Los registros de un subgrafo operativo se verán así:
-
-![Subgraph logs](/img/subgraph-logs-image.png)
-
-If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly:
-
-```graphql
-{
-  indexingStatuses(subgraphs: ["Qm..."]) {
-    node
-    synced
-    health
-    fatalError {
-      message
-      block {
-        number
-        hash
-      }
-      handler
-    }
-    nonFatalErrors {
-      message
-      block {
-        number
-        hash
-      }
-      handler
-    }
-    chains {
-      network
-      chainHeadBlock {
-        number
-      }
-      earliestBlock {
-        number
-      }
-      latestBlock {
-        number
-      }
-      lastHealthyBlock {
-        number
-      }
-    }
-    entityCount
-  }
-}
+Then,
+
+```sh
+graph publish
+```
+
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+
+![cli-ui](/img/cli-ui.png)
+
+To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/).
 
-## 7. Publish your subgraph to The Graph’s Decentralized Network
+#### Adding signal to your subgraph
 
-Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network.
+1. To attract Indexers to query your subgraph, you should add GRT curation signal to it.
 
-In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page.
+   - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph.
 
-Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq).
+2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
 
-The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month.
+   - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks.
 
-For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph.
+To learn more about curation, read [Curating](/network/curating/).
 
-Para ahorrar en costos de gas, puedes curar tu subgrafo en la misma transacción en la que lo publicas seleccionando este botón al publicar tu subgrafo en la red descentralizada de The Graph:
+To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option:
 
-![Subgraph publish](/img/publish-and-signal-tx.png)
+![Subgraph publish](/img/studio-publish-modal.png)
 
-## 8. Query your subgraph
+### 8. Query your subgraph
 
-Ahora puedes hacer consultas a tu subgrafo enviando consultas GraphQL a la URL de consulta de tu subgrafo, que puedes encontrar haciendo clic en el botón de consulta.
+You now have access to 100,000 free queries per month with your subgraph on The Graph Network!
 
-If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging.
+You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
 
-For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/).
+For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/).
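+
+As a quick smoke test, you can POST a GraphQL query to that Query URL from a small script. This is only a sketch: the URL constant and the `exampleEntities` field are illustrative placeholders, so replace them with your subgraph's actual Query URL and an entity name from your `schema.graphql`.
+
+```tsx
+const QUERY_URL = '<YOUR_SUBGRAPH_QUERY_URL>' // copied from the Query button in Subgraph Studio or Graph Explorer
+
+async function main(): Promise<void> {
+  const response = await fetch(QUERY_URL, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    // Ask only for the fields you need; `first` caps how many entities are returned.
+    body: JSON.stringify({ query: '{ exampleEntities(first: 5) { id } }' }),
+  })
+  console.log(await response.json())
+}
+
+main()
+```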
diff --git a/website/pages/es/sps/introduction.mdx b/website/pages/es/sps/introduction.mdx index 3e50521589af..12e3f81c6d53 100644 --- a/website/pages/es/sps/introduction.mdx +++ b/website/pages/es/sps/introduction.mdx @@ -14,6 +14,6 @@ It is really a matter of where you put your logic, in the subgraph or the Substr Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: -- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/solana) -- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/evm) -- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application/injective) +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/es/sps/triggers-example.mdx b/website/pages/es/sps/triggers-example.mdx index d8d61566295e..f5b1d99ba473 100644 --- a/website/pages/es/sps/triggers-example.mdx +++ b/website/pages/es/sps/triggers-example.mdx @@ -2,7 +2,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' --- -## Prerequisites +## Prerrequisitos Before starting, make sure to: @@ -11,6 +11,8 @@ Before starting, make sure to: ## Step 1: Initialize Your Project + + 1. Open your Dev Container and run the following command to initialize your project: ```bash @@ -18,6 +20,7 @@ Before starting, make sure to: ``` 2. Select the "minimal" project option. + 3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: ```yaml @@ -87,17 +90,7 @@ type MyTransfer @entity { This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. -## Step 4: Generate Protobuf Files - -To generate Protobuf objects in AssemblyScript, run the following command: - -```bash -npm run protogen -``` - -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. - -## Step 5: Handle Substreams Data in `mappings.ts` +## Step 4: Handle Substreams Data in `mappings.ts` With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: @@ -120,7 +113,7 @@ export function handleTriggers(bytes: Uint8Array): void { entity.designation = event.transfer!.accounts!.destination if (event.transfer!.accounts!.signer!.single != null) { - entity.signers = [event.transfer!.accounts!.signer!.single.signer] + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] } else if (event.transfer!.accounts!.signer!.multisig != null) { entity.signers = event.transfer!.accounts!.signer!.multisig!.signers } @@ -130,6 +123,16 @@ export function handleTriggers(bytes: Uint8Array): void { } ``` +## Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. 
+ ## Conclusion You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case. diff --git a/website/pages/es/subgraphs.mdx b/website/pages/es/subgraphs.mdx index 27b452211477..f41177ea6fbf 100644 --- a/website/pages/es/subgraphs.mdx +++ b/website/pages/es/subgraphs.mdx @@ -1,5 +1,5 @@ --- -title: Subgraphs +title: Subgrafos --- ## What is a Subgraph? @@ -24,7 +24,13 @@ The **subgraph definition** consists of the following files: - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each of subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). + +## Ciclo de vida de un Subgrafo + +Here is a general overview of a subgraph’s lifecycle: + +![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development @@ -34,8 +40,47 @@ To learn more about each of subgraph component, check out [creating a subgraph]( 4. [Publish a subgraph](/publishing/publishing-a-subgraph/) 5. [Signal on a subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) -## Subgraph Lifecycle +### Build locally -Here is a general overview of a subgraph’s lifecycle: +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. -![Subgraph Lifecycle](/img/subgraph-lifecycle.png) +### Deploy to Subgraph Studio + +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: + +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. + +### Publish to the Network + +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. + +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. + +### Add Curation Signal for Indexing + +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. + +#### What is signal? + +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. 
+
+### Querying & Application Development
+
+Subgraphs on The Graph Network receive 100,000 free queries per month, after which developers can [pay for queries with GRT or a credit card](/billing/).
+
+Learn more about [querying subgraphs](/querying/querying-the-graph/).
+
+### Updating Subgraphs
+
+To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
+
+- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax.
+- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
+
+### Deleting & Transferring Subgraphs
+
+If you no longer need a published subgraph, you can [delete](/managing/delete-a-subgraph/) or [transfer](/managing/transfer-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/network/curating/).
diff --git a/website/pages/es/substreams.mdx b/website/pages/es/substreams.mdx
index 8fcd8349f986..34c7e40e05f0 100644
--- a/website/pages/es/substreams.mdx
+++ b/website/pages/es/substreams.mdx
@@ -4,25 +4,27 @@ title: Substreams
 
 ![Substreams Logo](/img/substreams-logo.png)
 
-Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach.
+Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features:
 
-With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain.
+- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing.
+- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara.
+- **Multi-Sink Support**: Substreams can send data to a subgraph, a Postgres database, Clickhouse, or a Mongo database.
 
 ## How Substreams Works in 4 Steps
 
 1. **You write a Rust program, which defines the transformations that you want to apply to the blockchain data.** For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash).
 
-```rust
-fn get_my_block(blk: Block) -> Result {
-    let header = blk.header.as_ref().unwrap();
+   ```rust
+   fn get_my_block(blk: Block) -> Result {
+       let header = blk.header.as_ref().unwrap();
 
-    Ok(MyBlock {
-        number: blk.number,
-        hash: Hex::encode(&blk.hash),
-        parent_hash: Hex::encode(&header.parent_hash),
-    })
-}
-```
+       Ok(MyBlock {
+           number: blk.number,
+           hash: Hex::encode(&blk.hash),
+           parent_hash: Hex::encode(&header.parent_hash),
+       })
+   }
+   ```
 
2. 
**You wrap up your Rust program into a WASM module just by running a single CLI command.** @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/es/sunrise.mdx b/website/pages/es/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/es/sunrise.mdx +++ b/website/pages/es/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. 
PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). 
- -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. 
Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? - -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. 
The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. 
-### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? - -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. 
If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. 
-- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/es/tap.mdx b/website/pages/es/tap.mdx index 0a41faab9c11..2f0b75160fa9 100644 --- a/website/pages/es/tap.mdx +++ b/website/pages/es/tap.mdx @@ -4,7 +4,7 @@ title: TAP Migration Guide Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. -## Overview +## Descripción [TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: @@ -45,15 +45,15 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed ### Contracts -| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| Contract | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | | ------------------- | -------------------------------------------- | -------------------------------------------- | -| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | -| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | -| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | +| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | +| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | ### Gateway -| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| Component | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | | ---------- | --------------------------------------------- | --------------------------------------------- | | Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | @@ -168,7 +168,7 @@ max_amount_willing_to_lose_grt = 20 0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" ``` -Notes: +Notas: - Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). - Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. 
@@ -190,4 +190,4 @@ You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs ### Launchpad -Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/main/charts/graph-network-indexer) diff --git a/website/pages/es/tokenomics.mdx b/website/pages/es/tokenomics.mdx index 97a07883e8bf..d2b3cdd86026 100644 --- a/website/pages/es/tokenomics.mdx +++ b/website/pages/es/tokenomics.mdx @@ -1,25 +1,25 @@ --- title: Tokenomics de The Graph Network -description: The Graph Network está incentivado por un poderoso tokenomics. Así es como funciona GRT, el token de utilidad de trabajo nativo de The Graph. +description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. --- -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +## Descripción -- Dirección del token GRT en Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. -The Graph es un protocolo descentralizado que permite un acceso sencillo a los datos de la blockchain. +## Specifics -Es similar a un modelo B2B2C, pero está impulsado por una red descentralizada de participantes. Los participantes de la red trabajan juntos para proporcionar datos a los usuarios finales a cambio de recompensas en GRT. GRT es el token de utilidad que coordina a los proveedores y consumidores de datos. GRT actúa como una utilidad para coordinar a los proveedores y consumidores de datos dentro de la red, e incentiva a los participantes del protocolo a organizar los datos de manera efectiva. +The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. -By using The Graph, users can easily access data from the blockchain, paying only for the specific information they need. The Graph is used by many [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem today. +The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/billing/). -The Graph indexa los datos de la blockchain de forma similar a como Google indexa la web. De hecho, es posible que ya estés utilizando The Graph sin darte cuenta. Si has visto la interfaz de una aplicación que obtiene sus datos de un subgrafo, ¡has consultado datos de un subgrafo! 
+- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -The Graph desempeña un papel crucial a la hora de hacer más accesibles los datos de la blockchain y proporcionar un mercado para su intercambio. +- Dirección del token GRT en Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## Los roles de los participantes de la red +## The Roles of Network Participants -Hay cuatro participantes principales en la red: +There are four primary network participants: 1. Delegadores - Delegan GRT a los Indexadores y aseguran la red @@ -29,82 +29,74 @@ Hay cuatro participantes principales en la red: 4. Indexadores: Son la columna vertebral de los datos de la blockchain -Fishermen (o pescadores) y Arbitrators también contribuyen al éxito de la red con otros aportes, apoyando el trabajo de los demás roles principales de los participantes. Para más información sobre las funciones de la red, lee [este artículo](https://thegraph.com/blog/the-graph-grt-token-economics/). +Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). -![Diagrama del tokenomics](/img/updated-tokenomics-image.png) +![Tokenomics diagram](/img/updated-tokenomics-image.png) -## Delegadores (ganan GRT de manera pasiva) +## Delegators (Passively earn GRT) -Los Delegadores delegan GRT en los Indexadores, aumentando su stake en los subgrafos de la red. A cambio, los Delegadores obtienen un porcentaje de todas las tarifas de consulta y recompensas de indexación del Indexador. Cada Indexador fija el porcentaje que se recompensará a los Delegadores de forma independiente, lo que crea una competencia entre los Indexadores para atraer a los Delegadores. La mayoría de los Indexadores ofrecen entre un 9 y un 12% anual. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. -Por ejemplo, si un Delegador delegara 15.000 GRT a un Indexador que ofreciera el 10%, el Delegador recibiría ~1500 GRT anuales en recompensas. +For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. -Existe un impuesto de delegación del 0,5% que se quema cada vez que un Delegador delega GRT en la red. Si un Delegador decide retirar su GRT delegado, debe esperar al periodo de desbloqueo de 28 épocas. Cada época es de 6.646 bloques, lo que significa que 28 épocas terminan siendo aproximadamente 26 días. +There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. 
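+
+To make the arithmetic above concrete, here is a minimal TypeScript sketch that recomputes the example figures (15k GRT at a 10% reward cut, the 0.5% delegation tax, and the 28-epoch unbonding period). It is purely illustrative: `estimateDelegation` is not part of any official Graph SDK, the ~12-second block time is an assumption, and actual rewards vary with network conditions.
+
+```typescript
+// Illustrative arithmetic only, using the figures quoted above; not an official API.
+const DELEGATION_TAX = 0.005;      // 0.5% burned when delegating
+const BLOCKS_PER_EPOCH = 6_646;
+const UNBONDING_EPOCHS = 28;
+const SECONDS_PER_BLOCK = 12;      // assumed average block time
+
+function estimateDelegation(amountGrt: number, indexerRewardCut: number) {
+  const burned = amountGrt * DELEGATION_TAX;           // 15,000 GRT -> 75 GRT burned
+  const delegated = amountGrt - burned;                // GRT actually delegated
+  const yearlyRewards = delegated * indexerRewardCut;  // ~1,500 GRT at a 10% cut
+  const unbondingDays =
+    (UNBONDING_EPOCHS * BLOCKS_PER_EPOCH * SECONDS_PER_BLOCK) / 86_400; // roughly 26 days
+  return { burned, delegated, yearlyRewards, unbondingDays };
+}
+
+console.log(estimateDelegation(15_000, 0.10));
+```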
-Si estás leyendo esto, eres capaz de convertirte en Delegador ahora mismo dirigiéndote a la [página de participantes de la red](https://thegraph.com/explorer/participants/indexers) y delegando GRT en un Indexador de tu elección. +If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. -## Curadores (ganan GRT) +## Curators (Earn GRT) -Los Curadores identifican subgrafos de alta calidad y los "curan" (es decir, señalan GRT en ellos) para ganar cuotas de curación, que garantizan un porcentaje de todas las futuras tarifas de consulta generadas por el subgrafo. Aunque cualquier participante independiente de la red puede ser Curador, los desarrolladores de subgrafos suelen estar entre los primeros Curadores de sus propios subgrafos porque quieren asegurarse de que su subgrafo está indexado. +Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. -As of April 11th, 2024, subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Los Curadores pagan un impuesto de curación del 1% cuando curan un nuevo subgrafo. Este impuesto se quema, lo que reduce la oferta de GRT. +Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. -## Desarolladores +## Developers -Los Desarrolladores construyen y consultan subgrafos para recuperar datos de la blockchain. Dado que los subgrafos son de código abierto, los Desarrolladores pueden consultar subgrafos existentes para cargar datos de la blockchain en sus dapps. Los Desarrolladores pagan por las consultas que realizan en GRT, que se distribuye entre los participantes de la red. +Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. ### Creación de un subgrafo -Los desarrolladores pueden [crear un subgrafo](/developing/creating-a-subgraph/) para indexar datos en la blockchain. Los subgrafos son instrucciones para los Indexadores sobre qué datos deben servirse a los consumidores. +Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Una vez que los desarrolladores han construido y probado su subgrafo, pueden [publicarlo](/publishing/publishing-a-subgraph/) en la red descentralizada de The Graph. +Once developers have built and tested their subgraph, they can [publish their subgraph](/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
### Consulta de un subgrafo existente Once a subgraph is [published](/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. -Los subgrafos se [consultan utilizando GraphQL](/querying/querying-the-graph/), y las tarifas de consulta se pagan con GRT en [Subgraph Studio](https://thegraph.com/studio/). Las tarifas de consulta se distribuyen entre los participantes de la red en función de sus contribuciones al protocolo. - -Se quema el 1% de las tarifas de consulta pagadas a la red. - -## Indexadores (Ganan GRT) - -Los indexadores son la columna vertebral de The Graph. Funcionan con hardware y software independientes que alimentan la red descentralizada de The Graph. Los Indexadores sirven datos a los consumidores siguiendo instrucciones de los subgrafos. - -Los Indexadores pueden obtener recompensas de GRT de dos maneras: +Subgraphs are [queried using GraphQL](/querying/querying-the-graph/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. -1. Query fees: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1% of the query fees paid to the network are burned. -2. Recompensas de indexación: la emisión anual del 3% se distribuye a los Indexadores en función del número de subgrafos que indexan. Estas recompensas incentivan a los Indexadores a indexar subgrafos, ocasionalmente antes de que comiencen las tarifas de consulta, para acumular y enviar Pruebas de Indexación (POIs) que verifiquen que han indexado datos con precisión. +## Indexers (Earn GRT) -A cada subgrafo se le asigna una parte de la emisión total de tokens de la red, en función de la cantidad de la señal de curación del subgrafo. Esa cantidad se recompensa a los Indexadores en función de su participación en el subgrafo. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. -Para poner en marcha un nodo de indexación, los Indexadores deben realizar stake de 100.000 GRT o más en la red. A los Indexadores se les incentiva a realizar stake de GRT en proporción a la cantidad de consultas que atienden. +Indexers can earn GRT rewards in two ways: -Los Indexadores pueden aumentar sus allocations de GRT en subgrafos aceptando la delegación de GRT de los Delegadores, y pueden aceptar hasta 16 veces su stake inicial. Si un Indexador se "sobredelega" (es decir, más de 16 veces su stake inicial), no podrá utilizar el GRT adicional de los Delegadores hasta que aumente su stake en la red. +1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -La cantidad de recompensas que recibe un Indexador puede variar en función del stake inicial, la delegación aceptada, la calidad del servicio y muchos más factores. El siguiente gráfico muestra datos públicos de un Indexador activo en la red descentralizada de The Graph. +2. 
**Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -### El stake y las recompensas del Indexador allnodes-com.eth +Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. -![Stake y recompensas de indexación](/img/indexing-stake-and-income.png) +In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Estos datos son de febrero de 2021 a septiembre de 2022. +Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. -> Ten en cuenta que esta situación mejorará cuando finalice la [migración a Arbitrum](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551), con lo que el coste del gas será mucho menor para los participantes en la red. +The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. -## Suministro de tokens: Quema y emisión +## Token Supply: Burning & Issuance -El suministro inicial de tokens es de 10.000 millones de GRT, con un objetivo de 3% de nuevas emisiones anuales para recompensar a los Indexadores por asignar stake en subgrafos. Esto significa que la oferta total de tokens GRT aumentará un 3% cada año a medida que se emitan nuevos tokens a los Indexadores por su contribución a la red. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph está diseñado con múltiples mecanismos de quema para compensar la emisión de nuevos tokens. Aproximadamente el 1% de la oferta de GRT se quema anualmente a través de diversas actividades en la red, y este número ha ido aumentando a medida que la actividad de la red sigue creciendo. Estas actividades de quema incluyen un impuesto de delegación del 0,5% cada vez que un Delegador delega GRT a un Indexador, un impuesto de curación del 1% cuando los Curadores señalan en un subgrafo, y un 1% de las tarifas de consulta de datos de blockchain. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and 1% of query fees for blockchain data. 
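+
+The issuance and burn rates above can be combined into a rough projection. The sketch below is a back-of-the-envelope model only: it treats the 3% issuance target and the roughly 1% observed burn as fixed annual rates, which they are not in practice.
+
+```typescript
+// Back-of-the-envelope supply projection using the rates quoted above.
+// These are not protocol constants in this form; actual figures vary with network activity.
+const INITIAL_SUPPLY_GRT = 10_000_000_000; // 10 billion GRT
+const ISSUANCE_RATE = 0.03;                // ~3% new issuance per year (target)
+const BURN_RATE = 0.01;                    // ~1% of supply burned per year (approximate)
+
+function projectSupply(years: number, supply: number = INITIAL_SUPPLY_GRT): number {
+  for (let i = 0; i < years; i++) {
+    supply += supply * ISSUANCE_RATE; // indexing rewards minted
+    supply -= supply * BURN_RATE;     // delegation tax, curation tax, query-fee burn, slashing
+  }
+  return supply;
+}
+
+// Net growth of roughly +2% per year under these assumptions.
+console.log(projectSupply(1)); // about 10,197,000,000 GRT
+```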
-![Total GRT quemado](/img/total-burned-grt.jpeg) +![Total burned GRT](/img/total-burned-grt.jpeg) -Además de estas actividades de quema periódicas, el token GRT también cuenta con un mecanismo de slashing para penalizar el comportamiento malicioso o irresponsable de los Indexadores. Si un Indexador recibe slashing, se quema el 50% de sus recompensas de indexación de la época (mientras que la otra mitad va a parar al Fisherman), y su self-stake se reduce en un 2,5%, quemándose la mitad de esta cantidad. Esto ayuda a garantizar que los Indexadores tengan un fuerte incentivo para actuar en el mejor interés de la red y contribuir a su seguridad y estabilidad. +In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. -## Mejorando el protocolo +## Improving the Protocol -The Graph Network está en constante evolución y se introducen mejoras en el diseño económico del protocolo para ofrecer la mejor experiencia a todos los participantes en la red. The Graph Council supervisa los cambios en el protocolo y se anima a los miembros de la comunidad a participar. Participe en las mejoras del protocolo en el [Graph Forum](https://forum.thegraph.com/). +The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). diff --git a/website/pages/fr/about.mdx b/website/pages/fr/about.mdx index ded4167cf102..36904a652bd4 100644 --- a/website/pages/fr/about.mdx +++ b/website/pages/fr/about.mdx @@ -2,46 +2,66 @@ title: À propos de The Graph --- -Cette page expliquera ce qu'est The Graph et comment vous pouvez commencer. - ## Qu’est-ce que The Graph ? -The Graph est un protocole décentralisé pour l'indexation et l'interrogation de données blockchain. The Graph permet d'interroger des données qui sont difficiles à interroger directement. +The Graph est un puissant protocole décentralisé qui permet d'interroger et d'indexer facilement les données de la blockchain. Il simplifie le processus complexe de requête des données blockchain, rendant ainsi le développement des applications décentralisées (dapps) plus rapide et plus simple. + +## Comprendre les fondamentaux + +Des projets dotés de contrats intelligents complexes tels que [Uniswap](https://uniswap.org/) et les initiatives NFT comme [Bored Ape Yacht Club](https://boredapeyachtclub.com/) stockent leurs données sur la blockchain Ethereum, rendant très difficile la lecture directe de données autres que les données de base depuis la blockchain. + +### Défis sans The Graph⁠ + +Dans le cas de l'exemple mentionné ci-dessus, Bored Ape Yacht Club, vous pouvez effectuer de simples opérations de lecture sur [le contrat](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). 
Vous pouvez voir le propriétaire d'un certain Ape, lire l'URI du contenu d'un Ape en fonction de son ID, ou connaître l'offre totale en circulation. + +- Cela est possible car ces opérations de lecture sont programmées directement dans le contrat intelligent lui-même. Cependant, des requêtes et des opérations plus avancées, spécifiques et concrètes, telles que l'agrégation, la recherche, l'établissement de relations ou le filtrage complexe **ne sont pas possibles**. + +- Par exemple, si vous souhaitez identifier les Apes détenus par une adresse spécifique et affiner votre recherche en fonction d'une caractéristique particulière, il serait impossible d'obtenir cette information en interagissant directement avec le contrat. + +- Pour obtenir plus de données, vous devriez traiter chaque événement de [`transfert`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) jamais émis, lire les métadonnées d'IPFS en utilisant l'ID du Token et le hash IPFS, puis les agréger. + +### Pourquoi est-ce un problème ? + +Il faudrait des **heures, voire des jours,** pour qu'une application décentralisée (dapp) fonctionnant dans un navigateur obtienne une réponse à ces questions simples. + +Une alternative serait de configurer votre propre serveur, de traiter les transactions, de les stocker dans une base de données et de créer une API pour interroger les données. Cependant, cette solution est [coûteuse en ressources](/network/benefits/), nécessite une maintenance constante, présente un point de défaillance unique et compromet d'importantes propriétés de sécurité essentielles à la décentralisation. + +Les spécificités de la blockchain, comme la finalité des transactions, les réorganisations de chaîne et les blocs oncles (blocs rejetés lorsque deux blocs sont créés simultanément, ce qui entraîne l'omission d'un bloc de la blockchain), ajoutent de la complexité au processus, rendant longue et conceptuellement difficile la récupération de résultats précis à partir des données de la blockchain. -Les projets avec des contrats intelligents complexes comme [Uniswap](https://uniswap.org/) et des projets NFT comme [Bored Ape](https://boredapeyachtclub.com/) Yacht Club stockent des données sur la blockchain Ethereum. La façon dont ces données sont stockées rend leur lecture difficile au-delà de quelques informations simples. +## The Graph apporte une solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph répond à ce défi grâce à un protocole décentralisé qui indexe les données de la blockchain et permet de les interroger de manière efficace et performante. Ces API (appelées "subgraphs" indexés) peuvent ensuite être interrogées via une API standard GraphQL. 
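+
+À titre d'illustration, voici une esquisse minimale en TypeScript d'une requête GraphQL envoyée à un subgraph. L'URL du gateway, la clé API, l'identifiant du subgraph et le champ `tokens` sont des exemples hypothétiques : ils dépendent du subgraph réellement interrogé et ne proviennent pas de cette page.
+
+```typescript
+// Esquisse illustrative : l'endpoint, la clé API et le schéma (champ `tokens`) sont hypothétiques.
+const ENDPOINT =
+  "https://gateway.thegraph.com/api/<VOTRE_CLE_API>/subgraphs/id/<ID_DU_SUBGRAPH>";
+
+const query = /* GraphQL */ `
+  {
+    tokens(first: 5) {
+      id
+      owner
+    }
+  }
+`;
+
+async function main(): Promise<void> {
+  const response = await fetch(ENDPOINT, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({ query }),
+  });
+  const { data } = await response.json();
+  console.log(data); // entités indexées, renvoyées par un Indexeur du réseau
+}
+
+main();
+```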
-To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Aujourd'hui, il existe un protocole décentralisé soutenu par l'implémentation open source de [Graph Node](https://github.com/graphprotocol/graph-node) qui permet ce processus. -Vous pouvez également créer votre propre serveur, y traiter les transactions, les enregistrer dans une base de données et créer un point de terminaison d'API par-dessus tout cela afin d'interroger les données. Cependant, cette option est [consommatrice de ressources](/network/benefits/), nécessite une maintenance, présente un point de défaillance unique et brise d'importantes propriétés de sécurité requises pour la décentralisation. +### Comment fonctionne The Graph⁠ -**L’indexation des données blockchain est vraiment très difficile.** +Indexer les données de la blockchain est une tâche complexe, mais The Graph la simplifie. Il apprend à indexer les données d'Ethereum en utilisant des subgraphs. Les subgraphs sont des API personnalisées construites sur les données de la blockchain qui extraient, traitent et stockent ces données pour qu'elles puissent être interrogées facilement via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Spécificités⁠ -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph utilise des descriptions de subgraph, qui sont connues sous le nom de "manifeste de subgraph" à l'intérieur du subgraph. -## Fonctionnement du Graph +- Ce manifeste définit les contrats intelligents intéressants pour un subgraph, les événements spécifiques à surveiller au sein de ces contrats, et la manière de mapper les données de ces événements aux données que The Graph stockera dans sa base de données. -The Graph apprend quoi et comment indexer les données Ethereum en fonction des descriptions de subgraphs, connues sous le nom de manifeste de subgraph. La description du subgraph définit les contrats intelligents d'intérêt pour un subgraph, les événements de ces contrats auxquels il faut prêter attention et comment mapper les données d'événement aux données que The Graph stockera dans sa base de données. +- Lors de la création d'un subgraph, vous devez rédiger ce manifeste. -Une fois que vous avez écrit un `manifeste de subgraph`, vous utilisez le Graph CLI pour stocker la définition dans IPFS et vous indiquez par la même occasion à l'indexeur de commencer à indexer les données pour ce subgraph. +- Une fois le `manifeste du subgraph` écrit, vous pouvez utiliser l'outil en ligne de commande Graph CLI pour stocker la définition en IPFS et demander à un Indexeur de commencer à indexer les données pour ce subgraph. 
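+
+Pour donner une idée concrète de ce « mapping », voici une esquisse minimale d'un gestionnaire d'événement tel qu'on en écrit dans un subgraph, en AssemblyScript (syntaxe TypeScript). Les types `Transfer` et `Token`, ainsi que les chemins d'import, sont supposés générés par `graph codegen` à partir d'une ABI et d'un schéma hypothétiques ; il ne s'agit pas du code d'un subgraph existant.
+
+```typescript
+// Esquisse illustrative en AssemblyScript ; ABI, schéma et chemins d'import hypothétiques.
+import { Transfer } from "../generated/MonContrat/MonContrat";
+import { Token } from "../generated/schema";
+
+export function handleTransfer(event: Transfer): void {
+  // Charge l'entité si elle existe déjà, sinon la crée.
+  let token = Token.load(event.params.tokenId.toString());
+  if (token == null) {
+    token = new Token(event.params.tokenId.toString());
+  }
+  // Ces données seront stockées par Graph Node et interrogeables via GraphQL.
+  token.owner = event.params.to;
+  token.save();
+}
+```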
-Ce diagramme donne plus de détails sur le flux de données une fois qu'un manifeste de subgraph a été déployé, traitant des transactions Ethereum : +Le schéma ci-dessous illustre plus en détail le flux de données après le déploiement d'un manifeste de subgraph avec des transactions Ethereum. ![Un graphique expliquant comment The Graph utilise Graph Node pour répondre aux requêtes des consommateurs de données](/img/graph-dataflow.png) La description des étapes du flux : -1. Une dapp ajoute des données à Ethereum via une transaction sur un contrat intelligent. -2. Le contrat intelligent va alors produire un ou plusieurs événements lors du traitement de la transaction. -3. Parallèlement, Le nœud de The Graph scanne continuellement Ethereum à la recherche de nouveaux blocs et de nouvelles données intéressantes pour votre subgraph. -4. The Graph Node trouve alors les événements Ethereum d'intérêt pour votre subgraph dans ces blocs et vient exécuter les corrélations correspondantes que vous avez fournies. Le gestionnaire de corrélation se définit comme un module WASM qui crée ou met à jour les entités de données que le nœud de The Graph stocke en réponse aux événements Ethereum. -5. Le dapp interroge le Graph Node pour des données indexées à partir de la blockchain, à l'aide du [point de terminaison GraphQL](https://graphql.org/learn/) du noeud. À son tour, le Graph Node traduit les requêtes GraphQL en requêtes pour sa base de données sous-jacente afin de récupérer ces données, en exploitant les capacités d'indexation du magasin. Le dapp affiche ces données dans une interface utilisateur riche pour les utilisateurs finaux, qui s'en servent pour émettre de nouvelles transactions sur Ethereum. Le cycle se répète. +1. Une dapp ajoute des données à Ethereum via une transaction sur un contrat intelligent. +2. Le contrat intelligent va alors produire un ou plusieurs événements lors du traitement de la transaction. +3. Parallèlement, Le nœud de The Graph scanne continuellement Ethereum à la recherche de nouveaux blocs et de nouvelles données intéressantes pour votre subgraph. +4. The Graph Node trouve alors les événements Ethereum d'intérêt pour votre subgraph dans ces blocs et vient exécuter les corrélations correspondantes que vous avez fournies. Le gestionnaire de corrélation se définit comme un module WASM qui crée ou met à jour les entités de données que le nœud de The Graph stocke en réponse aux événements Ethereum. +5. Le dapp interroge le Graph Node pour des données indexées à partir de la blockchain, à l'aide du [point de terminaison GraphQL](https://graphql.org/learn/) du noeud. À son tour, le Graph Node traduit les requêtes GraphQL en requêtes pour sa base de données sous-jacente afin de récupérer ces données, en exploitant les capacités d'indexation du magasin. Le dapp affiche ces données dans une interface utilisateur riche pour les utilisateurs finaux, qui s'en servent pour émettre de nouvelles transactions sur Ethereum. Le cycle se répète. ## Les Étapes suivantes -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +Les sections suivantes proposent une exploration plus approfondie des subgraphs, de leur déploiement et de la manière d'interroger les données. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. 
The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Avant de créer votre propre subgraph, il est conseillé de visiter [Graph Explorer](https://thegraph.com/explorer) et d'examiner certains des subgraphs déjà déployés. Chaque page de subgraph comprend un playground (un espace de test) GraphQL, vous permettant d'interroger ses données. diff --git a/website/pages/fr/arbitrum/arbitrum-faq.mdx b/website/pages/fr/arbitrum/arbitrum-faq.mdx index 85632d92168b..23faf8dab29b 100644 --- a/website/pages/fr/arbitrum/arbitrum-faq.mdx +++ b/website/pages/fr/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: FAQ d'Arbitrum Cliquez [ici] (#billing-on-arbitrum-faqs) si vous souhaitez passer à la FAQ sur la facturation Arbitrum. -## Pourquoi The Graph met-il en place une solution L2 ? +## Pourquoi The Graph a-t-il mis en place une solution L2 ? -En faisant passer The Graph à l'échelle L2, les participants au réseau peuvent espérer : +Grâce à la mise à l'échelle de The Graph sur la L2, les participants du réseau peuvent désormais bénéficier de ce qui suit : - Jusqu'à 26 fois plus d'économies sur les frais de gaz @@ -14,26 +14,26 @@ En faisant passer The Graph à l'échelle L2, les participants au réseau peuven - La sécurité héritée d'Ethereum -La mise à l'échelle des contrats intelligents du protocole sur L2 permet aux participants au réseau d'interagir plus fréquemment pour un coût réduit en termes de frais de gaz. Par exemple, les indexeurs peuvent ouvrir et fermer des allocations pour indexer un plus grand nombre de subgraphs avec une plus grande fréquence, les développeurs peuvent déployer et mettre à jour des subgraphs plus facilement, les délégués peuvent déléguer des GRT avec une fréquence accrue, et les curateurs peuvent ajouter ou supprimer des signaux à un plus grand nombre de subgraphs - des actions auparavant considérées comme trop coûteuses pour être effectuées fréquemment en raison des frais de gaz. +La mise à l'échelle des contrats intelligents du protocole sur la L2 permet aux participants du réseau d'interagir plus fréquemment pour un coût réduit en termes de frais de gaz. Par exemple, les Indexeurs peuvent ouvrir et fermer des allocations plus fréquemment pour indexer un plus grand nombre de subgraphs. Les développeurs peuvent déployer et mettre à jour des subgraphs plus facilement, et les Déléguateurs peuvent déléguer des GRT plus fréquemment. Les Curateurs peuvent ajouter ou supprimer des signaux dans un plus grand nombre de subgraphs - des actions auparavant considérées comme trop coûteuses pour être effectuées fréquemment en raison des frais de gaz. La communauté Graph a décidé d'avancer avec Arbitrum l'année dernière après le résultat de la discussion [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). ## Que dois-je faire pour utiliser The Graph en L2 ? -The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. +Le système de facturation de The Graph accepte le GRT sur Arbitrum, et les utilisateurs auront besoin d'ETH sur Arbitrum pour payer leurs frais de gaz. Bien que le protocole The Graph ait commencé sur Ethereum Mainnet, toute l'activité, y compris les contrats de facturation, est maintenant sur Arbitrum One. -Consequently, to pay for queries, you need GRT on Arbitrum. 
Here are a few different ways to achieve this: +Par conséquent, pour payer les requêtes, vous avez besoin de GRT sur Arbitrum. Voici quelques façons d'y parvenir : -- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges: +- Si vous avez déjà des GRT sur Ethereum, vous pouvez les bridge vers Arbitrum. Vous pouvez le faire via l'option de bridge de GRT fournie dans Subgraph Studio ou en utilisant l'un des bridges suivants : - - [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) + - [Le Bridge Arbitrum](https://bridge.arbitrum.io/?l2ChainId=42161) - [TransferTo](https://transferto.xyz/swap) -- If you have other assets on Arbitrum, you can swap them for GRT through a swapping protocol like Uniswap. +- Si vous avez d'autres actifs sur Arbitrum, vous pouvez les échanger contre du GRT via un protocole de swap comme Uniswap. -- Alternatively, you can acquire GRT directly on Arbitrum through a decentralized exchange. +- Alternativement, vous pouvez obtenir des GRT directement sur Arbitrum via un échangeur décentralisé. -Once you have GRT on Arbitrum, you can add it to your billing balance. +Une fois que vous avez des GRT sur Arbitrum, vous pouvez l'ajouter à votre solde de facturation. Pour tirer parti de l'utilisation de The Graph sur L2, utilisez ce sélecteur déroulant pour passer d'une chaîne à l'autre. @@ -41,27 +41,21 @@ Pour tirer parti de l'utilisation de The Graph sur L2, utilisez ce sélecteur d ## En tant que développeur de subgraphs, consommateur de données, indexeur, curateur ou délégateur, que dois-je faire maintenant ? -Aucune action immédiate n'est requise, cependant, les participants au réseau sont encouragés à commencer à migrer vers Arbitrum pour profiter des avantages de L2. +Les participants du réseau doivent passer à Arbitrum pour continuer à participer à The Graph Network. Veuillez consulter le [Guide de l'outil de transfert L2](/arbitrum/l2-transfer-tools-guide/) pour une assistance supplémentaire. -Les équipes de développeurs de base travaillent à la création d'outils de transfert L2 qui faciliteront considérablement le transfert de la délégation, de la curation et des subgraphes vers Arbitrum. Les participants au réseau peuvent s'attendre à ce que les outils de transfert L2 soient disponibles d'ici l'été 2023. +Toutes les récompenses d'indexation sont désormais entièrement sur Arbitrum. -À partir du 10 avril 2023, 5 % de toutes les récompenses d'indexation sont frappées sur Arbitrum. Au fur et à mesure que la participation au réseau augmentera et que le Conseil l'approuvera, les récompenses d'indexation passeront progressivement de l'Ethereum à l'Arbitrum, pour finalement passer entièrement à l'Arbitrum. +## Y avait-il des risques associés à la mise à l'échelle du réseau vers la L2 ? -## Que dois-je faire si je souhaite participer au réseau L2 ? - -Veuillez aider à [tester le réseau](https://testnet.thegraph.com/explorer) sur L2 et signaler vos commentaires sur votre expérience dans [Discord](https://discord.gg/graphprotocol). - -## Existe-t-il des risques associés à la mise à l’échelle du réseau vers L2 ? - -All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). 
+Tous les contrats intelligents ont été soigneusement [vérifiés](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Tout a été testé minutieusement et un plan d'urgence est en place pour assurer une transition sûre et fluide. Les détails peuvent être trouvés [ici](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Les subgraphs existants sur Ethereum continueront-ils à fonctionner ? +## Les subgraphs existants sur Ethereum fonctionnent-ils ? -Oui, les contrats The Graph Network fonctionneront en parallèle sur Ethereum et Arbitrum jusqu'à leur passage complet à Arbitrum à une date ultérieure. +Tous les subgraphs sont désormais sur Arbitrum. Veuillez consulter le [Guide de l'outil de transfert L2](/arbitrum/l2-transfer-tools-guide/) pour vous assurer que vos subgraphs fonctionnent sans problème. -## GRT disposera-t-il d'un nouveau contrat intelligent déployé sur Arbitrum ? +## GRT a-t-il un nouveau contrat intelligent déployé sur Arbitrum ? Oui, GRT dispose d'un [contrat intelligent sur Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) supplémentaire. Cependant, le réseau principal Ethereum [contrat GRT](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) restera opérationnel. @@ -83,4 +77,4 @@ Le pont a été [fortement audité](https://code4rena.com/contests/2022-10-the-g L'ajout de GRT à votre solde de facturation Arbitrum peut être effectué en un seul clic dans [Subgraph Studio](https://thegraph.com/studio/). Vous pourrez facilement relier votre GRT à Arbitrum et remplir vos clés API en une seule transaction. -Visit the [Billing page](/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT. +Visitez la [page de Facturation](/billing/) pour obtenir des instructions plus détaillées sur l'ajout, le retrait ou l'acquisition de GRT. diff --git a/website/pages/fr/arbitrum/l2-transfer-tools-guide.mdx b/website/pages/fr/arbitrum/l2-transfer-tools-guide.mdx index 81df458a44b1..a0a6f17b4265 100644 --- a/website/pages/fr/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/pages/fr/arbitrum/l2-transfer-tools-guide.mdx @@ -68,7 +68,7 @@ Après avoir ouvert l'outil de transfert, vous pourrez saisir l'adresse du porte Si vous exécutez cette étape, **assurez-vous de continuer jusqu'à terminer l'étape 3 en moins de 7 jours, sinon le subgraph et votre signal GRT seront perdus.** Cela est dû au fonctionnement de la messagerie L1-L2 sur Arbitrum : les messages qui sont envoyés via le pont sont des « tickets réessayables » qui doivent être exécutés dans les 7 jours, et l'exécution initiale peut nécessiter une nouvelle tentative s'il y a des pics dans le prix du gaz sur Arbitrum. -![Start the transfer to L2](/img/startTransferL2.png) +![Démarrer le transfert vers la L2](/img/startTransferL2.png) ## Étape 2 : Attendre que le subgraphe atteigne L2 diff --git a/website/pages/fr/billing.mdx b/website/pages/fr/billing.mdx index cb3b4c99bb2b..610c9ff09b00 100644 --- a/website/pages/fr/billing.mdx +++ b/website/pages/fr/billing.mdx @@ -2,173 +2,173 @@ title: Facturation --- -## Subgraph Billing Plans +## Les Plans de Facturation des Subgraphs -There are two plans to use when querying subgraphs on The Graph Network. +Il y a deux plans à utiliser lorsqu'on interroge les subgraphs sur le réseau de The Graph. 
-- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. +- **Le Plan Gratuit**: Le Plan Gratuit comprend 100 000 requêtes mensuelles gratuites avec un accès complet à l'environnement de test de Subgraph Studio. Ce plan est conçu pour les amateurs, les participants aux hackathons et ceux qui ont des projets annexes pour essayer The Graph avant de faire évoluer leur dApp. -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **Plan de croissance**: Le plan de croissance comprend tout ce qui est inclus dans le plan gratuit avec toutes les requêtes après 100 000 requêtes mensuelles nécessitant des paiements en GRT ou par carte de crédit. Le plan de croissance est suffisamment flexible pour couvrir les besoins des équipes qui ont établi des dapps dans une variété de cas d'utilisation. -## Query Payments with credit card +## Paiements de Requêtes avec Carte de Crédit -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) - 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). - 2. Cliquez sur le bouton « Connecter le portefeuille » dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection du portefeuille. Sélectionnez votre portefeuille et cliquez sur "Connecter". - 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. - 4. To choose a credit card payment, choose “Credit card” as the payment method and fill out your credit card information. Those who have used Stripe before can use the Link feature to autofill their details. -- Invoices will be processed at the end of each month and require an active credit card on file for all queries beyond the free plan quota. +- Pour mettre en place la facturation par carte de crédit/débit, les utilisateurs doivent accéder à Subgraph Studio (https://thegraph.com/studio/) + 1. Accédez à la [page de facturation de Subgraph Studio](https://thegraph.com/studio/billing/). + 2. Cliquez sur le bouton "Connecter le portefeuille" dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection des portefeuilles. Sélectionnez votre portefeuille et cliquez sur "Connecter". + 3. Choisissez « Mettre à niveau votre abonnement » si vous effectuez une mise à niveau depuis le plan gratuit, ou choisissez « Gérer l'abonnement » si vous avez déjà ajouté des GRT à votre solde de facturation par le passé. Ensuite, vous pouvez estimer le nombre de requêtes pour obtenir une estimation du prix, mais ce n'est pas une étape obligatoire. + 4. Pour choisir un paiement par carte de crédit, choisissez “Credit card” comme mode de paiement et remplissez les informations de votre carte de crédit. Ceux qui ont déjà utilisé Stripe peuvent utiliser la fonctionnalité Link pour remplir automatiquement leurs informations. 
+- Les factures seront traitées à la fin de chaque mois et nécessitent une carte de crédit valide enregistrée sur votre compte pour toute requête au-delà du quota du plan gratuit. -## Query Payments with GRT +## Paiements de Requêtes avec du GRT -Subgraph users can use The Graph Token (or GRT) to pay for queries on The Graph Network. With GRT, invoices will be processed at the end of each month and require a sufficient balance of GRT to make queries beyond the Free Plan quota of 100,000 monthly queries. You'll be required to pay fees generated from your API keys. Using the billing contract, you'll be able to: +Les utilisateurs de subgraphs peuvent utiliser le jeton natif de The Graph (GRT) pour payer les requêtes sur le réseau The Graph. Avec le GRT, les factures seront traitées à la fin de chaque mois et nécessiteront un solde suffisant de GRT pour effectuer des requêtes au-delà du quota du plan gratuit de 100 000 requêtes mensuelles. Vous devrez payer les frais générés par vos clés API. En utilisant le contrat de facturation, vous pourrez : - Ajoutez et retirez du GRT du solde de votre compte. - Gardez une trace de vos soldes en fonction du montant de GRT que vous avez ajouté au solde de votre compte, du montant que vous avez supprimé et de vos factures. - Payez automatiquement les factures en fonction des frais de requête générés, à condition qu'il y ait suffisamment de GRT dans le solde de votre compte. -### GRT on Arbitrum or Ethereum +### GRT sur Arbitrum ou Ethereum -The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. +Le système de facturation de The Graph accepte le GRT sur Arbitrum, et les utilisateurs devront disposer d'ETH sur Arbitrum pour payer le gaz. Bien que le protocole The Graph ait commencé sur le réseau principal d'Ethereum, toutes les activités, y compris les contrats de facturation, sont désormais réalisées sur Arbitrum One. -To pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this: +L'utilisation du GRT sur Arbitrum est nécessaire pour le paiement des requêtes. Voici quelques options pour en acquérir : -- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges: +- Si vous avez déjà des GRT sur Ethereum, vous pouvez les transférer vers Arbitrum. Vous pouvez le faire via l'option de transfert de GRT fournie dans Subgraph Studio ou en utilisant l'un des ponts suivants : -- [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) +- [Le pont Arbitrum](https://bridge.arbitrum.io/?l2ChainId=42161) - [TransferTo](https://transferto.xyz/swap) -- If you already have assets on Arbitrum, you can swap them for GRT via a swapping protocol like Uniswap. +- Si vous possédez déjà des actifs sur Arbitrum, vous pouvez les échanger contre du GRT via un protocole d'échange comme Uniswap. -- Alternatively, you acquire GRT directly on Arbitrum through a decentralized exchange. +- Alternativement, vous pouvez acquérir du GRT directement sur Arbitrum via un échange décentralisé. -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt). 
+> Cette section est rédigée en supposant que vous avez déjà des GRT dans votre portefeuille et que vous êtes sur Arbitrum. Si vous n'avez pas de GRT, vous pouvez apprendre comment obtenir des GRT [ici](#getting-grt). -Once you bridge GRT, you can add it to your billing balance. +Une fois que vous avez transféré du GRT, vous pouvez l'ajouter à votre solde de facturation. -### Adding GRT using a wallet +### Ajout de GRT à l'aide d'un portefeuille -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). -2. Cliquez sur le bouton « Connecter le portefeuille » dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection du portefeuille. Sélectionnez votre portefeuille et cliquez sur "Connecter". -3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet". -4. Use the slider to estimate the number of queries you expect to make on a monthly basis. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. -5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. -6. Select the number of months you would like to prepay. - - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. -7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. -8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. -9. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. +1. Accédez à la [page de facturation de Subgraph Studio](https://thegraph.com/studio/billing/). +2. Cliquez sur le bouton "Connecter le portefeuille" dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection des portefeuilles. Sélectionnez votre portefeuille et cliquez sur "Connecter". +3. Cliquez sur le bouton « Manage » situé dans le coin supérieur droit. Les nouveaux utilisateurs verront l'option « Upgrade to Growth plan » (Passer au plan de croissance), tandis que les utilisateurs existants devront sélectionner « Deposit from wallet » (Déposer depuis le portefeuille). +4. Utilisez le curseur pour estimer le nombre de requêtes que vous prévoyez d’effectuer sur une base mensuelle. + - Pour des suggestions sur le nombre de requêtes que vous pouvez utiliser, consultez notre page **Foire aux questions**. +5. Choisissez "Cryptocurrency". Le GRT est actuellement la seule cryptomonnaie acceptée sur le réseau The Graph. +6. Sélectionnez le nombre de mois pour lesquels vous souhaitez effectuer un paiement anticipé. + - Le paiement anticipé ne vous engage pas sur une utilisation future. Vous ne serez facturé que pour ce que vous utiliserez et vous pourrez retirer votre solde à tout moment. +7. Choisissez le réseau à partir duquel vous déposez vos GRT. Les GRT sur Arbitrum ou Ethereum sont tous deux acceptables. +8. Cliquez sur "Autoriser l'accès au GRT" puis spécifiez le montant de GRT qui peut être prélevé de votre portefeuille. + - Si vous payez à l'avance pour plusieurs mois, vous devez autoriser l'accès au montant correspondant. Cette interaction ne coûtera aucun gaz. +9. 
Enfin, cliquez sur « Add GRT to Billing Balance ». Cette transaction nécessitera des ETH sur Arbitrum pour couvrir les coûts du gaz. -- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance. +- Notez que les GRT déposés depuis Arbitrum seront traités en quelques instants tandis que les GRT déposés depuis Ethereum prendront environ 15-20 minutes pour être traités. Une fois la transaction confirmée, vous verrez les GRT ajoutés à votre solde de compte. -### Withdrawing GRT using a wallet +### Retirer des GRT en utilisant un portefeuille -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. -4. Enter the amount of GRT you would like to withdraw. -5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. -6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. +1. Accédez à la [page de facturation de Subgraph Studio](https://thegraph.com/studio/billing/). +2. Cliquez sur le bouton "Connect Wallet" dans le coin supérieur droit de la page. Sélectionnez votre portefeuille et cliquez sur "Connect". +3. Cliquez sur le bouton « Gérer » dans le coin supérieur droit de la page. Sélectionnez « Retirer des GRT ». Un panneau latéral apparaîtra. +4. Entrez le montant de GRT que vous voudriez retirer. +5. Cliquez 'Withdraw GRT' pour retirer les GRT de votre solde de compte. Signez la transaction associée dans votre portefeuille. Cela coûtera du gaz. Les GRT seront envoyés à votre portefeuille Arbitrum. +6. Une fois que la transaction est confirmée, vous verrez le GRT qu'on a retiré de votre solde du compte dans votre portefeuille Arbitrum. ### Ajout de GRT à l'aide d'un portefeuille multisig -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. -3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet". -4. Use the slider to estimate the number of queries you expect to make on a monthly basis. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. -5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. -6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. -7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. 
- - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. -8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. +1. Allez à la page [Facturation de Studio Subgraph](https://thegraph.com/studio/billing/). +2. Cliquez sur le bouton "Connect Wallet" dans le coin supérieur droit de la page. Sélectionnez votre portefeuille et cliquez sur "Connect". Si vous utilisez [Gnosis-Safe](https://gnosis-safe.io/), vous pourrez connecter votre multisig ainsi que votre portefeuille de signature. Ensuite, signez le message associé. Cela ne coûtera aucun gaz. +3. Cliquez sur le bouton « Manage » situé dans le coin supérieur droit. Les nouveaux utilisateurs verront l'option « Upgrade to Growth plan » (Passer au plan de croissance), tandis que les utilisateurs existants devront sélectionner « Deposit from wallet » (Déposer depuis le portefeuille). +4. Utilisez le curseur pour estimer le nombre de requêtes que vous prévoyez d’effectuer sur une base mensuelle. + - Pour des suggestions sur le nombre de requêtes que vous pouvez utiliser, consultez notre page **Foire aux questions**. +5. Choisissez "Cryptocurrency". Le GRT est actuellement la seule cryptomonnaie acceptée sur le réseau The Graph. +6. Sélectionnez le nombre de mois pour lesquels vous souhaitez effectuer un paiement anticipé. + - Le paiement anticipé ne vous engage pas sur une utilisation future. Vous ne serez facturé que pour ce que vous utiliserez et vous pourrez retirer votre solde à tout moment. +7. Choisissez le réseau à partir duquel vous déposez vos GRT. Les GRT sur Arbitrum ou Ethereum, les deux sont acceptables. 8. Cliquez sur "Allow GRT Access", et puis spécifiez le montant de GRT qui peut être prélevé de votre portefeuille. + - Si vous payez à l'avance pour plusieurs mois, vous devez autoriser l'accès au montant correspondant. Cette interaction ne coûtera aucun gaz. +8. Enfin, cliquez sur « Add GRT to Billing Balance ». Cette transaction nécessitera des ETH sur Arbitrum pour couvrir les coûts du gaz. -- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance. +- Notez que les GRT déposés depuis Arbitrum seront traités en quelques instants tandis que les GRT déposés depuis Ethereum prendront environ 15-20 minutes pour être traités. Une fois la transaction confirmée, vous verrez les GRT ajoutés à votre solde de compte. -## Getting GRT +## Obtenir du GRT -This section will show you how to get GRT to pay for query fees. +Cette section vous montrera comment obtenir du GRT pour payer les frais de requête. ### Coinbase -This will be a step by step guide for purchasing GRT on Coinbase. +Voici un guide étape par étape pour acheter de GRT sur Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. -2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. -3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy/Sell" button on the top right of the page. -4. Select the currency you want to purchase. Select GRT. -5. Select the payment method. Select your preferred payment method. -6. 
Select the amount of GRT you want to purchase. -7. Review your purchase. Review your purchase and click "Buy GRT". -8. Confirm your purchase. Confirm your purchase and you will have successfully purchased GRT. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - To transfer the GRT to your wallet, click on the "Accounts" button on the top right of the page. - - Click on the "Send" button next to the GRT account. - - Enter the amount of GRT you want to send and the wallet address you want to send it to. - - Click "Continue" and confirm your transaction. -Please note that for larger purchase amounts, Coinbase may require you to wait 7-10 days before transferring the full amount to a wallet. +1. Accédez à [Coinbase](https://www.coinbase.com/) et créez un compte. +2. Dès que vous aurez créé un compte, vous devrez vérifier votre identité par le biais d'un processus connu sous le nom de KYC (Know Your Customer ou Connaître Votre Client). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. +3. Une fois votre identité vérifiée, vous pouvez acheter des GRT. Pour ce faire, cliquez sur le bouton « Acheter/Vendre » en haut à droite de la page. +4. Sélectionnez la devise que vous souhaitez acheter. Sélectionnez GRT. +5. Sélectionnez le mode de paiement. Sélectionnez votre mode de paiement préféré. +6. Sélectionnez la quantité de GRT que vous souhaitez acheter. +7. Vérifiez votre achat. Vérifiez votre achat et cliquez sur "Buy GRT". +8. Confirmez votre achat. Confirmez votre achat et vous aurez acheté des GRT avec succès. +9. Vous pouvez transférer les GRT de votre compte vers votre portefeuille tel que [MetaMask](https://metamask.io/). + - Pour transférer les GRT dans votre portefeuille, cliquez sur le bouton "Accounts" en haut à droite de la page. + - Cliquez sur le bouton "Send" à côté du compte GRT. + - Entrez le montant de GRT que vous souhaitez envoyer et l'adresse du portefeuille vers laquelle vous souhaitez l'envoyer. + - Cliquez sur "Continue" et confirmez votre transaction. -Veuillez noter que pour des montants d'achat plus importants, Coinbase peut vous demander d'attendre 7 à 10 jours avant de transférer le montant total vers un portefeuille. -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Vous pouvez en savoir plus sur l'obtention de GRT sur Coinbase [ici](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance -This will be a step by step guide for purchasing GRT on Binance. +Ceci est un guide étape par étape pour l'achat des GRT sur Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. -2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. -3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy Now" button on the homepage banner. -4. You will be taken to a page where you can select the currency you want to purchase. Select GRT. -5. Select your preferred payment method. You'll be able to pay with different fiat currencies such as Euros, US Dollars, and more. -6. Select the amount of GRT you want to purchase. -7. 
Review your purchase and click "Buy GRT". -8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. - - Click on the "wallet" button, click withdraw, and select GRT. - - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - - Click "Continue" and confirm your transaction. +1. Allez sur [Binance](https://www.binance.com/en) et créez un compte. +2. Dès que vous aurez créé un compte, vous devrez vérifier votre identité par le biais d'un processus connu sous le nom de KYC (Know Your Customer ou Connaître Votre Client). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. +3. Une fois votre identité vérifiée, vous pouvez acheter des GRT. Pour ce faire, cliquez sur le bouton « Acheter maintenant » sur la bannière de la page d'accueil. +4. Vous accéderez à une page où vous pourrez sélectionner la devise que vous souhaitez acheter. Sélectionnez GRT. +5. Choisissez votre mode de paiement préféré. Vous pourrez payer avec différentes devises fiduciaires telles que l'euro, le dollar américain, etc. +6. Sélectionnez la quantité de GRT que vous souhaitez acheter. +7. Confirmez votre achat et cliquez sur « Acheter des GRT ». +8. Confirmez votre achat et vous pourrez voir vos GRT dans votre portefeuille Binance Spot. +9. Vous pouvez retirer les GRT de votre compte vers votre portefeuille tel que [MetaMask](https://metamask.io/). + - [Pour retirer](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) des GRT dans votre portefeuille, ajoutez l'adresse de votre portefeuille à la liste blanche des retraits. + - Cliquez sur le bouton « portefeuille », cliquez sur retrait et sélectionnez GRT. + - Saisissez le montant de GRT que vous souhaitez envoyer et l'adresse du portefeuille sur liste blanche à laquelle vous souhaitez l'envoyer. + - Cliquer sur « Continuer » et confirmez votre transaction. -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Vous pouvez en savoir plus sur l'obtention de GRT sur Binance [ici](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ### Uniswap -This is how you can purchase GRT on Uniswap. +Voici comment vous pouvez acheter des GRT sur Uniswap. -1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. -2. Select the token you want to swap from. Select ETH. -3. Select the token you want to swap to. Select GRT. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -4. Enter the amount of ETH you want to swap. -5. Click "Swap". -6. Confirm the transaction in your wallet and you wait for the transaction to process. +1. Accédez à [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) et connectez votre portefeuille. +2. Sélectionnez le jeton dont vous souhaitez échanger. 
Sélectionnez ETH. +3. Sélectionnez le jeton vers lequel vous souhaitez échanger. Sélectionnez GRT. + - Assurez-vous que vous échangez contre le bon jeton. L'adresse du contrat intelligent GRT sur Arbitrum One est la suivante : [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +4. Entrez le montant d'ETH que vous souhaitez échanger. +5. Cliquez sur « Échanger ». +6. Confirmez la transaction dans votre portefeuille et attendez qu'elle soit traitée. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Vous pouvez en savoir plus sur l'obtention de GRT sur Uniswap [ici](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). -## Getting Ether +## Obtenir de l'Ether⁠ -This section will show you how to get Ether (ETH) to pay for transaction fees or gas costs. ETH is necessary to execute operations on the Ethereum network such as transferring tokens or interacting with contracts. +Cette section vous montrera comment obtenir de l'Ether (ETH) pour payer les frais de transaction ou les coûts de gaz. L'ETH est nécessaire pour exécuter des opérations sur le réseau Ethereum telles que le transfert de jetons ou l'interaction avec des contrats. ### Coinbase -This will be a step by step guide for purchasing ETH on Coinbase. - -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. -2. Once you have created an account, verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. -3. Once you have verified your identity, purchase ETH by clicking on the "Buy/Sell" button on the top right of the page. -4. Select the currency you want to purchase. Select ETH. -5. Select your preferred payment method. -6. Enter the amount of ETH you want to purchase. -7. Review your purchase and click "Buy ETH". -8. Confirm your purchase and you will have successfully purchased ETH. -9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/). - - To transfer the ETH to your wallet, click on the "Accounts" button on the top right of the page. - - Click on the "Send" button next to the ETH account. - - Enter the amount of ETH you want to send and the wallet address you want to send it to. - - Ensure that you are sending to your Ethereum wallet address on Arbitrum One. +Ce sera un guide étape par étape pour acheter de l'ETH sur Coinbase. + +1. Accédez à [Coinbase](https://www.coinbase.com/) et créez un compte. +2. Une fois que vous avez créé un compte, vérifiez votre identité via un processus appelé KYC (ou Know Your Customer). l s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. +3. Une fois que vous avez vérifié votre identité, achetez de l'ETH en cliquant sur le bouton « Acheter/Vendre » en haut à droite de la page. +4. Choisissez la devise que vous souhaitez acheter. Sélectionnez ETH. +5. Sélectionnez votre mode de paiement préféré. +6. Entrez le montant d'ETH que vous souhaitez acheter. +7. Vérifiez votre achat et cliquez sur « Acheter des Ethereum ». +8. Confirmez votre achat et vous aurez acheté avec succès de l'ETH. +9. Vous pouvez transférer l'ETH de votre compte Coinbase vers votre portefeuille tel que [MetaMask](https://metamask.io/). 
+ - Pour transférer l'ETH vers votre portefeuille, cliquez sur le bouton « Comptes » en haut à droite de la page. + - Cliquez sur le bouton « Envoyer » à côté du compte ETH. + - Entrez le montant d'ETH que vous souhaitez envoyer et l'adresse du portefeuille vers lequel vous souhaitez l'envoyer. + - Assurez-vous que vous envoyez à votre adresse de portefeuille Ethereum sur Arbitrum One. - Cliquez sur "Continuer" et confirmez votre transaction. Vous pouvez en savoir plus sur l'obtention d'ETH sur Coinbase [ici](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). @@ -185,29 +185,29 @@ Ce sera un guide étape par étape pour acheter des ETH sur Binance. 6. Entrez le montant d'ETH que vous souhaitez acheter. 7. Vérifiez votre achat et cliquez sur « Acheter ETH ». 8. Confirmez votre achat et vous verrez votre ETH dans votre portefeuille Binance Spot. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). - - To withdraw the ETH to your wallet, add your wallet's address to the withdrawal whitelist. +9. Vous pouvez retirer l'ETH de votre compte vers votre portefeuille tel que [MetaMask](https://metamask.io/). + - Pour retirer l'ETH vers votre portefeuille, ajoutez l'adresse de votre portefeuille à la liste blanche de retrait. - Cliquez sur le bouton « portefeuille », cliquez sur retirer et sélectionnez ETH. - Entrez le montant d'ETH que vous souhaitez envoyer et l'adresse du portefeuille sur liste blanche à laquelle vous souhaitez l'envoyer. - - Ensure that you are sending to your Ethereum wallet address on Arbitrum One. + - Assurez-vous que vous envoyez à votre adresse de portefeuille Ethereum sur Arbitrum One. - Cliquez sur "Continuer" et confirmez votre transaction. Vous pouvez en savoir plus sur l'obtention d'ETH sur Binance [ici](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). -## Billing FAQs +## FAQ sur la facturation -### How many queries will I need? +### De combien de requêtes aurai-je besoin ? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +Vous n'avez pas besoin de savoir à l'avance combien de requêtes vous aurez besoin. Vous ne serez facturé que pour ce que vous utilisez et vous pourrez retirer des GRT de votre compte à tout moment. -We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. +Nous vous recommandons de surestimer le nombre de requêtes dont vous aurez besoin afin de ne pas avoir à recharger votre solde fréquemment. Pour les applications de petite et moyenne taille, une bonne estimation consiste à commencer par 1 à 2 millions de requêtes par mois et à surveiller de près l'utilisation au cours des premières semaines. Pour les applications plus grandes, une bonne estimation consiste à utiliser le nombre de visites quotidiennes que reçoit votre site multiplié par le nombre de requêtes que votre page la plus active effectue à son ouverture. 
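To make the sizing rule of thumb above concrete, here is a tiny sketch of the suggested arithmetic: daily visits multiplied by the number of queries your busiest page fires on load, scaled to a month. The traffic figures below are purely illustrative assumptions, not real data.

```typescript
// Rough monthly query estimate, following the rule of thumb described above.
function estimateMonthlyQueries(dailyVisits: number, queriesPerPageLoad: number): number {
  const DAYS_PER_MONTH = 30;
  return dailyVisits * queriesPerPageLoad * DAYS_PER_MONTH;
}

// Illustrative inputs: 5,000 visits per day, 10 queries fired when the busiest page opens.
// Result: 1,500,000 queries per month, in line with the 1M-2M starting estimate above.
console.log(estimateMonthlyQueries(5000, 10));
```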
-Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage. +Bien entendu, les nouveaux utilisateurs et les utilisateurs existants peuvent contacter l'équipe BD d'Edge & Node pour une consultation afin d'en savoir plus sur l'utilisation prévue. -### Can I withdraw GRT from my billing balance? +### Puis-je retirer du GRT de mon solde de facturation ? -Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). +Oui, vous pouvez toujours retirer les GRT qui n'ont pas déjà été utilisés pour des requêtes de votre solde de facturation. Le contrat de facturation est uniquement conçu pour transférer des GRT de l'Ethereum Mainnet vers le réseau Arbitrum. Si vous souhaitez transférer vos GRT d'Arbitrum vers le réseau principal Ethereum, vous devrez utiliser le [pont Arbitrum](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### Que se passe-t-il lorsque mon solde de facturation est épuisé ? Vais-je recevoir un avertissement ? -You will receive several email notifications before your billing balance runs out. +Vous recevrez plusieurs notifications par e-mail avant que votre solde de facturation ne soit épuisé. diff --git a/website/pages/fr/chain-integration-overview.mdx b/website/pages/fr/chain-integration-overview.mdx index 9310317d84ca..8e3cc18a00bd 100644 --- a/website/pages/fr/chain-integration-overview.mdx +++ b/website/pages/fr/chain-integration-overview.mdx @@ -6,12 +6,12 @@ Un processus d'intégration transparent et basé sur la gouvernance a été con ## Étape 1. Intégration technique -- Les équipes travaillent sur une intégration de Graph Node et Firehose pour les chaînes non basées sur EVM. [Voici comment](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Les équipes lancent le processus d'intégration du protocole en créant un fil de discussion sur le forum [ici](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (sous-catégorie Nouvelles sources de données sous Gouvernance et GIPs ). L'utilisation du modèle de forum par défaut est obligatoire. ## Étape 2. Validation de l'intégration -- Les équipes collaborent avec les développeurs principaux, Graph Foundation et les opérateurs d'interfaces graphiques et de passerelles réseau, tels que [Subgraph Studio](https://thegraph.com/studio/), pour garantir un processus d'intégration fluide. Cela implique de fournir l'infrastructure backend nécessaire, telle que les points de terminaison JSON RPC ou Firehose de la chaîne d'intégration. Les équipes souhaitant éviter d'auto-héberger une telle infrastructure peuvent s'appuyer sur la communauté d'opérateurs de nœuds (indexeurs) de The Graph, ce que la Fondation peut aider à faire. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints.
Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Les Graph Indexeurs testent l'intégration sur le réseau de testnet du graph. - Les développeurs principaux et les indexeurs surveillent la stabilité, les performances et le déterminisme des données. @@ -38,7 +38,7 @@ Ce processus est lié au service de données Subgraph, applicable uniquement aux Cela n’aurait un impact que sur la prise en charge du protocole pour l’indexation des récompenses sur les subgraphs alimentés par Substreams. La nouvelle implémentation de Firehose nécessiterait des tests sur testnet, en suivant la méthodologie décrite pour l'étape 2 de ce GIP. De même, en supposant que l'implémentation soit performante et fiable, un PR sur la [Matrice de support des fonctionnalités](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) serait requis ( Fonctionnalité de sous-graphe « Sous-flux de sources de données »), ainsi qu'un nouveau GIP pour la prise en charge du protocole pour l'indexation des récompenses. N'importe qui peut créer le PR et le GIP ; la Fondation aiderait à obtenir l'approbation du Conseil. -### 3. Combien de temps ce processus prendra-t-il ? +### 3. How much time will the process of reaching full protocol support take? Le temps nécessaire à la mise en réseau principal devrait être de plusieurs semaines, variant en fonction du temps de développement de l'intégration, de la nécessité ou non de recherches supplémentaires, de tests et de corrections de bugs et, comme toujours, du calendrier du processus de gouvernance qui nécessite les commentaires de la communauté. @@ -46,4 +46,4 @@ La prise en charge du protocole pour l'indexation des récompenses dépend de la ### 4. Comment les priorités seront-elles gérées ? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/fr/cookbook/arweave.mdx b/website/pages/fr/cookbook/arweave.mdx index d4ba304400b1..b65102acbee9 100644 --- a/website/pages/fr/cookbook/arweave.mdx +++ b/website/pages/fr/cookbook/arweave.mdx @@ -105,7 +105,7 @@ La définition du schéma décrit la structure de la base de données de subgrap Les gestionnaires pour le traitement des événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/). -L'indexation Arweave introduit des types de données spécifiques à Arweave dans l'[API AssemblyScript](/developing/graph-ts/api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -155,7 +155,7 @@ L'écriture des mappages d'un subgraph Arweave est très similaire à l'écritur Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
```bash -graph deploy --studio --access-token +graph deploy --access-token ``` ## Interroger un subgraph d'Arweave diff --git a/website/pages/fr/cookbook/avoid-eth-calls.mdx b/website/pages/fr/cookbook/avoid-eth-calls.mdx index 446b0e8ecd17..8897ecdbfdc7 100644 --- a/website/pages/fr/cookbook/avoid-eth-calls.mdx +++ b/website/pages/fr/cookbook/avoid-eth-calls.mdx @@ -99,4 +99,18 @@ Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0 ## Conclusion -We can significantly improve indexing performance by minimizing or eliminating `eth_calls` in our subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/fr/cookbook/cosmos.mdx b/website/pages/fr/cookbook/cosmos.mdx index 9d332f717043..ed318635f292 100644 --- a/website/pages/fr/cookbook/cosmos.mdx +++ b/website/pages/fr/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and Les gestionnaires pour le traitement des événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/). -L'indexation Cosmos introduit des types de données spécifiques à Cosmos dans l'[API AssemblyScript](/developing/graph-ts/api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -203,7 +203,7 @@ Une fois votre subgraph créé, vous pouvez le déployer en utilisant la command Visit the Subgraph Studio to create a new subgraph. ```bash -graph deploy --studio subgraph-name +graph deploy subgraph-name ``` **Local Graph Node (based on default configuration):** diff --git a/website/pages/fr/cookbook/derivedfrom.mdx b/website/pages/fr/cookbook/derivedfrom.mdx index 69dd48047744..09ba62abde3f 100644 --- a/website/pages/fr/cookbook/derivedfrom.mdx +++ b/website/pages/fr/cookbook/derivedfrom.mdx @@ -69,6 +69,20 @@ This will not only make our subgraph more efficient, but it will also unlock thr ## Conclusion -Adopting the `@derivedFrom` directive in subgraphs effectively handles dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. -To learn more detailed strategies to avoid large arrays, read this blog from Kevin Jones: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. 
[Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/fr/cookbook/enums.mdx b/website/pages/fr/cookbook/enums.mdx index a10970c1539f..ac68bdc05ade 100644 --- a/website/pages/fr/cookbook/enums.mdx +++ b/website/pages/fr/cookbook/enums.mdx @@ -50,7 +50,7 @@ type Token @entity { In this schema, `TokenStatus` is a simple string with no specific, allowed values. -#### Why is this a problem? +#### Pourquoi est-ce un problème ? - There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. - It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## Ressources additionnelles For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). diff --git a/website/pages/fr/cookbook/grafting-hotfix.mdx b/website/pages/fr/cookbook/grafting-hotfix.mdx index 4be0a0b07790..fd32ece68e7e 100644 --- a/website/pages/fr/cookbook/grafting-hotfix.mdx +++ b/website/pages/fr/cookbook/grafting-hotfix.mdx @@ -1,12 +1,12 @@ --- -Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment --- ## TLDR Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. -### Overview +### Aperçu This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. @@ -164,7 +164,7 @@ Grafting is an effective strategy for deploying hotfixes in subgraph development However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. -## Additional Resources +## Ressources additionnelles - **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. @@ -173,14 +173,14 @@ By incorporating grafting into your subgraph development workflow, you can enhan ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/fr/cookbook/grafting.mdx b/website/pages/fr/cookbook/grafting.mdx index 7a7c618dc550..b255c571ec8b 100644 --- a/website/pages/fr/cookbook/grafting.mdx +++ b/website/pages/fr/cookbook/grafting.mdx @@ -22,7 +22,7 @@ Pour plus d’informations, vous pouvez vérifier : - [Greffage](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -Dans ce tutoriel, nous allons aborder un cas d'utilisation de base. Nous allons remplacer un contrat existant par un contrat identique (avec une nouvelle adresse, mais le même code). Ensuite, nous grefferons le subgraph existant sur le subgraph "de base" qui suit le nouveau contrat. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Remarque importante sur le greffage lors de la mise à niveau vers le réseau @@ -30,7 +30,7 @@ Dans ce tutoriel, nous allons aborder un cas d'utilisation de base. Nous allons ### Pourquoi est-ce important? -La greffe est une fonctionnalité puissante qui permet de "greffer" un subgraph sur un autre, transférant ainsi les données historiques du subgraph existant vers une nouvelle version. Bien qu'il s'agisse d'un moyen efficace de préserver les données et de gagner du temps sur l'indexation, la greffe peut introduire des complexités et des problèmes potentiels lors de la migration d'un environnement hébergé vers le réseau décentralisé. Il n'est pas possible de greffer un subgraph du Graph Network vers le service hébergé ou le Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Les meilleures pratiques @@ -80,7 +80,7 @@ dataSources: ``` - La source de données `Lock` est l'adresse abi et le contrat que nous obtiendrons lorsque nous compilerons et déploierons le contrat -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - La section `mapping` définit les déclencheurs d'intérêt et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Dans ce cas, nous écoutons l'événement `Withdrawal` et appelons la fonction `handleWithdrawal` lorsqu'elle est émise. ## Définition de manifeste de greffage @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
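For readers skimming this hunk, the graft manifest definition referenced just above comes down to two extra entries in `subgraph.yaml`. This is only a sketch with hypothetical values: the real `base` is the deployment ID of the subgraph being grafted onto, and `block` is the block height at which indexing should resume.

```yaml
# Sketch of the graft-specific manifest fields only; schema, dataSources and
# mappings are unchanged. The base ID and block number below are placeholders.
features:
  - grafting # feature flag required when grafting
graft:
  base: Qm... # deployment ID of the base subgraph (hypothetical placeholder)
  block: 5956000 # block at which to start indexing on top of the base (illustrative)
```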
## Ressources complémentaires -Si vous souhaitez acquérir plus d'expérience en matière de greffes, voici quelques exemples de contrats populaires : +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/fr/cookbook/immutable-entities-bytes-as-ids.mdx b/website/pages/fr/cookbook/immutable-entities-bytes-as-ids.mdx index f38c33385604..541212617f9f 100644 --- a/website/pages/fr/cookbook/immutable-entities-bytes-as-ids.mdx +++ b/website/pages/fr/cookbook/immutable-entities-bytes-as-ids.mdx @@ -174,3 +174,17 @@ Query Response: Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/fr/cookbook/near.mdx b/website/pages/fr/cookbook/near.mdx index 4296699b5744..cb3ed047e136 100644 --- a/website/pages/fr/cookbook/near.mdx +++ b/website/pages/fr/cookbook/near.mdx @@ -37,7 +37,7 @@ La définition d'un subgraph comporte trois aspects : **schema.graphql** : un fichier de schéma qui définit quelles données sont stockées pour votre subgraph, et comment les interroger via GraphQL. Les exigences pour les subgraphs NEAR sont couvertes par la [documentation existante](/developing/creating-a-subgraph#the-graphql-schema). -**Mappages AssemblyScript :** [Code AssemblyScript](/developing/graph-ts/api) qui traduit les données d'événement en entités définies dans votre schéma. La prise en charge de NEAR introduit des types de données spécifiques à NEAR et une nouvelle fonctionnalité d'analyse JSON. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. Lors du développement du subgraph, il y a deux commandes clés : @@ -98,7 +98,7 @@ La définition du schema décrit la structure de la base de données de subgraph Les gestionnaires de traitement des événements sont écrits dans l'[AssemblyScript](https://www.assemblyscript.org/). -L'indexation NEAR introduit des types de données spécifiques à NEAR dans l'[API AssemblyScript](/developing/graph-ts/api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). 
```typescript @@ -165,9 +165,9 @@ Ces types sont passés au bloc & gestionnaires de reçus : - Les gestionnaires de blocs reçoivent un `Block` - Les gestionnaires de reçus reçoivent un `ReceiptWithOutcome` -Sinon, le reste de l'[API AssemblyScript](/developing/graph-ts/api) est disponible pour les développeurs de subgraphs NEAR pendant l'exécution du mapping. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -Cela inclut une nouvelle fonction d'analyse JSON - les journaux sur NEAR sont fréquemment émis sous forme de JSON stringifiés. Une nouvelle fonction `json.fromString(...)` est disponible dans le cadre de l'[API JSON](/developing/graph-ts/api#json-api) pour permettre aux développeurs pour traiter facilement ces journaux. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Déploiement d'un subgraph NEAR @@ -194,8 +194,8 @@ La configuration du nœud dépend de l'endroit où le subgraph est déployé. ### Subgraph Studio ```sh -graph auth --studio -graph deploy --studio +graph auth +graph deploy ``` ### Nœud Graph local ( en fonction de la configuration par défaut) diff --git a/website/pages/fr/cookbook/pruning.mdx b/website/pages/fr/cookbook/pruning.mdx index f22a2899f1de..d86bf50edf42 100644 --- a/website/pages/fr/cookbook/pruning.mdx +++ b/website/pages/fr/cookbook/pruning.mdx @@ -39,3 +39,17 @@ dataSources: ## Conclusion Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/fr/cookbook/subgraph-uncrashable.mdx b/website/pages/fr/cookbook/subgraph-uncrashable.mdx index 56b166b1056f..319851bc8579 100644 --- a/website/pages/fr/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/fr/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Générateur de code de subgraph sécurisé - Le cadre comprend également un moyen (via le fichier de configuration) de créer des fonctions de définition personnalisées, mais sûres, pour des groupes de variables d'entité. De cette façon, il est impossible pour l'utilisateur de charger/utiliser une entité de graph obsolète et il est également impossible d'oublier de sauvegarder ou définissez une variable requise par la fonction. -- Les journaux d'avertissement sont enregistrés sous forme de journaux indiquant où il y a une violation de la logique de subgraph pour aider à corriger le problème afin de garantir l'exactitude des données. Ces journaux peuvent être consultés dans le service hébergé de The Graph dans la section "Journaux". +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. 
Subgraph Uncrashable peut être exécuté en tant qu'indicateur facultatif à l'aide de la commande Graph CLI codegen. diff --git a/website/pages/fr/cookbook/timeseries.mdx b/website/pages/fr/cookbook/timeseries.mdx index 88ee70005a6e..44d7eca76ee9 100644 --- a/website/pages/fr/cookbook/timeseries.mdx +++ b/website/pages/fr/cookbook/timeseries.mdx @@ -6,7 +6,7 @@ title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggr Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. -## Overview +## Aperçu Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. @@ -44,7 +44,7 @@ A timeseries entity represents raw data points collected over time. It is define - `id`: Must be of type `Int8!` and is auto-incremented. - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. -Example: +L'exemple: ```graphql type Data @entity(timeseries: true) { @@ -61,7 +61,7 @@ An aggregation entity computes aggregated values from a timeseries source. It is - Annotation Arguments: - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). -Example: +L'exemple: ```graphql type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { @@ -77,7 +77,7 @@ In this example, Stats aggregates the price field from Data over hourly and dail Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. -Example: +L'exemple: ```graphql { @@ -101,7 +101,7 @@ Example: Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. -Example: +L'exemple: ### Timeseries Entity @@ -181,14 +181,14 @@ By adopting this pattern, developers can build more efficient and scalable subgr ## Subgraph Best Practices 1-6 -1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/fr/cookbook/transfer-to-the-graph.mdx b/website/pages/fr/cookbook/transfer-to-the-graph.mdx index 287cd7d81b4b..3ffe317f8063 100644 --- a/website/pages/fr/cookbook/transfer-to-the-graph.mdx +++ b/website/pages/fr/cookbook/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -48,7 +48,7 @@ graph init --product subgraph-studio In The Graph CLI, use the auth command seen in Subgraph Studio: ```sh -graph auth --studio +graph auth ``` ## 2. Deploy Your Subgraph to Studio @@ -58,7 +58,7 @@ If you have your source code, you can easily deploy it to Studio. If you don't h In The Graph CLI, run the following command: ```sh -graph deploy --studio --ipfs-hash +graph deploy --ipfs-hash ``` @@ -74,7 +74,7 @@ graph deploy --studio --ipfs-hash You can start [querying](/querying/querying-the-graph/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. -#### Example +#### Exemple [CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### Ressources additionnelles - To quickly create and publish a new subgraph, check out the [Quick Start](/quick-start/). - To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). diff --git a/website/pages/fr/deploying/deploy-using-subgraph-studio.mdx b/website/pages/fr/deploying/deploy-using-subgraph-studio.mdx index 502169b4ccfa..3aeb7730bd33 100644 --- a/website/pages/fr/deploying/deploy-using-subgraph-studio.mdx +++ b/website/pages/fr/deploying/deploy-using-subgraph-studio.mdx @@ -1,139 +1,137 @@ --- -title: Deploy Using Subgraph Studio +title: Déploiement en utilisant Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Apprenez à déployer votre subgraph sur Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it on-chain. +> Note : Lorsque vous déployez un subgraph, vous le poussez vers Subgraph Studio, où vous pourrez le tester. 
Il est important de se rappeler que déployer n'est pas la même chose que publier. Lorsque vous publiez un subgraph, vous le publiez sur la blockchain. -## Subgraph Studio Overview +## Présentation de Subgraph Studio -In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: +Dans [Subgraph Studio](https://thegraph.com/studio/), vous pouvez faire ce qui suit : -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs -- Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph through the Studio UI -- Deploy your subgraph using the The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph with the Studio UI -- Manage your billing +- Voir une liste des subgraphs que vous avez créés +- Gérer, voir les détails et visualiser l'état d'un subgraph spécifique +- Créez et gérez vos clés API pour des subgraphs spécifiques +- Limitez vos clés API à des domaines spécifiques et autorisez uniquement certains Indexeurs à les utiliser pour effectuer des requêtes +- Créer votre subgraph +- Déployer votre subgraph en utilisant The Graph CLI +- Tester votre subgraph dans l'environnement de test +- Intégrer votre subgraph en staging en utilisant l'URL de requête du développement +- Publier votre subgraph sur The Graph Network +- Gérer votre facturation -## Install The Graph CLI +## Installer The Graph CLI -Before deploying, you must install The Graph CLI. +Avant de déployer, vous devez installer The Graph CLI. -You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use The Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +Vous devez avoir [Node.js](https://nodejs.org/) et un gestionnaire de packages de votre choix (`npm`, `yarn` ou `pnpm`) installés pour utiliser The Graph CLI. Vérifiez la version la [plus récente](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) de l'outil CLI. -**Install with yarn:** +### Installation avec yarn ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +### Installation avec npm ```bash npm install -g @graphprotocol/graph-cli ``` -## Create Your Subgraph - -Before deploying your subgraph you need to create an account in [Subgraph Studio](https://thegraph.com/studio/). +## Commencer -1. Open [Subgraph Studio](https://thegraph.com/studio/). -2. Connect your wallet to sign in. - - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +1. Ouvrez [Subgraph Studio](https://thegraph.com/studio/). +2. Connectez votre portefeuille pour vous identifier. + - Vous pouvez le faire via MetaMask, Coinbase Wallet, WalletConnect ou Safe. +3. Après vous être connecté, votre clé de déploiement unique sera affichée sur la page des détails de votre subgraph.
+ - La clé de déploiement vous permet de publier vos subgraphs ou de gérer vos clés d'API et votre facturation. Elle est unique mais peut être régénérée si vous pensez qu'elle a été compromise. -> Important: You need an API key to query subgraphs +> Important : Vous avez besoin d'une clé API pour interroger les subgraphs -### How to Create a Subgraph in Subgraph Studio +### Comment créer un subgraph dans Subgraph Studio -> For additional written detail, review the [Quick-Start](/quick-start/). +> Pour des informations supplémentaires écrites, consultez le [Quick Start](/quick-start/). -### Subgraph Compatibility with The Graph Network +### Compatibilité des subgraphs avec le réseau de The Graph -In order to be supported by Indexers on The Graph Network, subgraphs must: +Pour être pris en charge par les Indexeurs sur The Graph Network, les subgraphs doivent : -- Index a [supported network](/developing/supported-networks) -- Must not use any of the following features: +- Indexer un [réseau pris en charge](/developing/supported-networks) +- Ne doit utiliser aucune des fonctionnalités suivantes : - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting + - Erreurs non fatales + - La greffe -## Initialize Your Subgraph +## Initialisez votre Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Une fois que votre subgraph a été créé dans Subgraph Studio, vous pouvez initialiser son code via la CLI en utilisant cette commande : ```bash -graph init --studio +graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +Vous pouvez trouver la valeur `` sur la page des détails de votre subgraph dans Subgraph Studio, voir l'image ci-dessous : ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +Après avoir exécuté la commande `graph init`, ilvous sera demandé de saisir l'adresse du contrat, le réseau, et un ABI que vous souhaitez interroger. Cela générera un nouveau dossier sur votre machine locale avec quelques codes de base pour commencer à travailler sur votre subgraph. Vous pouvez ensuite finaliser votre subgraph pour vous assurer qu'il fonctionne comme prévu. -## Graph Auth +## Authentification The Graph -Before you can deploy your subgraph to Subgraph Studio, you need to login into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Avant de pouvoir déployer votre subgraph sur Subgraph Studio, vous devez vous connecter à votre compte via la CLI. Pour le faire, vous aurez besoin de votre clé de déploiement, que vous pouvez trouver sur la page des détails de votre subgraph. -Then, use the following command to authenticate from the CLI: +Ensuite, utilisez la commande suivante pour vous authentifier depuis la CLI : ```bash -graph auth --studio +graph auth ``` -## Deploying a Subgraph +## Déploiement d'un Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Une fois prêt, vous pouvez déployer votre subgraph sur Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and and update the metadata. 
This action won't publish your subgraph to the decentralized network. +> Déployer un subgraph avec la CLI le pousse vers le Studio, où vous pouvez le tester et mettre à jour les métadonnées. Cette action ne publiera pas votre subgraph sur le réseau décentralisé. -Use the following CLI command to deploy your subgraph: +Utilisez la commande CLI suivante pour déployer votre subgraph : ```bash -graph deploy --studio +graph deploy ``` -After running this command, the CLI will ask for a version label. +Après avoir exécuté cette commande, la CLI demandera une étiquette de version. -- It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as `v1`, `version1`, or `asdf`. -- The labels you create will be visible in Graph Explorer and can be used by curators to decide if they want to signal on a specific version or not, so choose them wisely. +- Il est fortement recommandé d'utiliser [semver](https://semver.org/) pour le versionnage, comme `0.0.1`. Cela dit, vous êtes libre de choisir n'importe quelle chaîne de caractère comme version telle que v1, version1 ou asdf. +- Les étiquettes que vous créez seront visibles dans Graph Explorer et pourront être utilisées par les Curateurs pour décider s'ils veulent signaler sur une version spécifique ou non, donc choisissez-les judicieusement. -## Testing Your Subgraph +## Tester votre Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +Après le déploiement, vous pouvez tester votre subgraph (soit dans Subgraph Studio, soit dans votre propre application, avec l'URL de requête du déploiement), déployer une autre version, mettre à jour les métadonnées, et publier sur [Graph Explorer](https://thegraph.com/explorer) lorsque vous êtes prêt. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Utilisez Subgraph Studio pour vérifier les journaux (logs) sur le tableau de bord et rechercher les erreurs éventuelles de votre subgraph. -## Publish Your Subgraph +## Publiez votre subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/publishing/publishing-a-subgraph/). +Pour publier votre subgraph avec succès, consultez la page [publier un subgraph](/publishing/publishing-a-subgraph/). -## Versioning Your Subgraph with the CLI +## Versionning de votre subgraph avec le CLI -If you want to update your subgraph, you can do the following: +Si vous souhaitez mettre à jour votre subgraph, vous pouvez faire ce qui suit : -- You can deploy a new version to Studio using the CLI (it will only be private at this point). -- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- Vous pouvez déployer une nouvelle version dans Studio en utilisant la CLI (cette version sera privée à ce stade). +- Une fois que vous en êtes satisfait, vous pouvez publier votre nouveau déploiement sur [Graph Explorer](https://thegraph.com/explorer). +- Cette action créera une nouvelle version de votre subgraph sur laquelle les Curateurs pourront commencer à signaler et que les Indexeurs pourront indexer. 
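Une fois une version déployée, vous pouvez l'interroger depuis votre propre application via l'URL de requête du déploiement, comme évoqué dans la section « Tester votre Subgraph » ci-dessus. Voici une esquisse minimale en TypeScript : l'URL et la requête sont purement indicatives (la requête suppose le schéma du subgraph d'exemple Gravatar) et doivent être remplacées par celles affichées dans Subgraph Studio.

```typescript
// Esquisse minimale : interroger une version déployée depuis une application.
// L'URL ci-dessous est un modèle hypothétique ; utilisez l'URL de requête
// affichée dans Subgraph Studio pour votre propre déploiement.
const DEPLOYMENT_QUERY_URL = 'https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_SLUG>/<VERSION>'

async function queryDeployment(): Promise<void> {
  const response = await fetch(DEPLOYMENT_QUERY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      // Requête d'exemple basée sur le subgraph Gravatar ; adaptez-la à votre schéma.
      query: '{ gravatars(first: 5) { id displayName } }',
    }),
  })
  const result = await response.json()
  if (result.errors) {
    console.error('Erreurs GraphQL :', result.errors)
    return
  }
  console.log(result.data)
}

queryDeployment()
```

Le même principe s'applique à une version publiée sur le réseau, en utilisant cette fois l'URL de requête fournie par Graph Explorer avec votre clé API.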
-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +Vous pouvez également mettre à jour les métadonnées de votre subgraph sans publier une nouvelle version. Vous pouvez mettre à jour les détails de votre subgraph dans Studio (sous la photo de profil, le nom, la description, etc.) en cochant une option appelée **Update Details** dans [Graph Explorer](https://thegraph.com/explorer). Si cette option est cochée, une transaction sera générée sur la blockchain (on-chain) pour mettre à jour les détails du subgraph dans Explorer sans avoir à publier une nouvelle version avec un nouveau déploiement. -> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/network/curating/). +> Remarque : Il y a des coûts associés à la publication d'une nouvelle version d'un subgraph sur le réseau. En plus des frais de transaction, vous devez également financer une partie de la taxe de curation sur le signal d'auto-migration . Vous ne pouvez pas publier une nouvelle version de votre subgraph si les Curateurs n'ont pas signalé dessus. Pour plus d'informations, veuillez lire plus [ici](/network/curating/). -## Automatic Archiving of Subgraph Versions +## Archivage automatique des versions de subgraphs -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Chaque fois que vous déployez une nouvelle version de subgraph dans Subgraph Studio, la version précédente sera archivée. Les versions archivées ne seront pas indexées/synchronisées et ne pourront donc pas être interrogées. Vous pouvez désarchiver une version de votre subgraph dans Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Remarque : les versions précédentes des subgraphs non publiés mais déployés dans Studio seront automatiquement archivées. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/pages/fr/deploying/multiple-networks.mdx b/website/pages/fr/deploying/multiple-networks.mdx index dc2b8e533430..c4e981a572cc 100644 --- a/website/pages/fr/deploying/multiple-networks.mdx +++ b/website/pages/fr/deploying/multiple-networks.mdx @@ -1,37 +1,36 @@ --- -title: Deploying a Subgraph to Multiple Networks +title: Déploiement d'un subgraph sur plusieurs réseaux --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). +Cette page explique comment déployer un subgraph sur plusieurs réseaux. 
Pour déployer un subgraph, vous devez premièrement installer le [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). Si vous n'avez pas encore créé de subgraph, consultez [Creation d'un subgraph](/developing/creating-a-subgraph). -## Deploying the subgraph to multiple networks +## Déploiement du subgraph sur plusieurs réseaux -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +Dans certains cas, vous souhaiterez déployer le même subgraph sur plusieurs réseaux sans dupliquer tout son code. Le principal défi qui en découle est que les adresses contractuelles sur ces réseaux sont différentes. -### Using `graph-cli` +### En utilisant `graph-cli` -Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: +Les commandes `graph build` (depuis la version `v0.29.0`) et `graph deploy` (depuis la version `v0.32.0`) acceptent deux nouvelles options: ```sh Options: - ... - --network Network configuration to use from the networks config file - --network-file Networks config file path (default: "./networks.json") + --network Configuration du réseau à utiliser à partir du fichier de configuration des réseaux + --network-file Chemin du fichier de configuration des réseaux (par défaut : "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +Vous pouvez utiliser l'option `--network` pour spécifier une configuration de réseau à partir d'un fichier standard `json` (par défaut networks.json) pour facilement mettre à jour votre subgraph pendant le développement. -> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. +> Note : La commande `init` générera désormais automatiquement un fichier networks.json en se basant sur les informations fournies. Vous pourrez ensuite mettre à jour les réseaux existants ou en ajouter de nouveaux. -If you don't have a `networks.json` file, you'll need to manually create one with the following structure: +Si vous n'avez pas de fichier `networks.json`, vous devrez en créer un manuellement avec la structure suivante : ```json { - "network1": { // the network name - "dataSource1": { // the dataSource name - "address": "0xabc...", // the contract address (optional) - "startBlock": 123456 // the startBlock (optional) + "network1": { // le nom du réseau + "dataSource1": { // le nom de la source de données + "address": "0xabc...", // l'adresse du contrat (facultatif) + "startBlock": 123456 // le bloc de départ (facultatif) }, "dataSource2": { "address": "0x123...", @@ -52,9 +51,9 @@ If you don't have a `networks.json` file, you'll need to manually create one wit } ``` -> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. +> Note : Vous n'avez besoin de spécifier aucun des `templates` (si vous en avez) dans le fichier de configuration, uniquement les `dataSources`. 
Si des `templates` sont déclarés dans le fichier `subgraph.yaml`, leur réseau sera automatiquement mis à jour vers celui spécifié avec l'option `--network`. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Supposons maintenant que vous souhaitiez déployer votre subgraph sur les réseaux `mainnet` et `sepolia`, et que ceci est votre fichier subgraph.yaml : ```yaml # ... @@ -69,7 +68,7 @@ dataSources: kind: ethereum/events ``` -This is what your networks config file should look like: +Voici à quoi devrait ressembler votre fichier de configuration réseau : ```json { @@ -86,17 +85,17 @@ This is what your networks config file should look like: } ``` -Now we can run one of the following commands: +Nous pouvons maintenant exécuter l'une des commandes suivantes : ```sh -# Using default networks.json file +# En utilisant le fichier networks.json par défaut yarn build --network sepolia -# Using custom named file -yarn build --network sepolia --network-file path/to/config + # En utilisant un fichier personnalisé +yarn build --network sepolia --network-file chemin/à/configurer ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +La commande `build` mettra à jour votre fichier `subgraph.yaml` avec la configuration `sepolia` puis recompilera le subgraph. Votre fichier `subgraph.yaml` devrait maintenant ressembler à ceci: ```yaml # ... @@ -111,23 +110,23 @@ dataSources: kind: ethereum/events ``` -Now you are ready to `yarn deploy`. +Vous êtes maintenant prêt à utiliser la commande `yarn deploy`. -> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: +> Note : Comme mentionné précédemment, depuis `graph-cli 0.32.0`, vous pouvez directement exécuter `yarn deploy` avec l'option `--network`: ```sh -# Using default networks.json file +# En utilisant le fichier networks.json par défaut yarn deploy --network sepolia -# Using custom named file -yarn deploy --network sepolia --network-file path/to/config + # En utilisant un fichier personnalisé +yarn deploy --network sepolia --network-file chemin/à/configurer ``` -### Using subgraph.yaml template +### Utilisation du modèle subgraph.yaml -One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +Une façon de paramétrer des aspects tels que les adresses de contrat en utilisant des versions plus anciennes de `graph-cli` est de générer des parties de celui-ci avec un système de creation de modèle comme [Mustache](https://mustache.github.io/) ou [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +Pour illustrer cette approche, supposons qu'un subgraph doive être déployé sur le réseau principal (mainnet) et sur Sepolia en utilisant des adresses de contrat différentes. 
Vous pourriez alors définir deux fichiers de configuration fournissant les adresses pour chaque réseau : ```json { @@ -136,7 +135,7 @@ To illustrate this approach, let's assume a subgraph should be deployed to mainn } ``` -and +et ```json { @@ -145,7 +144,7 @@ and } ``` -Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: +Avec ceci, vous remplacerez le nom du réseau et les adresses dans le manifeste par des variables de type `{{network}}` et `{{address}}` et renommer le manifeste par exemple `subgraph.template.yaml`: ```yaml # ... @@ -162,7 +161,7 @@ dataSources: kind: ethereum/events ``` -In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: +Pour générer un manifeste pour l'un ou l'autre réseau, vous pourriez ajouter deux commandes supplémentaires au fichier `package.json` ainsi qu'une dépendance à `mustache` : ```json { @@ -179,7 +178,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +Pour déployer ce subgraph pour mainnet ou Sepolia, vous devez simplement exécuter l'une des deux commandes suivantes : ```sh # Mainnet: @@ -189,29 +188,29 @@ yarn prepare:mainnet && yarn deploy yarn prepare:sepolia && yarn deploy ``` -A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). +Un exemple fonctionnel de ceci peut être trouvé [ici](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). -**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. +Note : Cette approche peut également être appliquée à des situations plus complexes, dans lesquelles il est nécessaire de remplacer plus que les adresses des contrats et les noms de réseau ou où il est nécessaire de générer des mappages ou alors des ABI à partir de modèles également. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +Cela vous donnera le `chainHeadBlock` que vous pouvez comparer avec le `latestBlock` sur votre subgraph pour vérifier s'il est en retard. `synced` vous informe si le subgraph a déjà rattrapé la chaîne. `health` peut actuellement prendre les valeurs de `healthy` si aucune erreur ne s'est produite, ou `failed` s'il y a eu une erreur qui a stoppé la progression du subgraph. Dans ce cas, vous pouvez vérifier le champ `fatalError` pour les détails sur cette erreur. 
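À titre d'illustration, voici une esquisse TypeScript, hypothétique, qui envoie la requête de statut présentée dans la section « Vérification de l'état des subgraphs » ci-dessous à l'endpoint de statut de Graph Node, puis compare `chainHeadBlock` et `latestBlock`. Le nom `org/subgraph` est un exemple à remplacer par le vôtre.

```typescript
// Esquisse minimale : vérifier l'état d'indexation d'un subgraph via l'endpoint de statut.
// Sur un nœud local, cet endpoint est disponible par défaut sur le port 8030 (http://localhost:8030/graphql).
const STATUS_ENDPOINT = 'https://api.thegraph.com/index-node/graphql'

const statusQuery = `{
  indexingStatusForCurrentVersion(subgraphName: "org/subgraph") {
    synced
    health
    fatalError { message }
    chains {
      chainHeadBlock { number }
      latestBlock { number }
    }
  }
}`

async function checkSubgraphHealth(): Promise<void> {
  const response = await fetch(STATUS_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: statusQuery }),
  })
  const { data } = await response.json()
  const status = data.indexingStatusForCurrentVersion
  const chain = status.chains[0]

  // Écart entre la tête de chaîne et le dernier bloc indexé par le subgraph
  const lag = Number(chain.chainHeadBlock.number) - Number(chain.latestBlock.number)
  console.log(`health: ${status.health}, synced: ${status.synced}, retard: ${lag} blocs`)

  if (status.health === 'failed' && status.fatalError) {
    console.error('Erreur fatale :', status.fatalError.message)
  }
}

checkSubgraphHealth()
```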
-## Subgraph Studio subgraph archive policy +## Politique d'archivage des subgraphs de Subgraph Studio -A subgraph version in Studio is archived if and only if it meets the following criteria: +Une version de subgraph dans Studio est archivée si et seulement si elle répond aux critères suivants : -- The version is not published to the network (or pending publish) -- The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- La version n'est pas publiée sur le réseau (ou en attente de publication) +- La version a été créée il y a 45 jours ou plus +- Le subgraph n'a pas été interrogé depuis 30 jours -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +De plus, lorsqu'une nouvelle version est déployée, si le subgraph n'a pas été publié, la version N-2 du subgraph est archivée. -Every subgraph affected with this policy has an option to bring the version in question back. +Chaque subgraph concerné par cette politique dispose d'une option de restauration de la version en question. -## Checking subgraph health +## Vérification de l'état des subgraphs -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +Si un subgraph se synchronise avec succès, c'est un bon signe qu'il continuera à bien fonctionner pour toujours. Cependant, de nouveaux déclencheurs sur le réseau peuvent amener votre subgraph à rencontrer une condition d'erreur non testée ou il peut commencer à prendre du retard en raison de problèmes de performances ou de problèmes avec les opérateurs de nœuds. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node expose un endpoint GraphQL que vous pouvez interroger pour vérifier l'état de votre subgraph. Sur le service hébergé, il est disponible à l'adresse `https://api.thegraph.com/index-node/graphql`. Sur un nœud local, il est disponible sur le port `8030/graphql` par défaut. Le schéma complet de cet endpoint peut être trouvé [ici](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Voici un exemple de requête qui vérifie l'état de la version actuelle d'un subgraph: ```graphql { @@ -238,4 +237,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +Cela vous donnera le `chainHeadBlock` que vous pouvez comparer avec le `latestBlock` sur votre subgraph pour vérifier s'il est en retard. 
`synced` vous informe si le subgraph a déjà rattrapé la chaîne. `health` peut actuellement prendre les valeurs de `healthy` si aucune erreur ne s'est produite, ou `failed` s'il y a eu une erreur qui a stoppé la progression du subgraph. Dans ce cas, vous pouvez vérifier le champ `fatalError` pour les détails sur cette erreur. diff --git a/website/pages/fr/deploying/subgraph-studio-faqs.mdx b/website/pages/fr/deploying/subgraph-studio-faqs.mdx index ae6da600254d..1ace101654f2 100644 --- a/website/pages/fr/deploying/subgraph-studio-faqs.mdx +++ b/website/pages/fr/deploying/subgraph-studio-faqs.mdx @@ -8,7 +8,7 @@ title: Subgraph Studio FAQ ## 2. Comment créer une clé API ? -To create an API, navigate to Subgraph Studio and connect your wallet. You will be able to click the API keys tab at the top. There, you will be able to create an API key. +Pour créer une API, allez dans Subgraph Studio et connectez votre portefeuille. Vous pourrez cliquer sur l'onglet des clés API en haut. Là, vous pourrez créer une clé API. ## 3. Puis-je créer plusieurs clés API ? @@ -20,12 +20,12 @@ Après avoir créé une clé API, dans la section Sécurité, vous pouvez défin ## Puis-je transférer mon subgraph à un autre propriétaire ? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Oui, les subgraphs qui ont été publiés sur Arbitrum One peuvent être transférés vers un nouveau portefeuille ou un Multisig. Vous pouvez le faire en cliquant sur les trois points à côté du bouton 'Publish' sur la page des détails du subgraph et en sélectionnant 'Transfer ownership'. Notez que vous ne pourrez plus voir ou modifier le subgraph dans Studio une fois qu'il aura été transféré. ## Comment trouver les URL de requête pour les sugraphs si je ne suis pas le développeur du subgraph que je veux utiliser ? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +Vous pouvez trouver l'URL de requête de chaque subgraph dans la section Subgraph Details de Graph Explorer. Lorsque vous cliquez sur le bouton "Query", vous serez dirigé vers un volet où vous pourrez voir l'URL de requête du subgraph qui vous intéresse. Vous pouvez ensuite remplacer le placeholder `` par la clé API que vous souhaitez utiliser dans Subgraph Studio. N'oubliez pas que vous pouvez créer une clé API et interroger n'importe quel subgraph publié sur le réseau, même si vous créez vous-même un subgraph. Ces requêtes via la nouvelle clé API, sont des requêtes payantes comme n'importe quelle autre sur le réseau. diff --git a/website/pages/fr/developing/creating-a-subgraph/advanced.mdx b/website/pages/fr/developing/creating-a-subgraph/advanced.mdx new file mode 100644 index 000000000000..9788771e2158 --- /dev/null +++ b/website/pages/fr/developing/creating-a-subgraph/advanced.mdx @@ -0,0 +1,555 @@ +--- +title: Advance Subgraph Features +--- + +## Aperçu + +Add and implement advanced subgraph features to enhanced your subgraph's built. 
+ +Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: + +| Feature | Name | +| ---------------------------------------------------- | ---------------- | +| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | +| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | + +For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - fullTextSearch + - nonFatalErrors +dataSources: ... +``` + +> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. + +## Séries chronologiques et agrégations + +Les séries chronologiques et les agrégations permettent à votre subgraph de suivre des statistiques telles que le prix moyen journalier, le total des transferts par heure, etc. + +Cette fonctionnalité introduit deux nouveaux types d'entités de subgraph. Les entités de séries chronologiques enregistrent des points de données avec des horodatages. Les entités d'agrégation effectuent des calculs pré-déclarés sur les points de données des séries chronologiques sur une base horaire ou quotidienne, puis stockent les résultats pour un accès facile via GraphQL. + +### Exemple de schéma + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} + +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +### Définition des Séries Chronologiques et des Agrégations + +Timeseries entities are defined with `@entity(timeseries: true)` in schema.graphql. Every timeseries entity must have a unique ID of the int8 type, a timestamp of the Timestamp type, and include data that will be used for calculation by aggregation entities. These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the Aggregation entities. + +Aggregation entities are defined with `@aggregation` in schema.graphql. Every aggregation entity defines the source from which it will gather data (which must be a Timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. + +#### Intervalles d'Agrégation disponibles + +- `hour`: sets the timeseries period every hour, on the hour. +- `day`: sets the timeseries period every day, starting and ending at 00:00. + +#### Fonctions d'Agrégation disponibles + +- `sum`: Total of all values. +- `count`: Number of values. +- `min`: Minimum value. +- `max`: Maximum value. +- `first`: First value in the period. +- `last`: Last value in the period. + +#### Exemple de requête d'Agrégations + +```graphql +{ + stats(interval: "hour", where: { timestamp_gt: 1704085200 }) { + id + timestamp + sum + } +} +``` + +Remarque: + +Pour utiliser les Séries Chronologiques et les Agrégations, un subgraph doit avoir une spec Version ≥1.1.0. 
Notez que cette fonctionnalité pourrait subir des changements significatifs affectant la compatibilité rétroactive. + +[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. + +## Erreurs non fatales + +Les erreurs d'indexation sur les subgraphs déjà synchronisés entraîneront, par défaut, l'échec du subgraph et l'arrêt de la synchronisation. Les subgraphs peuvent également être configurés pour continuer la synchronisation en présence d'erreurs, en ignorant les modifications apportées par le gestionnaire qui a provoqué l'erreur. Cela donne aux auteurs de subgraphs le temps de corriger leurs subgraphs pendant que les requêtes continuent d'être traitées sur le dernier bloc, bien que les résultats puissent être incohérents en raison du bogue à l'origine de l'erreur. Notez que certaines erreurs sont toujours fatales. Pour être non fatale, l'erreur doit être connue pour être déterministe. + +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. + +L'activation des erreurs non fatales nécessite la définition de l'indicateur de fonctionnalité suivant sur le manifeste du subgraph : + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - nonFatalErrors + ... +``` + +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: + +```graphql +foos(first: 100, subgraphError: allow) { + id +} + +_meta { + hasIndexingErrors +} +``` + +If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: + +```graphql +"data": { + "foos": [ + { + "id": "0xdead" + } + ], + "_meta": { + "hasIndexingErrors": true + } +}, +"errors": [ + { + "message": "indexing_error" + } +] +``` + +## File Data Sources de fichiers IPFS/Arweave + +Les sources de données de fichiers sont une nouvelle fonctionnalité de subgraph permettant d'accéder aux données hors chaîne pendant l'indexation de manière robuste et extensible. Les sources de données de fichiers prennent en charge la récupération de fichiers depuis IPFS et Arweave. + +> Cela jette également les bases d’une indexation déterministe des données hors chaîne, ainsi que de l’introduction potentielle de données arbitraires provenant de HTTP. + +### Aperçu + +Plutôt que de récupérer les fichiers "ligne par ligne" pendant l'exécution du gestionnaire, ceci introduit des modèles qui peuvent être générés comme nouvelles sources de données pour un identifiant de fichier donné. Ces nouvelles sources de données récupèrent les fichiers, réessayant en cas d'échec, et exécutant un gestionnaire dédié lorsque le fichier est trouvé. + +This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources. 
+ +> This replaces the existing `ipfs.cat` API + +### Guide de mise à niveau + +#### Update `graph-ts` and `graph-cli` + +File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1 + +#### Ajouter un nouveau type d'entité qui sera mis à jour lorsque des fichiers seront trouvés + +Les sources de données de fichier ne peuvent pas accéder ni mettre à jour les entités basées sur une chaîne, mais doivent mettre à jour les entités spécifiques au fichier. + +Cela peut impliquer de diviser les champs des entités existantes en entités distinctes, liées entre elles. + +Entité combinée d'origine : + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + externalURL: String! + ipfsURI: String! + image: String! + name: String! + description: String! + type: String! + updatedAtTimestamp: BigInt + owner: User! +} +``` + +Nouvelle entité scindée : + +```graphql +type Token @entity { + id: ID! + tokenID: BigInt! + tokenURI: String! + ipfsURI: TokenMetadata + updatedAtTimestamp: BigInt + owner: String! +} + +type TokenMetadata @entity { + id: ID! + image: String! + externalURL: String! + name: String! + description: String! +} +``` + +Si la relation est 1:1 entre l'entité parent et l'entité de source de données de fichier résultante, le modèle le plus simple consiste à lier l'entité parent à une entité de fichier résultante en utilisant le CID IPFS comme recherche. Contactez Discord si vous rencontrez des difficultés pour modéliser vos nouvelles entités basées sur des fichiers ! + +> You can use [nested filters](/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities. + +#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave` + +Il s'agit de la source de données qui sera générée lorsqu'un fichier d'intérêt est identifié. + +```yaml +templates: + - name: TokenMetadata + kind: file/ipfs + mapping: + apiVersion: 0.0.7 + language: wasm/assemblyscript + file: ./src/mapping.ts + handler: handleMetadata + entities: + - TokenMetadata + abis: + - name: Token + file: ./abis/Token.json +``` + +> Currently `abis` are required, though it is not possible to call contracts from within file data sources + +The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details. + +#### Créer un nouveau gestionnaire pour traiter les fichiers + +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). 
+ +The CID of the file as a readable string can be accessed via the `dataSource` as follows: + +```typescript +const cid = dataSource.stringParam() +``` + +Exemple de gestionnaire : + +```typescript +import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' +import { TokenMetadata } from '../generated/schema' + +export function handleMetadata(content: Bytes): void { + let tokenMetadata = new TokenMetadata(dataSource.stringParam()) + const value = json.fromBytes(content).toObject() + if (value) { + const image = value.get('image') + const name = value.get('name') + const description = value.get('description') + const externalURL = value.get('external_url') + + if (name && image && description && externalURL) { + tokenMetadata.name = name.toString() + tokenMetadata.image = image.toString() + tokenMetadata.externalURL = externalURL.toString() + tokenMetadata.description = description.toString() + } + + tokenMetadata.save() + } +} +``` + +#### Générer des sources de données de fichiers si nécessaire + +Vous pouvez désormais créer des sources de données de fichiers lors de l'exécution de gestionnaires basés sur une chaîne : + +- Import the template from the auto-generated `templates` +- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave + +For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). + +For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing). + +L'exemple: + +```typescript +import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' + +const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' +//Cet exemple de code concerne un sous-graphe de Crypto coven. Le hachage ipfs ci-dessus est un répertoire contenant les métadonnées des jetons pour toutes les NFT de l'alliance cryptographique. + +export function handleTransfer(event: TransferEvent): void { + let token = Token.load(event.params.tokenId.toString()) + if (!token) { + token = new Token(event.params.tokenId.toString()) + token.tokenID = event.params.tokenId + + token.tokenURI = '/' + event.params.tokenId.toString() + '.json' + const tokenIpfsHash = ipfshash + token.tokenURI + //Ceci crée un chemin vers les métadonnées pour un seul Crypto coven NFT. Il concatène le répertoire avec "/" + nom de fichier + ".json" + + token.ipfsURI = tokenIpfsHash + + TokenMetadataTemplate.create(tokenIpfsHash) + } + + token.updatedAtTimestamp = event.block.timestamp + token.owner = event.params.to.toHexString() + token.save() +} +``` + +Cela créera une nouvelle source de données de fichier, qui interrogera le point d'extrémité IPFS ou Arweave configuré du nœud de graphique, en réessayant si elle n'est pas trouvée. Lorsque le fichier est trouvé, le gestionnaire de la source de données de fichier est exécuté. 
+ +This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. + +> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file + +Félicitations, vous utilisez des sources de données de fichiers ! + +#### Déployer vos subgraphs + +You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. + +#### Limitations + +Les entités et les gestionnaires de sources de données de fichiers sont isolés des autres entités du subgraph, ce qui garantit que leur exécution est déterministe et qu'il n'y a pas de contamination des sources de données basées sur des chaînes. Pour être plus précis : + +- Les entités créées par les sources de données de fichiers sont immuables et ne peuvent pas être mises à jour +- Les gestionnaires de sources de données de fichiers ne peuvent pas accéder à des entités provenant d'autres sources de données de fichiers +- Les entités associées aux sources de données de fichiers ne sont pas accessibles aux gestionnaires basés sur des chaînes + +> Cette contrainte ne devrait pas poser de problème pour la plupart des cas d'utilisation, mais elle peut en compliquer certains. N'hésitez pas à nous contacter via Discord si vous rencontrez des problèmes pour modéliser vos données basées sur des fichiers dans un subgraph ! + +En outre, il n'est pas possible de créer des sources de données à partir d'une source de données de fichier, qu'il s'agisse d'une source de données onchain ou d'une autre source de données de fichier. Cette restriction pourrait être levée à l'avenir. + +#### Meilleures pratiques + +Si vous liez des métadonnées NFT aux jetons correspondants, utilisez le hachage IPFS des métadonnées pour référencer une entité Metadata à partir de l'entité Token. Enregistrez l'entité Metadata en utilisant le hachage IPFS comme identifiant. + +You can use [DataSource context](/developing/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. + +If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity. + +> Nous travaillons à l'amélioration de la recommandation ci-dessus, afin que les requêtes ne renvoient que la version "la plus récente" + +#### Problèmes connus + +File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI. + +Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file. 
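Pour illustrer les bonnes pratiques ci-dessus (transmission d'informations via le `DataSource context` et identifiants combinant le hachage IPFS et l'ID de l'entité), voici une esquisse hypothétique en AssemblyScript. Elle suppose un template de fichier `TokenMetadata` généré, comme dans l'exemple précédent, et les noms de fonctions sont purement illustratifs.

```typescript
import { Bytes, DataSourceContext, dataSource } from '@graphprotocol/graph-ts'
import { TokenMetadata } from '../generated/schema'
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

// Côté gestionnaire on-chain : transmettre l'identifiant du token au futur gestionnaire de fichier.
export function createMetadataWithContext(tokenId: string, ipfsHash: string): void {
  let context = new DataSourceContext()
  context.setString('tokenId', tokenId)
  TokenMetadataTemplate.createWithContext(ipfsHash, context)
}

// Côté gestionnaire de fichier : relire l'information transmise via le contexte.
export function handleMetadataWithContext(content: Bytes): void {
  let context = dataSource.context()
  let tokenId = context.getString('tokenId')

  // Identifiant combinant le CID du fichier et le tokenId transmis par le contexte,
  // utile lorsque les métadonnées sont rafraîchies plusieurs fois.
  let tokenMetadata = new TokenMetadata(dataSource.stringParam() + '-' + tokenId)
  // ... renseigner les champs à partir de `content`, comme dans l'exemple handleMetadata ci-dessus
  tokenMetadata.save()
}
```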
+ +#### Exemples + +[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) + +#### Les Références + +[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) + +## Filtres d'Arguments indexés / Filtres de Topics + +> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` + +Les filtres de topics, également connus sous le nom de filtres d'arguments indexés, sont une fonctionnalité puissante dans les subgraphs qui permettent aux utilisateurs de filtrer précisément les événements de la blockchain en fonction des valeurs de leurs arguments indexés. + +- Ces filtres aident à isoler des événements spécifiques intéressants parmi le vaste flux d'événements sur la blockchain, permettant aux subgraphs de fonctionner plus efficacement en se concentrant uniquement sur les données pertinentes. + +- Ceci est utile pour créer des subgraphs personnels qui suivent des adresses spécifiques et leurs interactions avec divers contrats intelligents sur la blockchain. + +### Comment fonctionnent les filtres de Topics + +Lorsqu'un contrat intelligent émet un événement, tous les arguments marqués comme indexés peuvent être utilisés comme filtres dans le manifeste d'un subgraph. Ceci permet au subgraph d'écouter de façon sélective les événements qui correspondent à ces arguments indexés. + +- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. + +```solidity +// SPDX-License-Identifier: MIT +pragma solidity ^0.8.0; + +contract Token { + // Déclaration de l'événement avec des paramètres indexés pour les adresses + event Transfer(address indexed from, address indexed to, uint256 value); + + // Fonction pour simuler le transfert de tokens + function transfer(address to, uint256 value) public { + // Emission de l'événement Transfer avec from, to, et value + emit Transfer(msg.sender, to, value); + } +} +``` + +Dans cet exemple: + +- The `Transfer` event is used to log transactions of tokens between addresses. +- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. +- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. + +#### Configuration dans les subgraphs + +Les filtres de topics sont définis directement dans la configuration du gestionnaire d'évènement situé dans le manifeste du subgraph. Voici comment ils sont configurés : + +```yaml +eventHandlers: + - event: SomeEvent(indexed uint256, indexed address, indexed uint256) + handler: handleSomeEvent + topic1: ['0xValue1', '0xValue2'] + topic2: ['0xAddress1', '0xAddress2'] + topic3: ['0xValue3'] +``` + +Dans cette configuration : + +- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. +- Chaque topic peut avoir une ou plusieurs valeurs, et un événement n'est traité que s'il correspond à l'une des valeurs de chaque rubrique spécifiée. + +#### Logique des Filtres + +- Au sein d'une même Topic: La logique fonctionne comme une condition OR. L'événement sera traité s'il correspond à l'une des valeurs listées dans une rubrique donnée. +- Entre différents Topics: La logique fonctionne comme une condition AND. 
Un événement doit satisfaire toutes les conditions spécifiées à travers les différents topics pour déclencher le gestionnaire associé. + +#### Exemple 1 : Suivi des transferts directs de l'adresse A à l'adresse B + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleDirectedTransfer + topic1: ['0xAddressA'] # Sender Address + topic2: ['0xAddressB'] # Receiver Address +``` + +Dans cette configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. +- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. + +#### Exemple 2 : Suivi des transactions dans les deux sens entre deux ou plusieurs adresses + +```yaml +eventHandlers: + - event: Transfer(indexed address,indexed address,uint256) + handler: handleTransferToOrFrom + topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address + topic2: ['0xAddressB', '0xAddressC'] # Receiver Address +``` + +Dans cette configuration: + +- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. +- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. +- Le subgraph indexera les transactions qui se produisent dans les deux sens entre plusieurs adresses, permettant une surveillance complète des interactions impliquant toutes les adresses. + +## Déclaration eth_call + +> Remarque : Il s'agit d'une fonctionnalité expérimentale qui n'est pas encore disponible dans une version stable de Graph Node. Vous ne pouvez l'utiliser que dans Subgraph Studio ou sur votre nœud auto-hébergé. + +Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. + +Cette fonctionnalité permet de : + +- Améliorer de manière significative les performances de la récupération des données de la blockchain Ethereum en réduisant le temps total pour plusieurs appels et en optimisant l'efficacité globale du subgraph. +- Permet une récupération plus rapide des données, entraînant des réponses de requête plus rapides et une meilleure expérience utilisateur. +- Réduire les temps d'attente pour les applications qui doivent réunir des données de plusieurs appels Ethereum, rendant le processus de récupération des données plus efficace. + +### Concepts clés + +- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. +- Exécution en parallèle : Au lieu d'attendre la fin d'un appel avant de commencer le suivant, plusieurs appels peuvent être initiés simultanément. +- Efficacité temporelle : Le temps total nécessaire pour tous les appels passe de la somme des temps d'appel individuels (séquentiels) au temps pris par l'appel le plus long (parallèle). + +#### Scenario without Declarative `eth_calls` + +Imaginez que vous ayez un subgraph qui doit effectuer trois appels Ethereum pour récupérer des données sur les transactions, le solde et les avoirs en jetons d'un utilisateur. + +Traditionnellement, ces appels pourraient être effectués de manière séquentielle : + +1. Appel 1 (Transactions) : Prend 3 secondes +2. Appel 2 (Solde) : Prend 2 secondes +3. 
Appel 3 (Avoirs en jetons) : Prend 4 secondes + +Temps total pris = 3 + 2 + 4 = 9 secondes + +#### Scenario with Declarative `eth_calls` + +Avec cette fonctionnalité, vous pouvez déclarer que ces appels soient exécutés en parallèle : + +1. Appel 1 (Transactions) : Prend 3 secondes +2. Appel 2 (Solde) : Prend 2 secondes +3. Appel 3 (Avoirs en jetons) : Prend 4 secondes + +Puisque ces appels sont exécutés en parallèle, le temps total pris est égal au temps pris par l'appel le plus long. + +Temps total pris = max (3, 2, 4) = 4 secondes + +#### Comment ça marche + +1. Définition déclarative : Dans le manifeste du subgraph, vous déclarez les appels Ethereum d'une manière indiquant qu'ils peuvent être exécutés en parallèle. +2. Moteur d'exécution parallèle : Le moteur d'exécution de Graph Node reconnaît ces déclarations et exécute les appels simultanément. +3. Agrégation des résultats : Une fois que tous les appels sont terminés, les résultats sont réunis et utilisés par le subgraph pour un traitement ultérieur. + +#### Exemple de configuration dans le manifeste du subgraph + +Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. + +`Subgraph.yaml` using `event.address`: + +```yaml +eventHandlers: +event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24) +handler: handleSwap +calls: + global0X128: Pool[event.address].feeGrowthGlobal0X128() + global1X128: Pool[event.address].feeGrowthGlobal1X128() +``` + +Détails pour l'exemple ci-dessus : + +- `global0X128` is the declared `eth_call`. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` +- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. + +`Subgraph.yaml` using `event.params` + +```yaml +calls: + - ERC20DecimalsToken0: ERC20[event.params.token0].decimals() +``` + +### Greffe sur des subgraphs existants + +> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). + +When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. + +A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: + +```yaml +description: ... +graft: + base: Qm... # Subgraph ID of base subgraph + block: 7345624 # Block number +``` + +When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. 
Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. + +Étant donné que le greffage copie plutôt que l'indexation des données de base, il est beaucoup plus rapide d'amener le susgraph dans le bloc souhaité que l'indexation à partir de zéro, bien que la copie initiale des données puisse encore prendre plusieurs heures pour de très gros subgraphs. Pendant l'initialisation du subgraph greffé, le nœud graphique enregistrera des informations sur les types d'entités qui ont déjà été copiés. + +Le subgraph greffé peut utiliser un schéma GraphQL qui n'est pas identique à celui du subgraph de base, mais simplement compatible avec celui-ci. Il doit s'agir d'un schéma de subgraph valide à part entière, mais il peut s'écarter du schéma du subgraph de base des manières suivantes : + +- Il ajoute ou supprime des types d'entités +- Il supprime les attributs des types d'entités +- Il ajoute des attributs nullables aux types d'entités +- Il transforme les attributs non nullables en attributs nullables +- Il ajoute des valeurs aux énumérations +- Il ajoute ou supprime des interfaces +- Cela change pour quels types d'entités une interface est implémentée + +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. diff --git a/website/pages/fr/developing/creating-a-subgraph/assemblyscript-mappings.mdx b/website/pages/fr/developing/creating-a-subgraph/assemblyscript-mappings.mdx new file mode 100644 index 000000000000..22d69907dbd2 --- /dev/null +++ b/website/pages/fr/developing/creating-a-subgraph/assemblyscript-mappings.mdx @@ -0,0 +1,113 @@ +--- +title: Writing AssemblyScript Mappings +--- + +## Aperçu + +The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. + +## Écriture de mappages + +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
+ +In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: + +```javascript +import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' +import { Gravatar } from '../generated/schema' + +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let id = event.params.id + let gravatar = Gravatar.load(id) + if (gravatar == null) { + gravatar = new Gravatar(id) + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. + +The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`. + +### ID recommandés pour la création de nouvelles entités + +It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities. + +- `transfer.id = event.transaction.hash` + +- `let id = event.transaction.hash.concatI32(event.logIndex.toI32())` + +- For entities that store aggregated data, for e.g, daily trade volumes, the `id` usually contains the day number. Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like + +```typescript +let dayID = event.block.timestamp.toI32() / 86400 +let id = Bytes.fromI32(dayID) +``` + +- Convert constant addresses to `Bytes`. + +`const id = Bytes.fromHexString('0xdead...beef')` + +There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`. + +### Traitement des entités ayant des identifiants identiques + +Lors de la création et de l'enregistrement d'une nouvelle entité, si une entité avec le même ID existe déjà, les propriétés de la nouvelle entité sont toujours préférées lors du processus de fusion. Cela signifie que l'entité existante sera mise à jour avec les valeurs de la nouvelle entité. + +Si une valeur nulle est intentionnellement définie pour un champ de la nouvelle entité avec le même ID, l'entité existante sera mise à jour avec la valeur nulle. + +Si aucune valeur n'est définie pour un champ de la nouvelle entité avec le même ID, le champ aura également la valeur null. + +## Génération de code + +Afin de faciliter et de sécuriser le travail avec les contrats intelligents, les événements et les entités, la CLI Graph peut générer des types AssemblyScript à partir du schéma GraphQL du subgraph et des ABI de contrat inclus dans les sources de données. 
+ +Cela se fait avec + +```sh +graph codegen [--output-dir ] [] +``` + +but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: + +```sh +# Yarn +yarn codegen + +# NPM +npm run codegen +``` + +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. + +```javascript +import { + // La classe de contrat : + Gravity, + // Les classes d'événements : + NewGravatar, + UpdatedGravatar, +} from '../generated/Gravity/Gravity' +``` + +In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with + +```javascript +import { Gravatar } from '../generated/schema' +``` + +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. + +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/pages/fr/developing/creating-a-subgraph/install-the-cli.mdx b/website/pages/fr/developing/creating-a-subgraph/install-the-cli.mdx new file mode 100644 index 000000000000..d5f137f2260f --- /dev/null +++ b/website/pages/fr/developing/creating-a-subgraph/install-the-cli.mdx @@ -0,0 +1,119 @@ +--- +title: Installation du Graph CLI +--- + +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/network/curating/). + +## Aperçu + +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/creating-a-subgraph/subgraph-manifest/) and compiles the [mappings](/creating-a-subgraph/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. + +## Démarrage + +### Installation du Graph CLI + +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
+
+Sur votre machine locale, exécutez l'une des commandes suivantes :
+
+#### Using [npm](https://www.npmjs.com/)
+
+```bash
+npm install -g @graphprotocol/graph-cli@latest
+```
+
+#### Using [yarn](https://yarnpkg.com/)
+
+```bash
+yarn global add @graphprotocol/graph-cli
+```
+
+The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started.
+
+## Créer un subgraph
+
+### À partir d'un contrat existant
+
+La commande suivante crée un subgraph qui indexe tous les événements d'un contrat existant :
+
+```sh
+graph init \
+  --product subgraph-studio \
+  --from-contract <CONTRACT_ADDRESS> \
+  [--network <ETHEREUM_NETWORK>] \
+  [--abi <FILE>] \
+  <SUBGRAPH_SLUG> [<DIRECTORY>]
+```
+
+- La commande tente de récupérer l'ABI du contrat depuis Etherscan.
+
+  - Graph CLI repose sur un endpoint RPC public. Bien que des échecs occasionnels soient attendus, les réessais résolvent généralement ce problème. Si les échecs persistent, envisagez d'utiliser un ABI local.
+
+- Si certains arguments optionnels manquent, il vous guide à travers un formulaire interactif.
+
+- The `<SUBGRAPH_SLUG>` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page.
+
+### À partir d'un exemple de subgraph
+
+La commande suivante initialise un nouveau projet à partir d'un exemple de subgraph :
+
+```sh
+graph init --from-example=example-subgraph
+```
+
+- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+
+- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
+
+### Add New `dataSources` to an Existing Subgraph
+
+`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+
+Recent versions of the Graph CLI support adding new `dataSources` to an existing subgraph through the `graph add` command:
+
+```sh
+graph add <address> [<subgraph-manifest default: "./subgraph.yaml">]
+
+Options:
+
+  --abi <path>          Path to the contract ABI (default: download from Etherscan)
+  --contract-name       Name of the contract (default: Contract)
+  --merge-entities      Whether to merge entities with the same name (default: false)
+  --network-file <path> Networks config file path (default: "./networks.json")
+```
+
+#### Spécificités
+
+The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and create a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts.
+
+- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts:
+
+  - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`.
+
+  - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`.
+
+- The contract `address` will be written to the `networks.json` for the relevant network.
+
+> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`.
+
+### Récupération des ABIs
+
+Le(s) fichier(s) ABI doivent correspondre à votre(vos) contrat(s). Il existe plusieurs façons d'obtenir des fichiers ABI :
+
+- Si vous construisez votre propre projet, vous aurez probablement accès à vos ABI les plus récents.
+- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
+- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail.
+
+## Versions disponibles de SpecVersion
+
+| Version | Notes de version |
+| :-: | --- |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Ajout de la prise en charge des gestionnaires d'événement ayant accès aux reçus de transactions. |
+| 0.0.4 | Ajout de la prise en charge du management des fonctionnalités de subgraph. |
diff --git a/website/pages/fr/developing/creating-a-subgraph/ql-schema.mdx b/website/pages/fr/developing/creating-a-subgraph/ql-schema.mdx
new file mode 100644
index 000000000000..b0c91ff4665f
--- /dev/null
+++ b/website/pages/fr/developing/creating-a-subgraph/ql-schema.mdx
@@ -0,0 +1,312 @@
+---
+title: The Graph QL Schema
+---
+
+## Aperçu
+
+The schema for your subgraph is in the file `schema.graphql`.
GraphQL schemas are defined using the GraphQL interface definition language. + +> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/querying/graphql-api/) section. + +### Définition des entités + +Avant de définir des entités, il est important de prendre du recul et de réfléchir à la manière dont vos données sont structurées et liées. + +- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- Il peut être utile d'imaginer les entités comme des "objets contenant des données", plutôt que comme des événements ou des fonctions. +- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. +- Each type that should be an entity is required to be annotated with an `@entity` directive. +- Par défaut, les entités sont mutables, ce qui signifie que les mappages peuvent charger des entités existantes, les modifier et stocker une nouvelle version de cette entité. + - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. + - Si des changements se produisent dans le même bloc où l'entité a été créée, alors les mappages peuvent effectuer des changements sur les entités immuables. Les entités immuables sont beaucoup plus rapides à écrire et à interroger, donc elles devraient être utilisées chaque fois que c'est possible. + +#### Bon exemple + +The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. + +```graphql +type Gravatar @entity(immutable: true) { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String + accepted: Boolean +} +``` + +#### Mauvais exemple + +The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. + +```graphql +type GravatarAccepted @entity { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String +} + +type GravatarDeclined @entity { + id: Bytes! + owner: Bytes + displayName: String + imageUrl: String +} +``` + +#### Champs facultatifs et obligatoires + +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error: + +``` +Null value resolved for non-null field 'name' +``` + +Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` id's will be faster to write and query as those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`. 
+ +For some entity types the `id` for `Bytes!` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id) ` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`. + +### Types scalaires intégrés + +#### Scalaires pris en charge par GraphQL + +Les scalaires suivants sont supportés dans l'API GraphQL : + +| Type | Description | +| --- | --- | +| `Bytes` | Tableau d'octets, représenté sous forme de chaîne hexadécimale. Couramment utilisé pour les hachages et adresses Ethereum. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | + +### Enums + +Vous pouvez également créer des énumérations dans un schéma. Les énumérations ont la syntaxe suivante : + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field: + +More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). + +### Relations entre entités + +Une entité peut avoir une relation avec une ou plusieurs autres entités de votre schéma. Ces relations pourront être parcourues dans vos requêtes. Les relations dans The Graph sont unidirectionnelles. Il est possible de simuler des relations bidirectionnelles en définissant une relation unidirectionnelle à chaque « extrémité » de la relation. + +Les relations sont définies sur les entités comme n'importe quel autre champ sauf que le type spécifié est celui d'une autre entité. + +#### Relations individuelles + +Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: + +```graphql +type Transaction @entity(immutable: true) { + id: Bytes! + transactionReceipt: TransactionReceipt +} + +type TransactionReceipt @entity(immutable: true) { + id: Bytes! 
+ transaction: Transaction +} +``` + +#### Relations un-à-plusieurs + +Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +### Recherches inversées + +Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. + +Pour les relations un-à-plusieurs, la relation doit toujours être stockée du côté « un » et le côté « plusieurs » doit toujours être dérivé. Stocker la relation de cette façon, plutôt que de stocker un tableau d'entités du côté « plusieurs », entraînera des performances considérablement meilleures pour l'indexation et l'interrogation du sous-graphe. En général, le stockage de tableaux d’entités doit être évité autant que possible. + +#### Exemple + +We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! + tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +#### Relations plusieurs-à-plusieurs + +Pour les relations plusieurs-à-plusieurs, telles que les utilisateurs pouvant appartenir à un nombre quelconque d'organisations, la manière la plus simple, mais généralement pas la plus performante, de modéliser la relation consiste à créer un tableau dans chacune des deux entités impliquées. Si la relation est symétrique, un seul côté de la relation doit être stocké et l’autre côté peut être dérivé. + +#### Exemple + +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [User!]! +} + +type User @entity { + id: Bytes! + name: String! + organizations: [Organization!]! @derivedFrom(field: "members") +} +``` + +A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like + +```graphql +type Organization @entity { + id: Bytes! + name: String! + members: [UserOrganization!]! @derivedFrom(field: "organization") +} + +type User @entity { + id: Bytes! + name: String! + organizations: [UserOrganization!] @derivedFrom(field: "user") +} + +type UserOrganization @entity { + id: Bytes! # Set to `user.id.concat(organization.id)` + user: User! + organization: Organization! 
+} +``` + +Cette approche nécessite que les requêtes descendent vers un niveau supplémentaire pour récupérer, par exemple, les organisations des utilisateurs : + +```graphql +query usersWithOrganizations { + users { + organizations { + # ceci est une entité UserOrganization + organization { + name + } + } + } +} +``` + +Cette manière plus élaborée de stocker des relations plusieurs-à-plusieurs entraînera moins de données stockées pour le subgraph, et donc vers un subgraph qui est souvent considérablement plus rapide à indexer et à interroger. + +### Ajouter des commentaires au schéma + +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: + +```graphql +type MyFirstEntity @entity { + # identifiant unique et clé primaire de l'entité + id: Bytes! + address: Bytes! +} +``` + +## Définir les champs de recherche en texte intégral + +Les requêtes de recherche en texte intégral filtrent et classent les entités en fonction d'une entrée de recherche de texte. Les requêtes en texte intégral sont capables de renvoyer des correspondances pour des mots similaires en traitant le texte de la requête saisi en radicaux avant de les comparer aux données textuelles indexées. + +Une définition de requête en texte intégrale inclut le nom de la requête, le dictionnaire de langue utilisé pour traiter les champs de texte, l'algorithme de classement utilisé pour classer les résultats et les champs inclus dans la recherche. Chaque requête en texte intégral peut s'étendre sur plusieurs champs, mais tous les champs inclus doivent provenir d'un seul type d'entité. + +To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. + +```graphql +type _Schema_ + @fulltext( + name: "bandSearch" + language: en + algorithm: rank + include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }] + ) + +type Band @entity { + id: Bytes! + name: String! + description: String! + bio: String + wallet: Address + labels: [Label!]! + discography: [Album!]! + members: [Musician!]! +} +``` + +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/querying/graphql-api#queries) for a description of the fulltext search API and more example usage. + +```graphql +query { + bandSearch(text: "breaks & electro & detroit") { + id + name + description + wallet + } +} +``` + +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. + +## Langues prises en charge + +Le choix d'une langue différente aura un effet définitif, bien que parfois subtil, sur l'API de recherche en texte intégral. Les champs couverts par un champ de requête en texte intégral sont examinés dans le contexte de la langue choisie, de sorte que les lexèmes produits par les requêtes d'analyse et de recherche varient d'une langue à l'autre. Par exemple : lorsque vous utilisez le dictionnaire turc pris en charge, "token" est dérivé de "toke", tandis que, bien sûr, le dictionnaire anglais le dérivera de "token". 
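+
+À titre d'illustration, voici une esquisse hypothétique (reprenant l'entité `Band` de l'exemple ci-dessus) de la même directive fulltext, configurée cette fois avec le dictionnaire français :
+
+```graphql
+type _Schema_
+  @fulltext(
+    name: "bandSearch"
+    language: fr
+    algorithm: rank
+    include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }]
+  )
+```
+
+Les lexèmes produits lors de l'analyse et de la recherche suivront alors les règles de radicalisation du français.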
+ +Dictionnaires de langues pris en charge : + +| Code | Dictionnaire | +| ------ | ------------ | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portugais | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | + +### Algorithmes de classement + +Algorithmes de classement: + +| Algorithm | Description | +| --- | --- | +| rank | Utilisez la qualité de correspondance (0-1) de la requête en texte intégral pour trier les résultats. | +| proximitéRang | Similar to rank but also includes the proximity of the matches. | diff --git a/website/pages/fr/developing/creating-a-subgraph/starting-your-subgraph.mdx b/website/pages/fr/developing/creating-a-subgraph/starting-your-subgraph.mdx new file mode 100644 index 000000000000..423512bd80bc --- /dev/null +++ b/website/pages/fr/developing/creating-a-subgraph/starting-your-subgraph.mdx @@ -0,0 +1,21 @@ +--- +title: Starting Your Subgraph +--- + +## Aperçu + +The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. + +When you create a [subgraph](/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. + +Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. + +### Start Building + +Start the process and build a subgraph that matches your needs: + +1. [Install the CLI](/developing/creating-a-subgraph/install-the-cli/) - Set up your infrastructure +2. [Subgraph Manifest](/developing/creating-a-subgraph/subgraph-manifest/) - Understand a subgraph's key component +3. [The Graph Ql Schema](/developing/creating-a-subgraph/ql-schema/) - Write your schema +4. [Writing AssemblyScript Mappings](/developing/creating-a-subgraph/assemblyscript-mappings/) - Write your mappings +5. [Advanced Features](/developing/creating-a-subgraph/advanced/) - Customize your subgraph with advanced features diff --git a/website/pages/fr/developing/creating-a-subgraph/subgraph-manifest.mdx b/website/pages/fr/developing/creating-a-subgraph/subgraph-manifest.mdx new file mode 100644 index 000000000000..a7c76c52d491 --- /dev/null +++ b/website/pages/fr/developing/creating-a-subgraph/subgraph-manifest.mdx @@ -0,0 +1,534 @@ +--- +title: Subgraph Manifest +--- + +## Aperçu + +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. + +The **subgraph definition** consists of the following files: + +- `subgraph.yaml`: Contains the subgraph manifest + +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL + +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +### Subgraph Capabilities + +Un seul subgraph peut : + +- Indexer les données de plusieurs contrats intelligents (mais pas de plusieurs réseaux). + +- Indexer des données de fichiers IPFS en utilisant des File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. 
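+
+To illustrate the last point, below is a trimmed-down sketch of a `dataSources` array indexing two contracts on the same network; the contract names and addresses are placeholders and the `mapping` sections are elided:
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: ContractA # placeholder name
+    network: mainnet
+    source:
+      address: '0x0000000000000000000000000000000000000001' # placeholder address
+      abi: ContractA
+    mapping:
+      # ... entities, abis and eventHandlers for ContractA ...
+  - kind: ethereum/contract
+    name: ContractB # a second contract indexed by the same subgraph
+    network: mainnet
+    source:
+      address: '0x0000000000000000000000000000000000000002' # placeholder address
+      abi: ContractB
+    mapping:
+      # ... entities, abis and eventHandlers for ContractB ...
+```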
+
+The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+
+For the example subgraph listed above, `subgraph.yaml` is:
+
+```yaml
+specVersion: 0.0.4
+description: Gravatar pour Ethereum
+repository: https://github.com/graphprotocol/graph-tooling
+schema:
+  file: ./schema.graphql
+indexerHints:
+  prune: auto
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    source:
+      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
+      abi: Gravity
+      startBlock: 6175244
+      endBlock: 7175245
+    context:
+      foo:
+        type: Bool
+        data: true
+      bar:
+        type: String
+        data: 'bar'
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      entities:
+        - Gravatar
+      abis:
+        - name: Gravity
+          file: ./abis/Gravity.json
+      eventHandlers:
+        - event: NewGravatar(uint256,address,string,string)
+          handler: handleNewGravatar
+        - event: UpdatedGravatar(uint256,address,string,string)
+          handler: handleUpdatedGravatar
+      callHandlers:
+        - function: createGravatar(string,string)
+          handler: handleCreateGravatar
+      blockHandlers:
+        - handler: handleBlock
+        - handler: handleBlockWithCall
+          filter:
+            kind: call
+      file: ./src/mapping.ts
+```
+
+## Subgraph Entries
+
+> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/developing/creating-a-subgraph/ql-schema/).
+
+Les entrées importantes à mettre à jour pour le manifeste sont :
+
+- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases.
+
+- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio.
+
+- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer.
+
+- `features`: a list of all used [feature](#experimental-features) names.
+
+- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+
+- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
+
+- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.
+
+- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`.
+
+- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development.
+
+- `dataSources.mapping.entities`: the entities that the data source writes to the store.
The schema for each entity is defined in the schema.graphql file. + +- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. + +- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. + +- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. + +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. + +A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. + +## Gestionnaires d'événements + +Les gestionnaires d'événements dans un subgraph réagissent à des événements spécifiques émis par des contrats intelligents sur la blockchain et déclenchent des gestionnaires définis dans le manifeste du subgraph. Ceci permet aux subgraphs de traiter et de stocker les données des événements selon une logique définie. + +### Définition d'un gestionnaire d'événements + +Un gestionnaire d'événements est déclaré dans une source de données dans la configuration YAML du subgraph. Il spécifie quels événements écouter et la fonction correspondante à exécuter lorsque ces événements sont détectés. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: Approval(address,address,uint256) + handler: handleApproval + - event: Transfer(address,address,uint256) + handler: handleTransfer + topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Filtre de rubrique optionnel qui filtre uniquement les événements avec la rubrique spécifiée. +``` + +## Gestionnaires d'appels + +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. 
+ +Les gestionnaires d'appels ne se déclencheront que dans l'un des deux cas suivants : lorsque la fonction spécifiée est appelée par un compte autre que le contrat lui-même ou lorsqu'elle est marquée comme externe dans Solidity et appelée dans le cadre d'une autre fonction du même contrat. + +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. + +### Définir un gestionnaire d'appels + +To define a call handler in your manifest, simply add a `callHandlers` array under the data source you would like to subscribe to. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar +``` + +The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. + +### Fonction de cartographie + +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: + +```typescript +import { CreateGravatarCall } from '../generated/Gravity/Gravity' +import { Transaction } from '../generated/schema' + +export function handleCreateGravatar(call: CreateGravatarCall): void { + let id = call.transaction.hash + let transaction = new Transaction(id) + transaction.displayName = call.inputs._displayName + transaction.imageUrl = call.inputs._imageUrl + transaction.save() +} +``` + +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. + +## Block Handlers + +En plus de s'abonner à des événements de contrat ou à des appels de fonction, un subgraph peut souhaiter mettre à jour ses données à mesure que de nouveaux blocs sont ajoutés à la chaîne. Pour y parvenir, un subgraph peut exécuter une fonction après chaque bloc ou après des blocs correspondant à un filtre prédéfini. + +### Filtres pris en charge + +#### Filtre d'appel + +```yaml +filter: + kind: call +``` + +_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ + +> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. + +L'absence de filtre pour un gestionnaire de bloc garantira que le gestionnaire est appelé à chaque bloc. 
Une source de données ne peut contenir qu'un seul gestionnaire de bloc pour chaque type de filtre. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCallToContract + filter: + kind: call +``` + +#### Filtre d'interrogation + +> **Requires `specVersion` >= 0.0.8** +> +> **Note:** Polling filters are only available on dataSources of `kind: ethereum`. + +```yaml +blockHandlers: + - handler: handleBlock + filter: + kind: polling + every: 10 +``` + +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. + +#### Le filtre Once + +> **Requires `specVersion` >= 0.0.8** +> +> **Note:** Once filters are only available on dataSources of `kind: ethereum`. + +```yaml +blockHandlers: + - handler: handleOnce + filter: + kind: once +``` + +Le gestionnaire défini avec le filtre once ne sera appelé qu'une seule fois avant l'exécution de tous les autres gestionnaires. Cette configuration permet au subgraph d'utiliser le gestionnaire comme gestionnaire d'initialisation, effectuant des tâches spécifiques au début de l'indexation. + +```ts +export function handleOnce(block: ethereum.Block): void { + let data = new InitialData(Bytes.fromUTF8('initial')) + data.data = 'Setup data here' + data.save() +} +``` + +### Fonction de cartographie + +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. + +```typescript +import { ethereum } from '@graphprotocol/graph-ts' + +export function handleBlock(block: ethereum.Block): void { + let id = block.hash + let entity = new Block(id) + entity.save() +} +``` + +## Événements anonymes + +Si vous devez traiter des événements anonymes dans Solidity, cela peut être réalisé en fournissant le sujet 0 de l'événement, comme dans l'exemple : + +```yaml +eventHandlers: + - event: LogNote(bytes4,address,bytes32,bytes32,uint256,bytes) + topic0: '0x644843f351d3fba4abcd60109eaff9f54bac8fb8ccf0bab941009c21df21cf31' + handler: handleGive +``` + +An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. + +## Reçus de transaction dans les gestionnaires d'événements + +Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. + +To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. + +```yaml +eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + receipt: true +``` + +Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead. 
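+
+For illustration, a minimal sketch of reading the receipt inside a handler (a variant of the `handleNewGravatar` handler shown earlier, assuming it is declared with `receipt: true` as above):
+
+```typescript
+import { log } from '@graphprotocol/graph-ts'
+import { NewGravatar } from '../generated/Gravity/Gravity'
+
+export function handleNewGravatar(event: NewGravatar): void {
+  // `event.receipt` is only populated when the handler is declared with
+  // `receipt: true` in the manifest; otherwise it is null.
+  let receipt = event.receipt
+  if (receipt !== null) {
+    // For example, log the gas used and the number of logs emitted alongside this event.
+    log.info('Gas used: {}, logs in receipt: {}', [
+      receipt.gasUsed.toString(),
+      receipt.logs.length.toString(),
+    ])
+  }
+}
+```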
+ +## Ordre de déclenchement des gestionnaires + +Les déclencheurs d'une source de données au sein d'un bloc sont classés à l'aide du processus suivant : + +1. Les déclencheurs d'événements et d'appels sont d'abord classés par index de transaction au sein du bloc. +2. Les déclencheurs d'événements et d'appels au sein d'une même transaction sont classés selon une convention : les déclencheurs d'événements d'abord, puis les déclencheurs d'appel, chaque type respectant l'ordre dans lequel ils sont définis dans le manifeste. +3. Les déclencheurs de bloc sont exécutés après les déclencheurs d'événement et d'appel, dans l'ordre dans lequel ils sont définis dans le manifeste. + +Ces règles de commande sont susceptibles de changer. + +> **Note:** When new [dynamic data source](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. + +## Modèles de sources de données + +Un modèle courant dans les contrats intelligents compatibles EVM est l'utilisation de contrats de registre ou d'usine, dans lesquels un contrat crée, gère ou référence un nombre arbitraire d'autres contrats qui ont chacun leur propre état et leurs propres événements. + +The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. + +### Source de données pour le contrat principal + +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.org) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on-chain by the factory contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: Factory + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - Directory + abis: + - name: Factory + file: ./abis/factory.json + eventHandlers: + - event: NewExchange(address,address) + handler: handleNewExchange +``` + +### Modèles de source de données pour les contrats créés dynamiquement + +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a pre-defined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + # ... other source fields for the main contract ... 
+templates: + - name: Exchange + kind: ethereum/contract + network: mainnet + source: + abi: Exchange + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/exchange.ts + entities: + - Exchange + abis: + - name: Exchange + file: ./abis/exchange.json + eventHandlers: + - event: TokenPurchase(address,uint256,uint256) + handler: handleTokenPurchase + - event: EthPurchase(address,uint256,uint256) + handler: handleEthPurchase + - event: AddLiquidity(address,uint256,uint256) + handler: handleAddLiquidity + - event: RemoveLiquidity(address,uint256,uint256) + handler: handleRemoveLiquidity +``` + +### Instanciation d'un modèle de source de données + +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + // Commence à indexer l'échange ; `event.params.exchange` est le + // adresse du nouveau contrat d'échange + Exchange.create(event.params.exchange) +} +``` + +> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> +> Si les blocs précédents contiennent des données pertinentes pour la nouvelle source de données, il est préférable d'indexer ces données en lisant l'état actuel du contrat et en créant des entités représentant cet état au moment de la création de la nouvelle source de données. + +### Data Source Context + +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + let context = new DataSourceContext() + context.setString('tradingPair', event.params.tradingPair) + Exchange.createWithContext(event.params.exchange, context) +} +``` + +Inside a mapping of the `Exchange` template, the context can then be accessed: + +```typescript +import { dataSource } from '@graphprotocol/graph-ts' + +let context = dataSource.context() +let tradingPair = context.getString('tradingPair') +``` + +There are setters and getters like `setString` and `getString` for all value types. + +## Blocs de démarrage + +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
+ +```yaml +dataSources: + - kind: ethereum/contract + name: ExampleSource + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: ExampleContract + startBlock: 6627917 + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - User + abis: + - name: ExampleContract + file: ./abis/ExampleContract.json + eventHandlers: + - event: NewEvent(address,address) + handler: handleNewEvent +``` + +> **Note:** The contract creation block can be quickly looked up on Etherscan: +> +> 1. Recherchez le contrat en saisissant son adresse dans la barre de recherche. +> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 3. Chargez la page des détails de la transaction où vous trouverez le bloc de départ de ce contrat. + +## Conseils pour l'indexeur + +The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. + +> This feature is available from `specVersion: 1.0.0` + +### Prune + +`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: + +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. Un nombre spécifique : Fixe une limite personnalisée au nombre de blocs historiques à conserver. + +``` + indexerHints: + prune: auto +``` + +> Le terme "historique" dans ce contexte des subgraphs concerne le stockage des données qui reflètent les anciens états des entités mutables. + +L'historique à partir d'un bloc donné est requis pour : + +- [Time travel queries](/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history +- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block +- Rembobiner le subgraph jusqu'à ce bloc + +Si les données historiques à partir du bloc ont été purgées, les capacités ci-dessus ne seront pas disponibles. + +> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. + +For subgraphs leveraging [time travel queries](/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: + +Pour conserver une quantité spécifique de données historiques : + +``` + indexerHints: + prune: 1000 # Replace 1000 with the desired number of blocks to retain +``` + +Préserver l'histoire complète des États de l'entité : + +``` +indexerHints: + prune: never +``` diff --git a/website/pages/fr/developing/developer-faqs.mdx b/website/pages/fr/developing/developer-faqs.mdx index e46bbbcfeb19..5d05fd278df2 100644 --- a/website/pages/fr/developing/developer-faqs.mdx +++ b/website/pages/fr/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: FAQs pour les développeurs --- -## 1. Qu'est-ce qu'un subgraph ? 
+Cette page résume certaines des questions les plus courantes pour les développeurs construisant sur The Graph. -Un subgraph est une API personnalisée construite sur des données de blockchain. Les subgraphs sont interrogés à l'aide du langage de requête GraphQL et sont déployés sur un nœud de graph à l'aide de Graphe CLI . Dès qu'ils sont déployés et publiés sur le réseau décentralisé de The Graph, Les indexeurs traitent les subgraphs et les rendent disponibles pour être interrogés par les consommateurs de subgraphs. +## Relatif aux Subgraphs -## 2. Puis-je supprimer mon subgraph ? +### 1. Qu'est-ce qu'un subgraph ? -Il n'est pas possible de supprimer des subgraphs une fois qu'ils sont créés. +Un subgraph est une API personnalisée construite sur des données blockchain. Les subgraphs sont interrogés en utilisant le langage de requête GraphQL et sont déployés sur Graph Node en utilisant Graph CLI. Une fois déployés et publiés sur le réseau décentralisé de The Graph, les Indexeurs traitent les subgraphs et les rendent disponibles pour que les consommateurs de subgraphs puissent les interroger. -## 3. Puis-je changer le nom de mon subgraph ? +### 2. Quelle est la première étape pour créer un subgraph ? -Non. Une fois qu'un subgraph est créé, son nom ne peut plus être modifié. Assurez-vous d'y réfléchir attentivement avant de créer votre subgraph afin qu'il soit facilement consultable et identifiable par d'autres dapps. +Pour créer avec succès un subgraph, vous devrez installer Graph CLI. Consultez le [guide de démarrage rapide](/quick-start/) pour commencer. Pour des informations détaillées, voir [Création d'un subgraph](/developing/creating-a-subgraph/). -## 4. Puis-je modifier le compte GitHub associé à mon subgraph ? +### 3. Suis-je toujours en mesure de créer un subgraph si mes smart contracts n'ont pas d'événements ? -Non. Dès qu'un subgraph est créé, le compte GitHub associé ne peut pas être modifié. Assurez-vous d'y réfléchir attentivement avant de créer votre subgraph. +Il est fortement recommandé de structurer vos smart contracts pour avoir des événements associés aux données que vous souhaitez interroger. Les gestionnaires d'événements du subgraph sont déclenchés par des événements de contrat et constituent le moyen le plus rapide de récupérer des données utiles. -## 5. Suis-je toujours en mesure de créer un subgraph si mes smart contracts n'ont pas d'événements ? +Si les contrats avec lesquels vous travaillez ne contiennent pas d'événements, votre subgraph peut utiliser des gestionnaires d'appels et de blocs pour déclencher l'indexation. Cependant, ceci n'est pas recommandé, car les performances seront nettement plus lentes. -Il est fortement recommandé de structurer vos smart contracts pour avoir des événements associés aux données que vous souhaitez interroger. Les gestionnaires d'événements du subgraph sont déclenchés par des événements de contrat et constituent le moyen le plus rapide de récupérer des données utiles. +### 4. Puis-je modifier le compte GitHub associé à mon subgraph ? + +Non. Une fois un subgraph créé, le compte GitHub associé ne peut pas être modifié. Veuillez vous assurer de bien prendre en compte ce détail avant de créer votre subgraph. + +### 5. Comment mettre à jour un subgraph sur le mainnet ? -Si les contrats avec lesquels vous travaillez ne contiennent pas d'événements, votre subgraph peut utiliser des gestionnaires d'appels et de blocs pour déclencher l'indexation. Bien que cela ne soit pas recommandé, les performances seront considérablement plus lentes. 
+Vous pouvez déployer une nouvelle version de votre subgraph sur Subgraph Studio en utilisant la CLI. Cette action maintient votre subgraph privé, mais une fois que vous en êtes satisfait, vous pouvez le publier sur Graph Explorer. Cela créera une nouvelle version de votre subgraph sur laquelle les Curateurs pourront commencer à signaler. + +### 6. Est-il possible de dupliquer un subgraph vers un autre compte ou endpoint sans le redéployer ? + +Vous devez redéployer le subgraph, mais si l'ID de subgraph (hachage IPFS) ne change pas, il n'aura pas à se synchroniser depuis le début. -## 6. Est-il possible de déployer un subgraph portant le même nom pour plusieurs réseaux ? +### 7. Comment puis-je appeler une fonction d'un contrat ou accéder à une variable d'état publique depuis mes mappages de subgraph ? -Vous aurez besoin de noms distincts pour plusieurs réseaux. Bien que vous ne puissiez pas avoir différents subgraphs sous le même nom, il existe des moyens pratiques d'avoir une seule base de code pour plusieurs réseaux. Retrouvez plus d'informations à ce sujet dans notre documentation : [Déploiement d'un subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-an-subgraph) +Consultez `Accès à l'état du contrat intelligent` dans la section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Puis-je importer `ethers.js` ou d'autres bibliothèques JS dans mes mappages de subgraph ? + +Actuellement non, car les mappages sont écrits en AssemblyScript. + +Une solution alternative possible serait de stocker des données brutes dans des entités et à effectuer une logique nécessitant des bibliothèques JS sur le client. + +### 9. Lorsqu'on écoute plusieurs contrats, est-il possible de sélectionner l'ordre des contrats pour écouter les événements ? + +Dans un subgraph, les événements sont toujours traités dans l'ordre dans lequel ils apparaissent dans les blocs, que ce soit sur plusieurs contrats ou non. -## 7. En quoi les modèles sont-ils différents des sources de données ? +### 10. En quoi les modèles sont-ils différents des sources de données ? -Les modèles vous permettent de créer des sources de données à la volée, pendant l'indexation de votre subgraph. Il se peut que votre contrat engendre de nouveaux contrats au fur et à mesure que les gens interagissent avec lui, et puisque vous connaissez la forme de ces contrats (ABI, événements, etc.) à l'avance, vous pouvez définir comment vous souhaitez les indexer dans un modèle et lorsqu'ils sont générés, votre subgraph créera une source de données dynamique en fournissant l'adresse du contrat. +Les modèles vous permettent de créer rapidement des sources de données , pendant que votre subgraph est en cours d'indexation. Votre contrat peut générer de nouveaux contrats à mesure que les gens interagissent avec lui. Étant donné que vous connaissez la structure de ces contrats (ABI, événements, etc.) à l'avance, vous pouvez définir comment vous souhaitez les indexer dans un modèle. Lorsqu'ils sont générés, votre subgraph créera une source de données dynamique en fournissant l'adresse du contrat. Consultez la section "Instanciation d'un modèle de source de données" sur : [Modèles de source de données](/developing/creating-a-subgraph#data-source-templates). -## 8. Comment m'assurer que j'utilise la dernière version de graph-node pour mes déploiements locaux ? +### 11. Est-il possible de configurer un subgraph en utilisant `graph init` de `graph-cli` avec deux contrats ? 
Ou devrais-je ajouter manuellement une autre source de données dans `subgraph.yaml` après avoir exécuté `graph init` ? -Vous pouvez exécuter la commande suivante : +Oui. Avec la commande `graph init` elle-même, vous pouvez ajouter plusieurs sources de données en entrant les contrats l'un après l'autre. -```sh -docker pull graphprotocol/graph-node:dernier -``` +Vous pouvez également utiliser la commande `graph add` pour ajouter une nouvelle source de données. -**REMARQUE :** docker / docker-compose utilisera toujours la version de graph-node extraite la première fois que vous l'avez exécuté, il est donc important de le faire pour vous assurer que vous êtes à jour avec la dernière version de graph-node. +### 12. Dans quel ordre les gestionnaires d'événements, de blocs et d'appels sont-ils déclenchés pour une source de données ? -## 9. Comment appeler une fonction de contrat ou accéder à une variable d'état publique à partir de mes mappages de subgraphs ? +Les gestionnaires d'événements et d'appels sont d'abord classés par index de transaction à l'intérieur du bloc. Les gestionnaires d'événements et d'appels au sein d'une même transaction sont ordonnés selon une convention : d'abord les gestionnaires d'événements, puis les gestionnaires d'appels, chaque type respectant l'ordre défini dans le manifeste. Les gestionnaires de blocs sont exécutés après les gestionnaires d'événements et d'appels, dans l'ordre où ils sont définis dans le manifeste. Ces règles d'ordre sont également susceptibles d'être modifiées. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +Lorsque de nouvelles sources de données dynamiques sont créées, les gestionnaires définis pour les sources de données dynamiques ne commenceront à être traités qu'une fois que tous les gestionnaires de sources de données existantes auront été traités, et ils se répéteront dans la même séquence chaque fois qu'ils seront déclenchés. -## 10. Est-il possible de configurer un subgraph en utilisant `graph init` à partir de `graph-cli` avec deux contrats ? Ou dois-je ajouter manuellement une autre source de données dans `subgraph.yaml` après avoir exécuté `graph init` ? +### 13. Comment puis-je m'assurer que j'utilise la dernière version de graph-node pour mes déploiements en local ? -Oui. Dans la commande `graph init` elle-même, vous pouvez ajouter plusieurs sources de données en saisissant les contrats l'un après l'autre. Vous pouvez également utiliser la commande `graph add` pour ajouter une nouvelle source de données. +Vous pouvez exécuter la commande suivante : -## 11. Je souhaite contribuer ou ajouter un problème GitHub. Où puis-je trouver les référentiels open source ? +```sh +docker pull graphprotocol/graph-node:dernier +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [l'outil de graph](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Remarque : docker / docker-compose utilisera toujours la version de graph-node qui a été téléchargée la première fois que vous l'avez exécuté, alors assurez-vous d'être à jour avec la dernière version de graph-node. -## 12. Quelle est la méthode recommandée pour créer des identifiants « générés automatiquement » pour une entité lors du traitement des événements ? +### 14. 
Quelle est la méthode recommandée pour créer des Ids "autogénérés" pour une entité pendant la gestion des événements ? Si une seule entité est créée lors de l'événement et s'il n'y a rien de mieux disponible,alors le hachage de transaction + index de journal serait unique. Vous pouvez les masquer en les convertissant en octets, puis en les redirigeant vers `crypto.keccak256`, mais cela ne le rendra pas plus unique. -## 13. Lorsqu'on écoute plusieurs contrats, est-il possible de sélectionner l'ordre des contrats pour écouter les événements ? +### 15. Puis-je supprimer mon subgraph ? -Dans un subgraph, les événements sont toujours traités dans l'ordre dans lequel ils apparaissent dans les blocs, que ce soit sur plusieurs contrats ou non. +Yes, you can [delete](/managing/delete-a-subgraph/) and [transfer](/managing/transfer-a-subgraph/) your subgraph. + +## Relatif au Réseau + +### 16. Quels réseaux sont supportés par The Graph? + +Vous pouvez trouver la liste des réseaux supportés [ici](/developing/supported-networks). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +### 17. Est-il possible de faire la différence entre les réseaux (mainnet, Sepolia, local) dans les gestionnaires d'événements? Oui. Vous pouvez le faire en important `graph-ts` comme dans l'exemple ci-dessous : @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Prenez-vous en charge les gestionnaires de blocs et d'appels sur Sepolia? -Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. +Oui. Sepolia prend en charge les gestionnaires de blocs, les gestionnaires d'appels et les gestionnaires d'événements. Il convient de noter que les gestionnaires d'événements sont beaucoup plus performants que les deux autres gestionnaires, et ils sont pris en charge sur tous les réseaux compatibles EVM. -## 16. Puis-je importer ethers.js ou d'autres bibliothèques JS dans mes mappages de subgraphs ? +## En rapport avec l'indexation & les requêtes -Pas pour le moment, car les mappages sont écrits en AssemblyScript. Une autre solution possible consiste à stocker les données brutes dans des entités et à exécuter une logique qui nécessite des bibliothèques JS du client. +### 19. Est-il possible de spécifier à partir de quel bloc commencer l'indexation? -## 17. Est-il possible de spécifier sur quel bloc démarrer l'indexation ? +Oui. `dataSources.source.startBlock` dans le fichier `subgraph.yaml` spécifie le numéro du bloc à partir duquel la Source de donnée commence l'indexation. Dans la plupart des cas, nous suggérons d'utiliser le bloc où le contrat a été créé : [Blocs de départ](/developing/creating-a-subgraph#start-blocks) -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +### 20. Quels sont quelques conseils pour augmenter les performances d'indexation? Mon subgraph prend beaucoup de temps à se synchroniser -## 18. Existe-t-il des astuces pour améliorer les performances de l'indexation ? 
La synchronisation de mon subgraph prend beaucoup de temps +Oui, vous devriez consulter la fonctionnalité optionnelle de bloc de départ pour commencer l'indexation à partir du bloc où le contrat a été déployé : [Blocs de départ](/developing/creating-a-subgraph#start-blocks) -Oui, vous devriez jeter un coup d'œil à la fonctionnalité optionnelle de bloc de départ pour commencer l'indexation à partir du bloc où le contrat a été déployé : [Blocs de départ](/developing/creating-a-subgraph#start-blocks) - -## 19. Existe-t-il un moyen d'interroger directement le subgraph pour déterminer le dernier numéro de bloc qu'il a indexé ? +### 21. Existe-t-il un moyen d'interroger directement le subgraph pour déterminer le dernier numéro de bloc qu'il a indexé? Oui ! Essayez la commande suivante, en remplaçant "organization/subgraphName" par l'organisation sous laquelle elle est publiée et le nom de votre subgraphe : @@ -102,44 +121,27 @@ Oui ! Essayez la commande suivante, en remplaçant "organization/subgraphName" curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/ index-node/graphql ``` -## 20. Quels réseaux sont pris en charge par The Graph ? - -Vous pouvez trouver la liste des réseaux supportés [ici](/developing/supported-networks). - -## 21. Est-il possible de dupliquer un subgraph sur un autre compte ou point de terminaison sans redéployer ? - -Vous devez redéployer le subgraph, mais si l'ID de subgraph (hachage IPFS) ne change pas, il n'aura pas à se synchroniser depuis le début. - -## 22. Est-il possible d'utiliser Apollo Federation au-dessus du graph-node ? - -La fédération n'est pas encore supportée, bien que nous souhaitions la prendre en charge à l'avenir. Pour le moment, vous pouvez utiliser l'assemblage de schémas, soit sur le client, soit via un service proxy. - -## 23. Y a-t-il une limite au nombre d'objets que The Graph peut renvoyer par requête ? +### 22. Existe-t-il une limite au nombre d'objets que The Graph peut retourner par requête? -Par défaut, les réponses aux requêtes sont limitées à 100 éléments par collection. Si vous souhaitez en recevoir plus, vous pouvez aller jusqu'à 1000 articles par collection et au-delà, vous pouvez paginer avec : +Par défaut, les réponses aux requêtes sont limitées à 100 éléments par collection. Si vous voulez en recevoir plus, vous pouvez aller jusqu'à 1000 éléments par collection et au-delà, vous pouvez paginer avec : ```graphql quelquesCollection(first: 1000, skip: ) { ... } ``` -## 24. Si mon interface dapp utilise The Graph pour les requêtes, dois-je écrire ma clé de requête directement dans l'interface ? Et si nous payons des frais de requête pour les utilisateurs : les utilisateurs malveillants rendront-ils nos frais de requête très élevés ? - -Actuellement, l'approche recommandée pour une dapp consiste à ajouter la clé à l'interface et à l'exposer aux utilisateurs finaux. Cela dit, vous pouvez limiter cette clé à un nom d'hôte, comme _yourdapp.io_ et subgraph. La passerelle est actuellement gérée par Edge & Node. Une partie de la responsabilité d'une passerelle est de surveiller les comportements abusifs et de bloquer le trafic des clients malveillants. - -## 25. Where do I go to find my current subgraph on the hosted service? - -Rendez-vous sur le service hébergé afin de trouver les subgraphs que vous ou d'autres personnes avez déployés sur le service hébergé. 
Vous pouvez le trouver [ici](https://thegraph.com/hosted-service). +### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -## 26. Will the hosted service start charging query fees? +Actuellement, l'approche recommandée pour un dapp est d'ajouter la clé au frontend et de l'exposer aux utilisateurs finaux. Cela dit, vous pouvez limiter cette clé à un nom d'hôte, comme _yourdapp.io_ et subgraph. La passerelle est actuellement gérée par Edge & Node. Une partie de la responsabilité d'une passerelle est de surveiller les comportements abusifs et de bloquer le trafic des clients malveillants. -The Graph ne facturera jamais le service hébergé. The Graph est un protocole décentralisé, et faire payer un service centralisé n'est pas conforme aux valeurs du Graphe. Le service hébergé a toujours été une étape temporaire pour aider à passer au réseau décentralisé. Les développeurs disposeront d'un délai suffisant pour passer au réseau décentralisé lorsqu'ils le souhaiteront. +## Divers -## 27. How do I update a subgraph on mainnet? +### 24. Est-il possible d'utiliser Apollo Federation sur graph-node? -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +La fédération n'est pas encore supportée. Pour le moment, vous pouvez utiliser la fusion de schémas, soit sur le client, soit via un service proxy. -## 28. Dans quel ordre les gestionnaires d'événements, de blocages et d'appels sont-ils déclenchés pour une source de données ? +### 25. Je veux contribuer ou ajouter un problème GitHub. Où puis-je trouver les dépôts open source? -Les gestionnaires d'événements et d'appels sont d'abord classés par index de transaction à l'intérieur du bloc. Les gestionnaires d'événements et d'appels au sein d'une même transaction sont ordonnés selon une convention : d'abord les gestionnaires d'événements, puis les gestionnaires d'appels, chaque type respectant l'ordre défini dans le manifeste. Les gestionnaires de blocs sont exécutés après les gestionnaires d'événements et d'appels, dans l'ordre où ils sont définis dans le manifeste. Ces règles d'ordre sont également susceptibles d'être modifiées. - -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. 
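À titre d'illustration de la question 14 ci-dessus, voici une esquisse minimale (noms d'entité, de contrat et chemins `generated/` hypothétiques ; le schéma est supposé déclarer `id: Bytes!`) qui construit un Id à partir du hash de transaction et de l'index de journal :

```typescript
import { Transfer } from '../generated/schema'
import { Transfer as TransferEvent } from '../generated/Token/Token'

export function handleTransfer(event: TransferEvent): void {
  // Hash de transaction + index de journal : unique tant qu'un seul type
  // d'entité est créé par événement (hypothèse de cette esquisse)
  let id = event.transaction.hash.concatI32(event.logIndex.toI32())
  let transfer = new Transfer(id)
  transfer.save()
}
```

Comme indiqué dans la réponse, passer ensuite cet Id à `crypto.keccak256` le masque mais ne le rend pas plus unique.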
+- [graph-node](https://github.com/graphprotocol/graph-node) +- [l'outil de graph](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/fr/developing/graph-ts/api.mdx b/website/pages/fr/developing/graph-ts/api.mdx index 842054226e4d..3587758f3f64 100644 --- a/website/pages/fr/developing/graph-ts/api.mdx +++ b/website/pages/fr/developing/graph-ts/api.mdx @@ -2,47 +2,49 @@ title: API AssemblyScript --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Remarque : Si vous avez créé un subgraph avant la version `graph-cli`/`graph-ts` `0.22.0`, alors vous utilisez une ancienne version d'AssemblyScript. Il est recommandé de consulter le [`Guide de Migration`](/release-notes/assemblyscript-migration-guide). -Cette page documente les API intégrées qui peuvent être utilisées lors de l'écriture de mappages de subgraphs. Deux types d'API sont disponibles prêtes à l'emploi : +Découvrez quelles APIs intégrées peuvent être utilisées lors de l'écriture des mappages de subgraph. Il existe deux types d'APIs disponibles par défaut : -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- La [Bibliothèque TypeScript de The Graph](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code généré à partir des fichiers du subgraph par `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +Vous pouvez également ajouter d'autres bibliothèques comme dépendances, à condition qu'elles soient compatibles avec [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Étant donné que les mappages sont écrits en AssemblyScript, il est utile de consulter les fonctionnalités du langage et de la bibliothèque standard dans le [wiki AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki). ## Référence API -The `@graphprotocol/graph-ts` library provides the following APIs: +La bibliothèque `@graphprotocol/graph-ts` fournit les API suivantes : -- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. -- A `store` API to load and save entities from and to the Graph Node store. -- A `log` API to log messages to the Graph Node output and Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. +- Une API `ethereum` pour travailler avec les contrats intelligents Ethereum, les événements, les blocs, les transactions et les valeurs Ethereum. +- Une API `store` pour charger et enregistrer des entités depuis et vers le magasin Graph Node. +- Une API `log` pour enregistrer des messages dans la sortie Graph Node et Graph Explorer. +- Une API `ipfs` pour charger des fichiers depuis IPFS. +- Une API `json` pour analyser les données JSON. 
+- Une API `crypto` pour utiliser des fonctions cryptographiques. - Primitives de bas niveau pour traduire entre différents systèmes de types tels que Ethereum, JSON, GraphQL et AssemblyScript. ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +La `apiVersion` dans le manifeste du subgraph spécifie la version de l'API de mappage exécutée par Graph Node pour un subgraph donné. | Version | Notes de version | | :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| 0.0.9 | Ajout de nouvelles fonctions hôtes [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Ajout de la validation pour l'existence des champs dans le schéma lors de l'enregistrement d'une entité. | +| 0.0.7 | Ajout des classes `TransactionReceipt` et `Log`aux types Ethereum
Ajout du champ `receipt` à l'objet Ethereum Event | +| 0.0.6 | Ajout du champ `nonce` à l'objet Ethereum Transaction
Ajout de `baseFeePerGas` à l'objet Ethereum Block | +| 0.0.5 | AssemblyScript a été mis à niveau vers la version 0.19.10 (ceci inclut des changements importants, veuillez consulter le [`Guide de Migration`](/release-notes/assemblyscript-migration-guide))
. `ethereum.transaction.gasUsed` est renommé en `ethereum.transaction.gasLimit` | +| 0.0.4 | Ajout du champ `functionSignature` à l'objet Ethereum SmartContractCall | +| 0.0.3 | Ajout du champ `from` à l'objet Ethereum Call
`etherem.call.address` est renommé en `ethereum.call.to` | +| 0.0.2 | Ajout du champ `input` à l'objet Ethereum Transaction | ### Types intégrés -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://www.assemblyscript.org/types.html). +La documentation sur les types de base intégrés dans AssemblyScript se trouve dans le [wiki AssemblyScript](https://www.assemblyscript.org/types.html). -The following additional types are provided by `@graphprotocol/graph-ts`. +Les types additionnels suivants sont fournis par `@graphprotocol/graph-ts`. #### ByteArray @@ -50,26 +52,26 @@ The following additional types are provided by `@graphprotocol/graph-ts`. import { ByteArray } from '@graphprotocol/graph-ts' ``` -`ByteArray` represents an array of `u8`. +`ByteArray` représente un tableau de `u8`. _Construction_ -- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. +- `fromI32(x: i32): ByteArray` - Décompose `x` en octets. +- `fromHexString(hex: string): ByteArray` - La longueur de la saisie doit être paire. Le préfixe `0x` est optionnel. -_Type conversions_ +_Conversions de type_ -- `toHexString(): string` - Converts to a hex string prefixed with `0x`. -- `toString(): string` - Interprets the bytes as a UTF-8 string. -- `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. -- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. +- `toHexString(): string` - Convertit en une chaîne de caractères hexadécimale ayant comme préfixe `0x`. +- `toString(): string` - Interprète les octets comme une chaîne UTF-8. +- toBase58(): string\` - Encode les octets en une chaîne de caractères de type base58. +- `toU32(): u32` - Interprète les octets comme un `u32` en little-endian. Envoie une exception en cas de dépassement. +- `toI32(): i32` - Interprète le tableau d'octets comme un `i32` en little-endian. Envoie une exception en cas de dépassement. -_Operators_ +_Operateurs_ -- `equals(y: ByteArray): bool` – can be written as `x == y`. -- `concat(other: ByteArray) : ByteArray` - return a new `ByteArray` consisting of `this` directly followed by `other` -- `concatI32(other: i32) : ByteArray` - return a new `ByteArray` consisting of `this` directly followed by the byte representation of `other` +- `equals(y: ByteArray): bool` – peut être écrit comme `x == y`. +- `concat(other: ByteArray) : ByteArray` - renvoie un nouveau `ByteArray` constitué de `this` directement suivi par `other` +- `concatI32(other: i32) : ByteArray` - retourne un nouveau `ByteArray` constitué de `this` directement suivi par la représentation en octets de `other` #### BigDecimal @@ -77,32 +79,32 @@ _Operators_ import { BigDecimal } from '@graphprotocol/graph-ts' ``` -`BigDecimal` is used to represent arbitrary precision decimals. +`BigDecimal` est utilisé pour représenter des décimales à précision arbitraire. -> Note: [Internally](https://github.com/graphprotocol/graph-node/blob/master/graph/src/data/store/scalar/bigdecimal.rs) `BigDecimal` is stored in [IEEE-754 decimal128 floating-point format](https://en.wikipedia.org/wiki/Decimal128_floating-point_format), which supports 34 decimal digits of significand. 
This makes `BigDecimal` unsuitable for representing fixed-point types that can span wider than 34 digits, such as a Solidity [`ufixed256x18`](https://docs.soliditylang.org/en/latest/types.html#fixed-point-numbers) or equivalent. +> Remarque: [En interne](https://github.com/graphprotocol/graph-node/blob/master/graph/src/data/store/scalar/bigdecimal.rs) `BigDecimal` est stocké au format [IEEE-754 décimal128 à virgule flottante](https://en.wikipedia.org/wiki/Decimal128_floating-point_format), qui supporte 34 chiffres significatifs. Cela rend `BigDecimal` inapproprié pour représenter des types à virgule fixe pouvant dépasser 34 chiffres, comme un Solidity [`ufixed256x18`](https://docs.soliditylang.org/en/latest/types.html#fixed-point-numbers) ou équivalent. _Construction_ -- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. -- `static fromString(s: string): BigDecimal` – parses from a decimal string. +- `constructor(bigInt: BigInt)` – crée un `BigDecimal` à partir d'un `BigInt`. +- `static fromString(s: string): BigDecimal` – analyse à partir d'une chaîne de caractères décimaux. -_Type conversions_ +_Conversions de type_ -- `toString(): string` – prints to a decimal string. +- `toString(): string` – affiche en une chaîne de caractères décimaux. _Math_ -- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. -- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. -- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. -- `equals(y: BigDecimal): bool` – can be written as `x == y`. -- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. -- `lt(y: BigDecimal): bool` – can be written as `x < y`. -- `le(y: BigDecimal): bool` – can be written as `x <= y`. -- `gt(y: BigDecimal): bool` – can be written as `x > y`. -- `ge(y: BigDecimal): bool` – can be written as `x >= y`. -- `neg(): BigDecimal` - can be written as `-x`. +- `plus(y: BigDecimal): BigDecimal` – peut être écrit comme `x + y`. +- `minus(y: BigDecimal): BigDecimal` – peut être écrit comme `x - y`. +- `times(y: BigDecimal): BigDecimal` – peut être écrit comme `x * y`. +- `div(y: BigDecimal): BigDecimal` – peut être écrit comme `x / y`. +- `equals(y: BigDecimal): bool` – peut être écrit comme `x == y`. +- `notEqual(y: BigDecimal): bool` – peut être écrit comme `x != y`. +- `lt(y: BigDecimal): bool` – peut être écrit comme `x < y`. +- `le(y: BigDecimal): bool` – peut être écrit comme `x <= y`. +- `gt(y: BigDecimal): bool` – peut être écrit comme `x > y`. +- `ge(y: BigDecimal): bool` – peut être écrit comme `x >= y`. +- `neg(): BigDecimal` - peut être écrit comme `-x`. #### BigInt @@ -110,53 +112,53 @@ _Math_ importer { BigInt } depuis '@graphprotocol/graph-ts' ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +`BigInt` est utilisé pour représenter de grands entiers. Cela inclut les valeurs Ethereum de type `uint32` à `uint256` et `int64` à`int256`. Tout ce qui est en dessous de `uint32`, tel que `int32`, `uint24` ou `int8` est représenté sous forme de `i32`. -The `BigInt` class has the following API: +La classe `BigInt` possède l'API suivante : _Construction_ -- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. +- `BigInt.fromI32(x: i32): BigInt` – crée un `BigInt` à partir d'un `i32`. 
-- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. +- `BigInt.fromString(s: string): BigInt`– Analyse un `BigInt` à partir d'une chaîne de caractères. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprète `bytes` comme un entier non signé en little-endian. Si votre saisie est en big-endian, appelez d'abord `.reverse()`. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprète `bytes` comme un entier signé en little-endian. Si votre saisie est en big-endian, appelez d'abord `.reverse()`. - _Type conversions_ + _Conversions de type_ -- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. +- `x.toHex(): string` – transforme `BigInt` en une chaîne de caractères hexadécimaux. -- `x.toString(): string` – turns `BigInt` into a decimal number string. +- `x.toString(): string` – transforme`BigInt` en une chaîne de caractères de nombres décimaux. -- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. +- `x.toI32(): i32` – renvoie le `BigInt` comme un `i32`; échoue si la valeur ne rentre pas dans un `i32`. Il est conseillé de vérifier d'abord `x.isI32()`. -- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. +- `x.toBigDecimal(): BigDecimal` - convertit en un nombre décimal sans virgule. _Math_ -- `x.plus(y: BigInt): BigInt` – can be written as `x + y`. -- `x.minus(y: BigInt): BigInt` – can be written as `x - y`. -- `x.times(y: BigInt): BigInt` – can be written as `x * y`. -- `x.div(y: BigInt): BigInt` – can be written as `x / y`. -- `x.mod(y: BigInt): BigInt` – can be written as `x % y`. -- `x.equals(y: BigInt): bool` – can be written as `x == y`. -- `x.notEqual(y: BigInt): bool` – can be written as `x != y`. -- `x.lt(y: BigInt): bool` – can be written as `x < y`. -- `x.le(y: BigInt): bool` – can be written as `x <= y`. -- `x.gt(y: BigInt): bool` – can be written as `x > y`. -- `x.ge(y: BigInt): bool` – can be written as `x >= y`. -- `x.neg(): BigInt` – can be written as `-x`. -- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result. -- `x.isZero(): bool` – Convenience for checking if the number is zero. -- `x.isI32(): bool` – Check if the number fits in an `i32`. -- `x.abs(): BigInt` – Absolute value. +- `x.plus(y: BigInt): BigInt` – peut être écrit comme `x + y`. +- `x.minus(y: BigInt): BigInt` – peut être écrit comme `x - y`. +- `x.times(y: BigInt): BigInt` – peut être écrit comme `x * y`. +- `x.div(y: BigInt): BigInt` – peut être écrit comme `x / y`. +- `x.mod(y: BigInt): BigInt` – peut être écrit comme `x % y`. +- `x.equals(y: BigInt): bool` – peut être écrit comme `x == y`. +- `x.notEqual(y: BigInt): bool` – peut être écrit comme `x != y`. +- `x.lt(y: BigInt): bool` – peut être écrit comme `x < y`. +- `x.le(y: BigInt): bool` – peut être écrit comme `x <= y`. +- `x.gt(y: BigInt): bool` – peut être écrit comme `x > y`. +- `x.ge(y: BigInt): bool` – peut être écrit comme `x >= y`. +- `x.neg(): BigInt` – peut être écrit comme `-x`. +- `x.divDecimal(y: BigDecimal): BigDecimal` – divise par un nombre décimal, donnant un résultat décimal. 
+- `x.isZero(): bool` – Est pratique pour vérifier si le nombre est zéro. +- `x.isI32(): bool` – Vérifie si le nombre rentre dans un `i32`. +- `x.abs(): BigInt` – Valeur absolue. - `x.pow(exp: u8): BigInt` – Exponentiation. -- `bitOr(x: BigInt, y: BigInt): BigInt` – can be written as `x | y`. -- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. -- `leftShift(x: BigInt, bits: u8): BigInt` – can be written as `x << y`. -- `rightShift(x: BigInt, bits: u8): BigInt` – can be written as `x >> y`. +- `bitOr(x: BigInt, y: BigInt): BigInt` – peut être écrit comme `x | y`. +- `bitAnd(x: BigInt, y: BigInt): BigInt` – peut être écrit comme `x & y`. +- `leftShift(x: BigInt, bits: u8): BigInt` – peut être écrit comme `x << y`. +- `rightShift(x: BigInt, bits: u8): BigInt` – peut être écrit comme `x >> y`. #### TypedMap @@ -164,15 +166,15 @@ _Math_ import { TypedMap } from '@graphprotocol/graph-ts' ``` -`TypedMap` can be used to store key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +`TypedMap` peut être utilisé pour stocker des paires clé-valeur. Consultez [cet exemple](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). -The `TypedMap` class has the following API: +La classe `TypedMap` possède l'API suivante : -- `new TypedMap()` – creates an empty map with keys of type `K` and values of type `V` -- `map.set(key: K, value: V): void` – sets the value of `key` to `value` -- `map.getEntry(key: K): TypedMapEntry | null` – returns the key-value pair for a `key` or `null` if the `key` does not exist in the map -- `map.get(key: K): V | null` – returns the value for a `key` or `null` if the `key` does not exist in the map -- `map.isSet(key: K): bool` – returns `true` if the `key` exists in the map and `false` if it does not +- `new TypedMap()` – crée une carte vide avec des clés de type `K` et des valeurs de type `V` +- `map.set(key: K, value: V): void` – définit la valeur de `key` à `value` +- `map.getEntry(key: K): TypedMapEntry | null` – renvoie la paire clé-valeur pour une `key` ou `null` si la `key` n'existe pas dans la carte +- `map.get(key: K): V | null` – renvoie la valeur pour une `key` ou `null` si la `key` n'existe pas dans la carte +- `map.isSet(key: K): bool` – renvoie `true` si la `key` existe dans la carte et `false` si ce n'est pas le cas #### Octets @@ -180,25 +182,25 @@ The `TypedMap` class has the following API: import { Bytes } from '@graphprotocol/graph-ts' ``` -`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32`, etc. +`Bytes` est utilisé pour représenter des tableaux d'octets de longueur arbitraire. Ceci inclut les valeurs Ethereum de type `bytes`, `bytes32`, etc. 
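À titre d'illustration, voici une esquisse non normative (montant brut supposé à 18 décimales) combinant quelques-unes des opérations `BigInt`, `BigDecimal` et `Bytes` décrites ci-dessus :

```typescript
import { BigDecimal, BigInt, Bytes } from '@graphprotocol/graph-ts'

// Convertit un montant brut (ex. wei, 18 décimales supposées) en BigDecimal lisible
export function toDecimal(raw: BigInt): BigDecimal {
  let scale = BigInt.fromI32(10).pow(18).toBigDecimal() // 10^18
  return raw.toBigDecimal().div(scale)
}

// Quelques conversions Bytes courantes
export function describeBytes(b: Bytes): string {
  let asHex = b.toHexString() // chaîne hexadécimale préfixée par 0x
  let asBase58 = b.toBase58() // encodage utilisé pour les hash IPFS
  return asHex + ' / ' + asBase58
}
```

Par exemple, `toDecimal(BigInt.fromString('1500000000000000000'))` donnerait `1.5`.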
-The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: +La classe `Bytes` hérite de [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) d'AssemblyScript et prend en charge toutes les fonctionnalités de `Uint8Array` ainsi que les nouvelles méthodes suivantes : _Construction_ -- `fromHexString(hex: string) : Bytes` - Convert the string `hex` which must consist of an even number of hexadecimal digits to a `ByteArray`. The string `hex` can optionally start with `0x` -- `fromI32(i: i32) : Bytes` - Convert `i` to an array of bytes +- `fromHexString(hex: string) : Bytes` - Convertit la chaîne de caractères `hex` qui doit comporter un nombre pair de chiffres hexadécimaux en un `ByteArray`. La chaîne de caractères `hex` peut de façon optionnelle commencer par `0x` +- `fromI32(i: i32) : Bytes` - Convertit `i` en un tableau de d'octets -_Type conversions_ +_Conversions de type_ -- `b.toHex()` – returns a hexadecimal string representing the bytes in the array -- `b.toString()` – converts the bytes in the array to a string of unicode characters -- `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) +- `b.toHex()` – renvoie une chaîne de caractères hexadécimale représentant les octets dans le tableau +- `b.toString()` – convertit les octets dans le tableau en une chaîne de caractères unicode +- `b.toBase58()` – convertit une valeur Ethereum Bytes en codage de type base58 (utilisé pour les hachages IPFS) -_Operators_ +_Operateurs_ -- `b.concat(other: Bytes) : Bytes` - - return new `Bytes` consisting of `this` directly followed by `other` -- `b.concatI32(other: i32) : ByteArray` - return new `Bytes` consisting of `this` directly follow by the byte representation of `other` +- `b.concat(other: Bytes) : Bytes` - - renvoie un nouveau `Bytes` constitué de `this` suivi directement de `other` +- `b.concatI32(other: i32) : ByteArray` - renvoie un nouveau `Bytes` constitué de `this` suivi directement de la représentation en octets de `other` #### Addresse @@ -206,12 +208,12 @@ _Operators_ import { Address } du '@graphprotocol/graph-ts' ``` -`Address` extends `Bytes` to represent Ethereum `address` values. +`Address` hérite de `Bytes` pour représenter les valeurs `address` d'Ethereum. -It adds the following method on top of the `Bytes` API: +Cela ajoute la méthode suivante en plus de l'API `Bytes` : -- `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string -- `Address.fromBytes(b: Bytes): Address` – create an `Address` from `b` which must be exactly 20 bytes long. Passing in a value with fewer or more bytes will result in an error +- `Address.fromString(s: string): Address` – crée une `Address` à partir d'une chaîne de caractères hexadécimale +- `Address.fromBytes(b: Bytes): Address` – crée une `Address` à partir de `b` qui doit avoir exactement 20 octets de long. Passer une valeur avec moins ou plus d'octets entraînera une erreur ### Store API @@ -219,9 +221,9 @@ It adds the following method on top of the `Bytes` API: import { store } from '@graphprotocol/graph-ts' ``` -The `store` API allows to load, save and remove entities from and to the Graph Node store. 
+L'API `store` permet de charger, sauvegarder et supprimer des entités dans et depuis le magasin Graph Node. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Les entités écrites dans le magasin correspondent directement aux types `@entity` définis dans le schéma GraphQL du subgraph. Pour faciliter le travail avec ces entités, la commande `graph codegen` fournie par [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) génère des classes d'entités, qui sont des sous-classes du type `Entity` intégré, avec des accesseurs et des mutateurs pour les champs du schéma ainsi que des méthodes pour charger et sauvegarder ces entités. #### Création d'entités @@ -250,9 +252,11 @@ export function handleTransfer(event: TransferEvent): void { } ``` -When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. +Lorsqu'un événement `Transfer` est rencontré lors du traitement de la blockchain, il est transmis au gestionnaire d'événements `handleTransfer` en utilisant le type `Transfer` généré (ayant pour pseudonyme `TransferEvent` ici pour éviter un conflit de nom avec le type d'entité). Ce type permet d'accéder à des données telles que la transaction parente de l'événement et ses paramètres. + +Chaque entité doit avoir un ID unique pour éviter les collisions avec d'autres entités. Il est assez courant que les paramètres des événements incluent un identifiant unique pouvant être utilisé. -Chaque entité doit avoir un identifiant unique pour éviter les collisions avec d'autres entités. Il est assez courant que les paramètres d'événement incluent un identifiant unique pouvant être utilisé. Remarque : L'utilisation du hachage de transaction comme ID suppose qu'aucun autre événement dans la même transaction ne crée d'entités avec ce hachage comme ID. +> Remarque : utiliser le hash de la transaction comme ID suppose qu'aucun autre événement dans la même transaction ne crée d'entités avec ce hash comme ID. #### Chargement d'entités depuis le magasin @@ -268,15 +272,18 @@ if (transfer == null) { // Utiliser l'entité Transfer comme précédemment ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +Comme l'entité peut ne pas encore exister dans le magasin, la méthode `load` renvoie une valeur de type `Transfer | null`. Il peut être nécessaire de vérifier le cas `null` avant d'utiliser la valeur. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Remarque : Le chargement des entités n'est nécessaire que si les modifications apportées dans le mappage dépendent des données précédentes d'une entité. 
Consultez la section suivante pour savoir les deux façons de mettre à jour les entités existantes. #### Recherche d'entités créées dans un bloc -As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. +Depuis `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 et `@graphprotocol/graph-cli` v0.49.0 la méthode `loadInBlock` est disponible pour tous les types d'entités. -L'API du magasin facilite la récupération des entités créées ou mises à jour dans le bloc actuel. Une situation typique est qu'un gestionnaire crée une transaction à partir d'un événement en chaîne et qu'un gestionnaire ultérieur souhaite accéder à cette transaction si elle existe. Dans le cas où la transaction n'existe pas, le ubgraph devra se rendre dans la base de données juste pour découvrir que l'entité n'existe pas ; si l'auteur du subgraph sait déjà que l'entité doit avoir été créée dans le même bloc, l'utilisation de loadInBlock évite cet aller-retour dans la base de données. Pour certains subgraphs, ces recherches manquées peuvent contribuer de manière significative au temps d'indexation. +L'API store facilite la récupération des entités qui ont été créées ou mises à jour dans le bloc actuel. Une situation typique est qu'un gestionnaire crée une transaction à partir d'un événement on-chain, et qu'un gestionnaire ultérieur souhaite accéder à cette transaction si elle existe. + +- Dans le cas où la transaction n'existe pas, le subgraph devra interroger la base de données pour découvrir que l'entité n'existe pas. Si l'auteur du subgraph sait déjà que l'entité doit avoir été créée dans le même bloc, utiliser `loadInBlock` évite ce détour par la base de données. +- Pour certains subgraphs, ces recherches infructueuses peuvent contribuer de manière significative au temps d'indexation. ```typescript let id = event.transaction.hash // ou de toute autre manière dont l'ID est construit @@ -288,11 +295,11 @@ if (transfer == null) { // Utiliser l'entité Transfer comme auparavant ``` -> Note: If there is no entity created in the given block, `loadInBlock` will return `null` even if there is an entity with the given ID in the store. +> Remarque : S'il n'y a pas d'entité créée dans le bloc donné, `loadInBlock` renverra `null` même s'il y a une entité avec l'ID donné dans le magasin. #### Recherche d'entités dérivées -As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.31.0 and `@graphprotocol/graph-cli` v0.51.0 the `loadRelated` method is available. +Depuis `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.31.0 et`@graphprotocol/graph-cli` v0.51.0 la méthode `loadRelated` est disponible. Cela permet de charger des champs d'entités dérivés à partir d'un gestionnaire d'événements. Par exemple, étant donné le schéma suivant : @@ -309,7 +316,7 @@ type Holder @entity { } ``` -The following code will load the `Token` entity that the `Holder` entity was derived from: +Le code suivant chargera l'entité `Token` dont l'entité `Holder`est dérivée : ```typescript let holder = Holder.load('test-id') @@ -321,8 +328,8 @@ let tokens = holder.tokens.load() Il existe deux manières de mettre à jour une entité existante : -1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. -2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. +1. 
Chargez l'entité avec, par exemple, `Transfer.load(id)`, définissez des propriétés sur l'entité, puis `.save()` pour la sauvegarder dans le magasin. +2. Créez simplement l'entité avec, par exemple, `new Transfer(id)`, séfinissez des propriétés sur l'entité, puis `.save()` pour la sauvegarder dans le magasin. Si l'entité existe déjà, les modifications y sont fusionnées. La modification des propriétés est simple dans la plupart des cas, grâce aux paramètres de propriétés générés : @@ -340,9 +347,9 @@ transfer.from.unset() transfer.from = null ``` -This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. +Ceci ne fonctionne qu'avec des propriétés optionnelles, c'est-à-dire des propriétés déclarées sans `!` dans GraphQL. Deux exemples seraient `owner: Bytes` ou `amount: BigInt`. -Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. +La mise à jour des propriétés de tableau est un peu plus complexe, car obtenir un tableau à partir d'une entité crée une copie de ce tableau. Cela signifie que les propriétés de tableau doivent être définies à nouveau explicitement après la modification du tableau. Ce qui suit suppose que `entity` a un champ `numbers: [BigInt!]!` . ```typescript // Cela ne fonctionnera pas @@ -358,7 +365,7 @@ entity.save() #### Supprimer des entités du magasin -There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: +Il n'y a actuellement aucun moyen de supprimer une entité via les types générés. Au lieu de cela, la suppression d'une entité nécessite de passer le nom du type d'entité et l'ID de l'entité à `store.remove`: ```typescript import { store } from '@graphprotocol/graph-ts' @@ -373,9 +380,9 @@ L'API Ethereum donne accès aux contrats intelligents, aux variables d'état pub #### Prise en charge des types Ethereum -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +Comme pour les entités, `graph codegen` génère des classes pour tous les contrats intelligents et événements utilisés dans un subgraph. Pour cela, les ABIs des contrats doivent faire partie de la source de données dans le manifeste du subgraph. En général, les fichiers ABI sont stockés dans un dossier `abis/` . -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +Avec les classes générées, les conversions entre les types Ethereum et [les types intégrés](#built-in-types) se font en arrière-plan afin que les auteurs de subgraph n'aient pas à s'en soucier. L’exemple suivant illustre cela. 
Étant donné un schéma de subgraph comme @@ -388,7 +395,7 @@ type Transfer @entity { } ``` -and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: +et une signature d'événement `Transfer(address,address,uint256)` sur Ethereum, les valeurs `from`, `to` et`amount` de type `address`, `address` et `uint256` sont enverties en `Address` et `BigInt`, leur permettant d'être passées aux propriétés `Bytes!` et `BigInt!` de l'entité `Transfer` : ```typescript let id = event.transaction.hash @@ -401,7 +408,7 @@ transfer.save() #### Événements et données de bloc/transaction -Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): +Les événements Ethereum passés aux gestionnaires d'événements, comme l'événement `Transfer` dans les exemples précédents, fournissent non seulement l'accès aux paramètres de l'événement, mais également à leur transaction parente et au bloc auquel ils appartiennent. Les données suivantes peuvent être obtenues à partir des instances d' `event` (ces classes font partie du module `ethereum` dans `graph-ts`): ```typescript class Event { @@ -476,7 +483,7 @@ class Log { #### Accès à l'état du contrat intelligent -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +Le code généré par `graph codegen` inclut également des classes pour les contrats intelligents utilisés dans le subgraph. Celles-ci peuvent être utilisées pour accéder aux variables d'état publiques et appeler des fonctions du contrat au bloc actuel. Un modèle courant consiste à accéder au contrat dont provient un événement. Ceci est réalisé avec le code suivant : @@ -495,15 +502,17 @@ export function handleTransfer(event: TransferEvent) { } ``` -`Transfer` is aliased to `TransferEvent` here to avoid a naming conflict with the entity type +`Transfer` est remplacé par `TransferEvent` ici pour éviter un conflit de nommage avec le type d'entité -As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. +Tant que le `ERC20Contract` sur Ethereum a une fonction publique en lecture seule appelée `symbol`, elle peut être appelée avec `.symbol()`. Pour les variables d'état publiques, une méthode du même nom est créée automatiquement. Tout autre contrat faisant partie du subgraph peut être importé à partir du code généré et peut être lié à une adresse valide. #### Gestion des appels retournés -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +Si les méthodes en lecture seule de votre contrat peuvent échouer, vous devez gérer cela en appelant la méthode de contrat générée préfixée par `try_`. 
+ +- Par exemple, le contrat Gravity expose la méthode `gravatarToOwner` . Ce code serait capable de gérer une erreur dans cette méthode : ```typescript let gravity = Gravity.bind(event.address) @@ -515,11 +524,11 @@ if (callResult.reverted) { } ``` -Notez qu'un nœud Graph connecté à un client Geth ou Infura peut ne pas détecter tous les retours, si vous comptez sur cela, nous vous recommandons d'utiliser un nœud Graph connecté à un client Parity. +> Remarque : Un Graph Node connecté à un client Geth ou Infura peut ne pas détecter toutes les réversions (reverts). Si vous en dépendez, nous recommandons d'utiliser un Graph Node connecté à un client Parity. #### Encodage/décodage ABI -Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. +Les données peuvent être encodées et décodées selon le format de codage ABI d'Ethereum en utilisant les fonctions `encode` et `decode` du module `ethereum`. ```typescript import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' @@ -538,33 +547,33 @@ let decoded = ethereum.decode('(address,uint256)', encoded) Pour plus d'informations: -- [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) -- Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) -- More [complex example](https://github.com/graphprotocol/graph-node/blob/08da7cb46ddc8c09f448c5ea4b210c9021ea05ad/tests/integration-tests/host-exports/src/mapping.ts#L86). +- [Spécifications ABI](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) +- Encodage/décodage [bibliothèque Rust/CLI] (https://github.com/rust-ethereum/ethabi) +- Exemple [plus complexe](https://github.com/graphprotocol/graph-node/blob/08da7cb46ddc8c09f448c5ea4b210c9021ea05ad/tests/integration-tests/host-exports/src/mapping.ts#L86). -#### Balance of an Address +#### Solde d'une adresse -The native token balance of an address can be retrieved using the `ethereum` module. This feature is available from `apiVersion: 0.0.9` which is defined `subgraph.yaml`. The `getBalance()` retrieves the balance of the specified address as of the end of the block in which the event is triggered. +Le solde de jetons natifs d'une adresse peut être récupéré en utilisant le module `ethereum`. Cette fonctionnalité est disponible à partir de `apiVersion: 0.0.9` définie dans `subgraph.yaml`. La fonction `getBalance()` récupère le solde de l'adresse spécifiée à la fin du bloc où l'événement est déclenché. ```typescript import { ethereum } from '@graphprotocol/graph-ts' let address = Address.fromString('0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045') -let balance = ethereum.getBalance(address) // returns balance in BigInt +let balance = ethereum.getBalance(address) // renvoie le solde en BigInt ``` -#### Check if an Address is a Contract or EOA +#### Vérifier si une adresse est une adresse de contrat intelligent ou une adresse détenue par des personnes (EOA) -To check whether an address is a smart contract address or an externally owned address (EOA), use the `hasCode()` function from the `ethereum` module which will return `boolean`. This feature is available from `apiVersion: 0.0.9` which is defined `subgraph.yaml`. +Pour vérifier si une adresse est une adresse de contrat intelligent ou une adresse détenue extérieurement (EOA), utilisez la fonction `hasCode()` du module `ethereum` qui retournera un `boolean`. Cette fonctionnalité est disponible à partir de `apiVersion: 0.0.9` qui est définie dans `subgraph.yaml`. 
```typescript import { ethereum } from '@graphprotocol/graph-ts' let contractAddr = Address.fromString('0x2E645469f354BB4F5c8a05B3b30A929361cf77eC') -let isContract = ethereum.hasCode(contractAddr).inner // returns true +let isContract = ethereum.hasCode(contractAddr).inner // renvoie true let eoa = Address.fromString('0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045') -let isContract = ethereum.hasCode(eoa).inner // returns false +let isContract = ethereum.hasCode(eoa).inner // renvoie false ``` ### Logging API @@ -573,17 +582,17 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +L'API `log` permet aux subgraphs d'enregistrer des informations sur la sortie standard de Graph Node ainsi que sur Graph Explorer. Les messages peuvent être enregistrés en utilisant différents niveaux de journalisation. Une syntaxe de chaîne de caractère de format de base est fournie pour composer des messages de journal à partir de l'argument. -The `log` API includes the following functions: +L'API `log` inclut les fonctions suivantes : -- `log.debug(fmt: string, args: Array): void` - logs a debug message. -- `log.info(fmt: string, args: Array): void` - logs an informational message. -- `log.warning(fmt: string, args: Array): void` - logs a warning. -- `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.debug(fmt: string, args: Array): void` - enregistre un message de débogage. +- `log.info(fmt: string, args: Array): void` - enregistre un message d'information. +- `log.warning(fmt: string, args: Array): void` - enregistre un avertissement. +- `log.error(fmt: string, args: Array): void` - enregistre un message d'erreur. +- `log.critical(fmt: string, args: Array): void` – enregistre un message critique _et_ met fin au subgraph. -The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. +L'API `log` prend une chaîne de caractères de format et un tableau de valeurs de chaîne de caractères. Elle remplace ensuite les espaces réservés par les valeurs de chaîne de caractères du tableau. Le premier espace réservé `{}` est remplacé par la première valeur du tableau, le second `{}` est remplacé par la deuxième valeur, et ainsi de suite. 
```typescript log.info('Message à afficher : {}, {}, {}', [value.toString(), anotherValue.toString(), 'déjà une chaîne']) @@ -593,7 +602,7 @@ log.info('Message à afficher : {}, {}, {}', [value.toString(), anotherValue.to ##### Enregistrer une seule valeur -In the example below, the string value "A" is passed into an array to become`['A']` before being logged: +Dans l'exemple ci-dessous, la valeur de chaîne de caractères "A" est passée dans un tableau pour devenir `['A']` avant d'être enregistrée: ```typescript let myValue = 'A' @@ -619,7 +628,7 @@ export function handleSomeEvent(event: SomeEvent): void { #### Journalisation de plusieurs entrées d'un tableau existant -Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. +Chaque entrée dans le tableau des arguments nécessite son propre espace réservé `{}` dans la chaîne de caractères du message de log. L'exemple ci-dessous contient trois espaces réservés `{}` dans le message de log. À cause de cela, toutes les trois valeurs dans `myArray` sont enregistrées. ```typescript let myArray = ['A', 'B', 'C'] @@ -663,7 +672,7 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +Les contrats intelligents ancrent occasionnellement des fichiers IPFS sur la blockchain. Cela permet aux mappages d'obtenir les hashs IPFS du contrat et de lire les fichiers correspondants à partir d'IPFS. Les données du fichier seront retournées sous forme de `Bytes`, ce qui nécessite généralement un traitement supplémentaire, par exemple avec l'API `json` documentée plus loin sur cette page. Étant donné un hachage ou un chemin IPFS, la lecture d'un fichier depuis IPFS se fait comme suit : @@ -678,9 +687,9 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. +**Remarque:** `ipfs.cat` n'est pas déterministe pour le moment. Si le fichier ne peut pas être récupéré sur le réseau IPFS avant l'expiration de la demande, il retournera `null`. Pour cette raison, il est toujours utile de vérifier le résultat pour `null`. -It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: +Il est également possible de traiter des fichiers plus volumineux en streaming avec `ipfs.map`. La fonction s'attend à recevoir un hash ou à un chemin pour un fichier IPFS, le nom d'un callback, et des indicateurs pour modifier son comportement : ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -710,9 +719,9 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -The only flag currently supported is `json`, which must be passed to `ipfs.map`. 
With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. +Le seul indicateur actuellement pris en charge est `json`, qui doit être passé à `ipfs.map`. Avec l'indicateur `json` , le fichier IPFS doit consister en une série de valeurs JSON, une valeur par ligne. L'appel à `ipfs.map` lira chaque ligne du fichier, la désérialisera en un `JSONValue` et appellera le callback pour chacune d'entre elles. Le callback peut alors utiliser des opérations des entités pour stocker des données à partir du `JSONValue`. Les modifications d'entité ne sont enregistrées que lorsque le gestionnaire qui a appelé `ipfs.map` se termine avec succès ; en attendant, elles sont conservées en mémoire, et la taille du fichier que `ipfs.map` peut traiter est donc limitée. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +En cas de succès, `ipfs.map` renvoie `void`. Si une invocation du callback provoque une erreur, le gestionnaire qui a invoqué `ipfs.map` est interrompu et le subgraph marqué comme échoué. ### Crypto API @@ -720,7 +729,7 @@ On success, `ipfs.map` returns `void`. If any invocation of the callback causes import { crypto } from '@graphprotocol/graph-ts' ``` -The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: +L'API `crypto` rend des fonctions cryptographiques disponibles pour une utilisation dans les mappages. Actuellement, il n'y en a qu'une seule : - `crypto.keccak256(input: ByteArray): ByteArray` @@ -730,14 +739,14 @@ The `crypto` API makes a cryptographic functions available for use in mappings. import { json, JSONValueKind } from '@graphprotocol/graph-ts' ``` -JSON data can be parsed using the `json` API: +Les données JSON peuvent être analysées en utilisant l'API `json`: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence -- `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromBytes(data: Bytes): JSONValue` – analyse les données JSON à partir d'un tableau `Bytes` interprété comme une séquence UTF-8 valide +- `json.try_fromBytes(data: Bytes): Result` – version sécurisée de `json.fromBytes`, elle renvoie une variante d'erreur si l'analyse échoue +- `json.fromString(data: string): JSONValue` – analyse les données JSON à partir d'une `String` UTF-8 valide +- `json.try_fromString(data: string): Result` – version sécurisée de `json.fromString`, elle renvoie une variante d'erreur si l'analyse échoue -The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. 
Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: +La classe `JSONValue` fournit un moyen d'extraire des valeurs d'un document JSON arbitraire. Étant donné que les valeurs JSON peuvent être des booléens, des nombres, des tableaux et plus encore, `JSONValue` est accompagné d'une propriété `kind` pour vérifier le type d'une valeur : ```typescript let value = json.fromBytes(...) @@ -746,45 +755,45 @@ if (value.kind == JSONValueKind.BOOL) { } ``` -In addition, there is a method to check if the value is `null`: +De plus, il existe une méthode pour vérifier si la valeur est `null`: - `value.isNull(): boolean` -When the type of a value is certain, it can be converted to a [built-in type](#built-in-types) using one of the following methods: +Lorsque le type d'une valeur est certain, il peut être converti en un [type intégré](#built-in-types) n utilisant l'une des méthodes suivantes : - `value.toBool(): boolean` - `value.toI64(): i64` - `value.toF64(): f64` - `value.toBigInt(): BigInt` - `value.toString(): string` -- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) +- `value.toArray(): Array` - (et ensuite convertir `JSONValue` avec l'une des 5 méthodes ci-dessus) ### Référence des conversions de types -| Source(s) | Destination | Conversion function | +| Source(s) | Destination | Fonctions de conversion | | -------------------- | -------------------- | ---------------------------- | -| Address | Bytes | none | +| Address | Bytes | aucune | | Address | String | s.toHexString() | | BigDecimal | String | s.toString() | | BigInt | BigDecimal | s.toBigDecimal() | | BigInt | String (hexadecimal) | s.toHexString() or s.toHex() | | BigInt | String (unicode) | s.toString() | | BigInt | i32 | s.toI32() | -| Boolean | Boolean | none | -| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | +| Boolean | Boolean | aucune | +| Bytes (signé) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (non signé) | BigInt | BigInt.fromUnsignedBytes(s) | | Bytes | String (hexadecimal) | s.toHexString() or s.toHex() | | Bytes | String (unicode) | s.toString() | | Bytes | String (base58) | s.toBase58() | | Bytes | i32 | s.toI32() | | Bytes | u32 | s.toU32() | | Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | none | -| int32 | i32 | none | +| int8 | i32 | aucune | +| int32 | i32 | aucune | | int32 | BigInt | BigInt.fromI32(s) | -| uint24 | i32 | none | -| int64 - int256 | BigInt | none | -| uint32 - uint256 | BigInt | none | +| uint24 | i32 | aucune | +| int64 - int256 | BigInt | aucune | +| uint32 - uint256 | BigInt | aucune | | JSON | boolean | s.toBool() | | JSON | i64 | s.toI64() | | JSON | u64 | s.toU64() | @@ -802,7 +811,7 @@ When the type of a value is certain, it can be converted to a [built-in type](#b ### Métadonnées de la source de données -You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: +Vous pouvez inspecter l'adresse du contrat, le réseau et le contexte de la source de données qui a invoqué le gestionnaire grâce un namespace `dataSource` : - `dataSource.address(): Address` - `dataSource.network(): string` @@ -810,7 +819,7 @@ You can inspect the contract address, network and context of the data source tha ### Entité et DataSourceContext -The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get 
fields: +La classe de base `Entity` et la classe enfant `DataSourceContext` disposent d'assistants pour définir et récupérer dynamiquement des champs : - `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` @@ -827,9 +836,9 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +La section `context` de `dataSources` vous permet de définir des paires clé-valeur qui sont accessibles dans vos mappages de subgraphs. Les types disponibles sont `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, et `BigInt`. -Here is a YAML example illustrating the usage of various types in the `context` section: +Voici un exemple YAML illustrant l'utilisation de différents types dans la section `context` : ```yaml dataSources: @@ -869,13 +878,13 @@ dataSources: data: '1000000000000000000000000' ``` -- `Bool`: Specifies a Boolean value (`true` or `false`). -- `String`: Specifies a String value. -- `Int`: Specifies a 32-bit integer. -- `Int8`: Specifies an 8-bit integer. -- `BigDecimal`: Specifies a decimal number. Must be quoted. -- `Bytes`: Specifies a hexadecimal string. -- `List`: Specifies a list of items. Each item needs to specify its type and data. -- `BigInt`: Specifies a large integer value. Must be quoted due to its large size. +- `Bool` : Spécifie une valeur booléenne (`true` ou `false`). +- `String` : Spécifie une valeur de type chaîne de caractères. +- `Int` : Spécifie un nombre entier de 32 bits. +- `Int8` : Spécifie un entier de 8 bits. +- `BigDecimal` : Spécifie un nombre décimal. Doit être entre mis guillemets. +- `Bytes` : Spécifie une chaîne de caractères hexadécimale. +- `List` : Spécifie une liste d'éléments. Chaque élément doit spécifier son type et ses données. +- `BigInt` : Spécifie une grande valeur entière. Elle doit être mise entre guillemets en raison de sa grande taille. Ce contexte est ensuite accessible dans vos fichiers de mappage de subgraphs, permettant des subgraphs plus dynamiques et configurables. diff --git a/website/pages/fr/developing/graph-ts/common-issues.mdx b/website/pages/fr/developing/graph-ts/common-issues.mdx index 5b99efa8f493..b50b0404002a 100644 --- a/website/pages/fr/developing/graph-ts/common-issues.mdx +++ b/website/pages/fr/developing/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +Il existe certains problèmes courants avec [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) lors du développement de subgraph. Ces problèmes varient en termes de difficulté de débogage, mais les connaître peut être utile. Voici une liste non exhaustive de ces problèmes : -- `Private` class variables are not enforced in [AssembyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. -- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. 
variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
+- Les variables de classe `Private` ne sont pas appliquées dans [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). Il n'y a aucun moyen de protéger les variables de classe d'une modification directe à partir de l'objet de la classe.
+- La portée n'est pas héritée dans les [fonctions de fermeture](https://www.assemblyscript.org/status.html#on-closures), c'est-à-dire que les variables déclarées en dehors des fonctions de fermeture ne peuvent pas être utilisées. Explication dans les [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
diff --git a/website/pages/fr/developing/substreams-powered-subgraphs-faq.mdx b/website/pages/fr/developing/substreams-powered-subgraphs-faq.mdx
index 67ef80c54b92..b5510cf7441c 100644
--- a/website/pages/fr/developing/substreams-powered-subgraphs-faq.mdx
+++ b/website/pages/fr/developing/substreams-powered-subgraphs-faq.mdx
@@ -4,7 +4,7 @@ title: FAQ sur les subgraphs alimentés par les sous-flux

 ## Que sont les sous-flux ?

-Developed by [StreamingFast](https://www.streamingfast.io/), Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. Substreams allow you to refine and shape blockchain data for fast and seamless digestion by end-user applications. More specifically, Substreams is a blockchain-agnostic, parallelized, and streaming-first engine, serving as a blockchain data transformation layer. Powered by the [Firehose](https://firehose.streamingfast.io/), it enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) their data anywhere.
+Développé par [StreamingFast](https://www.streamingfast.io/), Substreams est un moteur de traitement extrêmement puissant capable de consommer des flux riches de données blockchain. Substreams vous permet d'affiner et de structurer les données blockchain pour une digestion rapide et fluide par les applications des utilisateurs finaux. Plus précisément, Substreams est un moteur agnostique à la blockchain, parallélisé et axé sur le streaming, servant de couche de transformation des données blockchain. Propulsé par [Firehose](https://firehose.streamingfast.io/), il permet aux développeurs d'écrire des modules Rust, de s'appuyer sur des modules communautaires, de fournir une indexation très performante, et de [diriger (sink)](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) leurs données vers n'importe quelle destination.

 Rendez-vous sur le site [Substreams Documentation](/substreams) pour en savoir plus sur Substreams.

@@ -22,7 +22,7 @@ En revanche, les subgraphs alimentés par des flux secondaires disposent d'une s

 ## Quels sont les avantages de l'utilisation de subgraphs alimentés par des courants descendants ?

-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. 
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://substreams.streamingfast.io/documentation/develop/manifest-modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Les subgraphs alimentés par Substreams combinent tous les avantages de Substreams avec la capacité d'interrogation des subgraphs. Ils apportent une plus grande composabilité (la capacité des composants d'un système à être modifiés et recombinés en une autre structure afin de répondre à des besoins précis) et une indexation haute performance à The Graph. Ils permettent également de nouveaux cas d'utilisation de données ; par exemple, une fois que vous avez construit votre subgraph alimenté par Substreams, vous pouvez réutiliser vos [modules Substreams](https://substreams.streamingfast.io/documentation/develop/manifest-modules) pour produire des sorties vers différents [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) tels que PostgreSQL, MongoDB et Kafka.

 ## Quels sont les avantages de Substreams ?

@@ -66,13 +66,13 @@ La [documentation Substreams](/substreams) vous apprendra à construire des modu

 La [documentation sur les subgraphs alimentés par des flux partiels](/cookbook/substreams-powered-subgraphs/) vous montrera comment les emballer pour les déployer sur The Graph.

-The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+Le [dernier outil Substreams Codegen](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) vous permettra de lancer un projet Substreams sans aucun code.

 ## Quel est le rôle des modules Rust dans Substreams ?

 Les modules Rust sont l'équivalent des mappeurs AssemblyScript dans les subgraphs. Ils sont compilés dans WASM de la même manière, mais le modèle de programmation permet une exécution parallèle. Ils définissent le type de transformations et d'agrégations que vous souhaitez appliquer aux données brutes de la blockchain.

-See [modules documentation](https://substreams.streamingfast.io/documentation/develop/manifest-modules) for details.
+Consultez [la documentation des modules](https://substreams.streamingfast.io/documentation/develop/manifest-modules) pour plus de détails.

 ## Qu'est-ce qui rend Substreams composable ?

diff --git a/website/pages/fr/developing/supported-networks.json b/website/pages/fr/developing/supported-networks.json
index 3cef93eaa809..dd6bbee71dc6 100644
--- a/website/pages/fr/developing/supported-networks.json
+++ b/website/pages/fr/developing/supported-networks.json
@@ -5,5 +5,5 @@
   "hostedService": "Service hébergé",
   "subgraphStudio": "Subgraph Studio",
   "decentralizedNetwork": "Réseau décentralisé",
-  "integrationType": "Integration Type"
+  "integrationType": "Type d'intégration"
 }
diff --git a/website/pages/fr/developing/supported-networks.mdx b/website/pages/fr/developing/supported-networks.mdx
index b45431b63a2f..dc5f6841f6e3 100644
--- a/website/pages/fr/developing/supported-networks.mdx
+++ b/website/pages/fr/developing/supported-networks.mdx
@@ -9,16 +9,16 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename)

-\* Baseline network support provided by the [upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/). 
-\*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). +\* Support réseau de base fourni par l' [upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/). +\*\* Intégration avec Graph Node : `evm`, `near`, `cosmos`, `osmosis` et `ar` sont nativement pris en charge dans Graph Node. Les blockchains compatibles avec Firehose et Substreams peuvent tirer parti de l'intégration généralisée des [subgraphs alimentés par Substreams](/cookbook/substreams-powered-subgraphs) (ceci inclut les réseaux `evm` et `near` ). ⁠ Prend en charge le déploiement des [subgraphs alimentés par Substreams](/cookbook/substreams-powered-subgraphs). -- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- Subgraph Studio repose sur la stabilité et la fiabilité des technologies sous-jacentes, comme les endpoints JSON-RPC, Firehose et Substreams. +- Les subgraphs indexant Gnosis Chain peuvent désormais être déployés avec l'identifiant de réseau `gnosis`. +- Si un subgraph a été publié via la CLI et repris par un Indexer, il pourrait techniquement être interrogé même sans support, et des efforts sont en cours pour simplifier davantage l'intégration de nouveaux réseaux. - Pour une liste complète des fonctionnalités prises en charge par le réseau décentralisé, voir [cette page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). -## Running Graph Node locally +## Exécution de Graph Node en local -If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. +Si votre réseau préféré n'est pas pris en charge sur le réseau décentralisé de The Graph, vous pouvez exécuter votre propre [Graph Node](https://github.com/graphprotocol/graph-node) pour indexer n'importe quel réseau compatible EVM. Assurez-vous que la [version](https://github.com/graphprotocol/graph-node/releases) que vous utilisez prend en charge le réseau et que vous avez la configuration nécessaire. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node peut également indexer d'autres protocoles via une intégration Firehose. Des intégrations Firehose ont été créées pour NEAR, Arweave et les réseaux basés sur Cosmos. 
De plus, Graph Node peut prendre en charge les subgraphs alimentés par Substreams pour tout réseau prenant en charge Substreams. diff --git a/website/pages/fr/developing/unit-testing-framework.mdx b/website/pages/fr/developing/unit-testing-framework.mdx index efd4a4ae780d..b3c5e2dde822 100644 --- a/website/pages/fr/developing/unit-testing-framework.mdx +++ b/website/pages/fr/developing/unit-testing-framework.mdx @@ -2,23 +2,32 @@ title: Cadre pour les tests unitaires --- -Matchstick est un cadre de test unitaire, développé par [LimeChain](https://limechain.tech/), qui permet aux développeurs de subgraphs de tester leur logique de cartographie dans un environnement de type bac à sable et de déployer leurs subgraphs en toute confiance ! +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and sucessfully deploy their subgraphs. + +## Benefits of Using Matchstick + +- It's written in Rust and optimized for high performance. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. ## Démarrage -### Installer les dépendances +### Install Dependencies -Pour utiliser les méthodes d'assistance aux tests et exécuter les tests, vous devrez installer les dépendances suivantes : +In order to use the test helper methods and run tests, you need to install the following dependencies: ```sh yarn add --dev matchstick-as ``` -❗ `graph-node` dépend de PostgreSQL, donc si vous ne l'avez pas déjà, vous devrez l'installer. Nous vous conseillons vivement d'utiliser les commandes ci-dessous, car l'ajouter d'une autre manière peut provoquer des erreurs inattendues ! +### Install PostgreSQL + +`graph-node` depends on PostgreSQL, so if you don't already have it, then you will need to install it. -#### Le MacOS +> Note: It's highly recommended to use the commands below to avoid unexpected errors. -Commande d'installation Postgres : +#### Using MacOS + +Installation command: ```sh brew install postgresql @@ -30,15 +39,15 @@ Créez un lien symbolique vers la dernière libpq.5.lib _Vous devrez peut-être ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib ``` -#### Linux +#### Using Linux -Commande d'installation de Postgres (dépend de votre distribution) : +Installation command (depends on your distro): ```sh sudo apt installer postgresql ``` -### WSL (Système Windows pour Linux) +### Using WSL (Windows Subsystem for Linux) Vous pouvez utiliser Matchstick sur WSL en utilisant à la fois l'approche Docker et l'approche binaire. Comme WSL peut être un peu délicat, voici quelques conseils au cas où vous rencontreriez des problèmes tels que @@ -76,7 +85,7 @@ Et en conclussion, n'utilisez pas `graph test` (qui utilise votre installation g } ``` -### Usage +### Using Matchstick Pour utiliser **Matchstick** dans votre projet de subgraph, il suffit d'ouvrir un terminal, de naviguer vers le dossier racine de votre projet et d'exécuter simplement `graph test [options] ` - il télécharge le dernier binaire **Matchstick** et exécute le test spécifié ou tous les tests dans un dossier de test (ou tous les tests existants si aucun datasource flag n'est spécifié). 
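Pour situer ce que `graph test` exécute, voici une esquisse minimale et hypothétique de test Matchstick : l'entité `Gravatar`, son champ `displayName` et le chemin `../generated/schema` sont des suppositions (inspirées du repo Demo Subgraph) à adapter à votre propre subgraph.

```typescript
import { assert, describe, test, clearStore, afterEach } from 'matchstick-as/assembly/index'
// Entité supposée générée par `graph codegen` à partir de votre schéma (hypothétique).
import { Gravatar } from '../generated/schema'

describe('Entité Gravatar', () => {
  afterEach(() => {
    // Vide le magasin simulé entre les tests pour garder chaque cas isolé.
    clearStore()
  })

  test('sauvegarde et relit une entité', () => {
    const gravatar = new Gravatar('0x123')
    gravatar.displayName = 'Alice'
    gravatar.save()

    // Vérifie que l'entité est bien présente dans le magasin simulé.
    assert.entityCount('Gravatar', 1)
    assert.fieldEquals('Gravatar', '0x123', 'displayName', 'Alice')
  })
})
```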
@@ -116,7 +125,7 @@ graph test path/to/file.test.ts

À partir de `graph-cli 0.25.2`, la commande `graph test` prend en charge l'exécution de `matchstick` dans un conteneur Docker avec le drapeau `-d`. L'implémentation de Docker utilise bind mount afin de ne pas avoir à reconstruire l'image Docker à chaque fois que la commande `graph test -d` est exécutée. Vous pouvez également suivre les instructions du référentiel [matchstick](https://github.com/LimeChain/matchstick#docker-) pour exécuter Docker manuellement.

-❗ `graph test -d` forces `docker run` to run with flag `-t`. This must be removed to run inside non-interactive environments (like GitHub CI).
+❗ `graph test -d` force `docker run` à s'exécuter avec le paramètre `-t`. Ce paramètre doit être supprimé pour exécuter les tests dans des environnements non interactifs (comme GitHub CI).

❗ En cas d'exécution préalable de `graph test`, vous risquez de rencontrer l'erreur suivante lors de la construction de docker :

@@ -144,9 +153,9 @@ Vous pouvez tester et jouer avec les exemples de ce guide en clonant le repo [De

Vous pouvez également consulter la série de vidéos sur [« comment utiliser Matchstick pour écrire des tests unitaires pour vos subgraphs »](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)

-## Tests structure
+## Structure des tests

-_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_
+_**IMPORTANT : La structure de test décrite ci-dessous dépend de `matchstick-as` version >=0.5.0**_

### décrivez()

@@ -524,33 +533,33 @@ assertNotNull(value: T)
 entityCount(entityType: string, expectedCount: i32)
 ```

-As of version 0.6.0, asserts support custom error messages as well
+À partir de la version 0.6.0, les assertions supportent également les messages d'erreur personnalisés

 ```typescript
-assert.fieldEquals('Gravatar', '0x123', 'id', '0x123', 'Id should be 0x123')
-assert.equals(ethereum.Value.fromI32(1), ethereum.Value.fromI32(1), 'Value should equal 1')
-assert.notInStore('Gravatar', '0x124', 'Gravatar should not be in store')
-assert.addressEquals(Address.zero(), Address.zero(), 'Address should be zero')
-assert.bytesEquals(Bytes.fromUTF8('0x123'), Bytes.fromUTF8('0x123'), 'Bytes should be equal')
-assert.i32Equals(2, 2, 'I32 should equal 2')
-assert.bigIntEquals(BigInt.fromI32(1), BigInt.fromI32(1), 'BigInt should equal 1')
-assert.booleanEquals(true, true, 'Boolean should be true')
-assert.stringEquals('1', '1', 'String should equal 1')
-assert.arrayEquals([ethereum.Value.fromI32(1)], [ethereum.Value.fromI32(1)], 'Arrays should be equal')
+assert.fieldEquals('Gravatar', '0x123', 'id', '0x123', "L'Id doit être 0x123")
+assert.equals(ethereum.Value.fromI32(1), ethereum.Value.fromI32(1), 'La valeur doit être égale à 1')
+assert.notInStore('Gravatar', '0x124', 'Gravatar ne doit pas être dans le magasin')
+assert.addressEquals(Address.zero(), Address.zero(), "L'adresse doit être zéro")
+assert.bytesEquals(Bytes.fromUTF8('0x123'), Bytes.fromUTF8('0x123'), 'Les Bytes doivent être égaux')
+assert.i32Equals(2, 2, 'I32 doit être égal à 2')
+assert.bigIntEquals(BigInt.fromI32(1), BigInt.fromI32(1), 'BigInt doit être égal à 1')
+assert.booleanEquals(true, true, 'Le booléen doit être vrai')
+assert.stringEquals('1', '1', 'La chaîne de caractères doit être égale à 1')
+assert.arrayEquals([ethereum.Value.fromI32(1)], [ethereum.Value.fromI32(1)], 'Les tableaux doivent être égaux')
 assert.tupleEquals(
   changetype([ethereum.Value.fromI32(1)]),
   changetype([ethereum.Value.fromI32(1)]),
-  'Tuples should be equal',
+  'Les tuples doivent être égaux',
 )
-assert.assertTrue(true, 'Should be true')
-assert.assertNull(null, 'Should be null')
-assert.assertNotNull('not null', 'Should be not null')
-assert.entityCount('Gravatar', 1, 'There should be 2 gravatars')
-assert.dataSourceCount('GraphTokenLockWallet', 1, 'GraphTokenLockWallet template should have one data source')
+assert.assertTrue(true, 'Doit être vrai')
+assert.assertNull(null, 'Doit être nul')
+assert.assertNotNull('pas nul', 'Ne doit pas être nul')
+assert.entityCount('Gravatar', 1, 'Il devrait y avoir 2 gravatars')
+assert.dataSourceCount('GraphTokenLockWallet', 1, 'Le modèle (template) GraphTokenLockWallet doit avoir une source de données')
 assert.dataSourceExists(
   'GraphTokenLockWallet',
   Address.zero().toHexString(),
-  'GraphTokenLockWallet should have a data source for zero address',
+  "GraphTokenLockWallet doit avoir une source de données pour l'adresse zéro",
 )
 ```

@@ -877,7 +886,7 @@ Les utilisateurs peuvent affirmer qu'une entité n'existe pas dans le magasin. L

 assert.notInStore('Gravatar', '23')
 ```

-### Printing the whole store, or single entities from it (for debug purposes)
+### Affichage de tout le magasin ou d'entités individuelles (à des fins de débogage)

 Vous pouvez imprimer l'intégralité du magasin sur la console à l'aide de cette fonction d'assistance:

@@ -887,7 +896,7 @@ import { logStore } from 'matchstick-as/assembly/store'

 logStore()
 ```

-As of version 0.6.0, `logStore` no longer prints derived fields, instead users can use the new `logEntity` function. Of course `logEntity` can be used to print any entity, not just ones that have derived fields. `logEntity` takes the entity type, entity id and a `showRelated` flag to indicate if users want to print the related derived entities.
+À partir de la version 0.6.0, `logStore` n'affiche plus les champs dérivés ; les utilisateurs peuvent à la place utiliser la nouvelle fonction `logEntity`. Bien sûr, `logEntity` peut être utilisé pour afficher n'importe quelle entité, pas seulement celles qui ont des champs dérivés. `logEntity` prend le type d'entité, l'Id de l'entité et un paramètre `showRelated` pour indiquer si les utilisateurs veulent afficher les entités dérivées liées.

 ```
 import { logEntity } from 'matchstick-as/assembly/store'
@@ -949,16 +958,16 @@ La journalisation des erreurs critiques arrêtera l’exécution des tests et fe

 ### Tests dérivés

-Testing derived fields is a feature which allows users to set a field on a certain entity and have another entity be updated automatically if it derives one of its fields from the first entity.
+Tester les champs dérivés est une fonctionnalité qui permet aux utilisateurs de définir un champ sur une certaine entité et de faire en sorte qu'une autre entité soit automatiquement mise à jour si elle dérive l'un de ses champs de la première entité.

-Before version `0.6.0` it was possible to get the derived entities by accessing them as entity fields/properties, like so:
+Avant la version `0.6.0`, il était possible d'obtenir les entités dérivées en y accédant comme à des champs/propriétés de l'entité, comme ceci :

 ```typescript
 let entity = ExampleEntity.load('id')
 let derivedEntity = entity.derived_entity
 ```

-As of version `0.6.0`, this is done by using the `loadRelated` function of graph-node, the derived entities can be accessed the same way as in the handlers. 
+À partir de la version `0.6.0`, cela se fait en utilisant la fonction `loadRelated` de graph-node, les entités dérivées peuvent être accessibles de la même manière que dans les gestionnaires. ```typescript test('Derived fields example test', () => { @@ -1000,9 +1009,9 @@ test('Derived fields example test', () => { }) ``` -### Testing `loadInBlock` +### Test de `loadInBlock` -As of version `0.6.0`, users can test `loadInBlock` by using the `mockInBlockStore`, it allows mocking entities in the block cache. +Depuis la version `0.6.0`, les utilisateurs peuvent tester `loadInBlock` en utilisant `mockInBlockStore`, ce qui permet de simuler des entités dans le cache du bloc. ```typescript import { afterAll, beforeAll, describe, mockInBlockStore, test } from 'matchstick-as' @@ -1017,12 +1026,12 @@ describe('loadInBlock', () => { clearInBlockStore() }) - test('Can use entity.loadInBlock() to retrieve entity from cache store in the current block', () => { + test('On peut utiliser entity.loadInBlock() pour récupérer l'entité dans le cache du bloc actuel', () => { let retrievedGravatar = Gravatar.loadInBlock('gravatarId0') assert.stringEquals('gravatarId0', retrievedGravatar!.get('id')!.toString()) }) - test("Returns null when calling entity.loadInBlock() if an entity doesn't exist in the current block", () => { + test("Renvoie null lors de l'appel de entity.loadInBlock() si une entité n'existe pas dans le bloc actuel", () => { let retrievedGravatar = Gravatar.loadInBlock('IDoNotExist') assert.assertNull(retrievedGravatar) }) @@ -1086,46 +1095,39 @@ test('Exemple moqueur simple de source de données', () => { Notez que dataSourceMock.resetValues() est appelé à la fin. C'est parce que les valeurs sont mémorisées lorsqu'elles sont modifiées et doivent être réinitialisées si vous voulez revenir aux valeurs par défaut. -### Testing dynamic data source creation +### Test de la création dynamique de sources de données -As of version `0.6.0`, it is possible to test if a new data source has been created from a template. This feature supports both ethereum/contract and file/ipfs templates. There are four functions for this: +Depuis la version `0.6.0`, il est possible de tester si une nouvelle source de données a été créée à partir d'un modèle. Cette fonctionnalité prend en charge les modèles ethereum/contrat et file/ipfs. 
Il existe quatre fonctions pour cela : -- `assert.dataSourceCount(templateName, expectedCount)` can be used to assert the expected count of data sources from the specified template -- `assert.dataSourceExists(templateName, address/ipfsHash)` asserts that a data source with the specified identifier (could be a contract address or IPFS file hash) from a specified template was created -- `logDataSources(templateName)` prints all data sources from the specified template to the console for debugging purposes -- `readFile(path)` reads a JSON file that represents an IPFS file and returns the content as Bytes +- `assert.dataSourceCount(templateName, expectedCount)` peut être utilisée pour affirmer le nombre attendu de sources de données à partir du modèle spécifié +- `assert.dataSourceExists(templateName, address/ipfsHash)` affirme qu'une source de données avec l'identifiant spécifié (qui peut être une adresse de contrat ou un hash de fichier IPFS) a été créée à partir d'un modèle spécifié +- `logDataSources(templateName)` affiche toutes les sources de données à partir du modèle spécifié dans la console à des fins de débogage +- `readFile(path)` lit un fichier JSON qui représente un fichier IPFS et retourne le contenu sous forme de Bytes -#### Testing `ethereum/contract` templates +#### Test des modèles `ethereum/contract` ```typescript test('ethereum/contract dataSource creation example', () => { - // Assert there are no dataSources created from GraphTokenLockWallet template + // affirme qu'aucune source de données n'est créée à partir du modèle GraphTokenLockWallet assert.dataSourceCount('GraphTokenLockWallet', 0) - - // Create a new GraphTokenLockWallet datasource with address 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A + // Crée une nouvelle source de données GraphTokenLockWallet avec l'adresse 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A')) - - // Assert the dataSource has been created + // affirme que la source de données a été créée assert.dataSourceCount('GraphTokenLockWallet', 1) - - // Add a second dataSource with context + // Ajoute une seconde source de données avec contexte let context = new DataSourceContext() context.set('contextVal', Value.fromI32(325)) - GraphTokenLockWallet.createWithContext(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'), context) - - // Assert there are now 2 dataSources + // Vérifie qu'il y a maintenant 2 sources de données assert.dataSourceCount('GraphTokenLockWallet', 2) - - // Assert that a dataSource with address "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" was created - // Keep in mind that `Address` type is transformed to lower case when decoded, so you have to pass the address as all lower case when asserting if it exists + // affirme qu'une source de données avec l'adresse "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" a été créée + // Gardez à l'esprit que le type `Address` est transformé en minuscules lors du décodage, vous devez donc passer l'adresse en minuscules lorsque vous affirmez son existence assert.dataSourceExists('GraphTokenLockWallet', '0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'.toLowerCase()) - logDataSources('GraphTokenLockWallet') }) ``` -##### Example `logDataSource` output +##### Exemple de sortie de `logDataSource` ```bash 🛠 { @@ -1146,19 +1148,19 @@ test('ethereum/contract dataSource creation example', () => { } } } -} ``` -#### Testing `file/ipfs` templates +#### Test des modèles `file/ipfs` -Similarly to contract dynamic data 
sources, users can test test file datas sources and their handlers +De même que les sources de données dynamiques de contrat, les utilisateurs peuvent tester les fichiers sources de données test et leurs gestionnaires -##### Example `subgraph.yaml` +##### Exemple `subgraph.yaml` ```yaml -... + +--- templates: - - kind: file/ipfs + - kind: file/ipfs name: GraphTokenLockMetadata network: mainnet mapping: @@ -1174,27 +1176,27 @@ templates: file: ./abis/GraphTokenLockWallet.json ``` -##### Example `schema.graphql` +##### Exemple de fichier `schema.graphql` ```graphql """ -Token Lock Wallets which hold locked GRT +Portefeuilles de verrouillage de jetons qui contiennent des GRT verrouillés """ type TokenLockMetadata @entity { - "The address of the token lock wallet" + "L'adresse du portefeuille de blocage des jetons" id: ID! - "Start time of the release schedule" + "Heure de début du calendrier de sortie" startTime: BigInt! - "End time of the release schedule" + "Heure de fin du calendrier de sortie"" endTime: BigInt! - "Number of periods between start time and end time" + "Nombre de périodes entre l'heure de début et l'heure de fin" periods: BigInt! - "Time when the releases start" + "Heure à laquelle commence la sortie" releaseStartTime: BigInt! } ``` -##### Example `metadata.json` +##### Exemple de fichier `metadata.json` ```json { @@ -1205,72 +1207,67 @@ type TokenLockMetadata @entity { } ``` -##### Example handler +##### Exemple de gestionnaire ```typescript export function handleMetadata(content: Bytes): void { - // dataSource.stringParams() returns the File DataSource CID - // stringParam() will be mocked in the handler test - // for more info https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files + // dataSource.stringParams() renvoie le CID du fichier de source de donnée + // stringParam() sera simulé dans le test du gestionnaire + // pour plus d'informations https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files let tokenMetadata = new TokenLockMetadata(dataSource.stringParam()) const value = json.fromBytes(content).toObject() - if (value) { const startTime = value.get('startTime') const endTime = value.get('endTime') const periods = value.get('periods') const releaseStartTime = value.get('releaseStartTime') - if (startTime && endTime && periods && releaseStartTime) { tokenMetadata.startTime = startTime.toBigInt() tokenMetadata.endTime = endTime.toBigInt() tokenMetadata.periods = periods.toBigInt() tokenMetadata.releaseStartTime = releaseStartTime.toBigInt() } - tokenMetadata.save() } } ``` -##### Example test +##### Exemple de test ```typescript import { assert, test, dataSourceMock, readFile } from 'matchstick-as' import { Address, BigInt, Bytes, DataSourceContext, ipfs, json, store, Value } from '@graphprotocol/graph-ts' - import { handleMetadata } from '../../src/token-lock-wallet' import { TokenLockMetadata } from '../../generated/schema' import { GraphTokenLockMetadata } from '../../generated/templates' -test('file/ipfs dataSource creation example', () => { - // Generate the dataSource CID from the ipfsHash + ipfs path file - // For example QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json +test('exemple de création de source de données file/ipfs', () => { + // Générer le CID de la source de données à partir du ipfsHash + chemin du fichier du ipfs + // Par exemple QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json const ipfshash = 
'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' const CID = `${ipfshash}/example.json` - // Create a new dataSource using the generated CID + // Création d'une nouvelle source de données en utilisant le CID généré GraphTokenLockMetadata.create(CID) - // Assert the dataSource has been created + // Affirmer que la source de données a été créée assert.dataSourceCount('GraphTokenLockMetadata', 1) assert.dataSourceExists('GraphTokenLockMetadata', CID) logDataSources('GraphTokenLockMetadata') - // Now we have to mock the dataSource metadata and specifically dataSource.stringParam() - // dataSource.stringParams actually uses the value of dataSource.address(), so we will mock the address using dataSourceMock from matchstick-as - // First we will reset the values and then use dataSourceMock.setAddress() to set the CID + // Maintenant, nous devons simuler les métadonnées de la source de données et plus particulièrement dataSource.stringParam() + // dataSource.stringParams utilise en fait la valeur de dataSource.address(), donc nous allons simuler l'adresse en utilisant dataSourceMock de matchstick-as + // Tout d'abord, nous allons réinitialiser les valeurs et ensuite utiliser dataSourceMock.setAddress() pour définir le CID dataSourceMock.resetValues() dataSourceMock.setAddress(CID) - // Now we need to generate the Bytes to pass to the dataSource handler - // For this case we introduced a new function readFile, that reads a local json and returns the content as Bytes - const content = readFile(`path/to/metadata.json`) + // Maintenant, nous devons générer les Bytes à passer au gestionnaire de la source de données + // Pour ce cas, nous avons introduit une nouvelle fonction readFile, qui lit un json local et renvoie le contenu sous forme de Bytes + const content = readFile('path/to/metadata.json') handleMetadata(content) - // Now we will test if a TokenLockMetadata was created + // Maintenant, nous allons tester si un TokenLockMetadata a été créé const metadata = TokenLockMetadata.load(CID) - assert.bigIntEquals(metadata!.endTime, BigInt.fromI32(1)) assert.bigIntEquals(metadata!.periods, BigInt.fromI32(1)) assert.bigIntEquals(metadata!.releaseStartTime, BigInt.fromI32(1)) @@ -1368,7 +1365,7 @@ La sortie du journal inclut la durée de l’exécution du test. Voici un exempl > Critique : impossible de créer WasmInstance à partir d'un module valide avec un contexte : importation inconnue : wasi_snapshot_preview1::fd_write n'a pas été défini -Cela signifie que vous avez utilisé `console.log` dans votre code, ce qui n'est pas pris en charge par AssemblyScript. Veuillez envisager d'utiliser l'[API Logging](/developing/graph-ts/api/#logging-api) +Ceci signifie que vous avez utilisé `console.log` dans votre code, ce qui n'est pas pris en charge par AssemblyScript. Veuillez envisager d'utiliser l'[API de journalisation](/developing/graph-ts/api/#logging-api) > ERREUR TS2554 : attendu ? arguments, mais j'ai eu ?. > @@ -1384,6 +1381,10 @@ Cela signifie que vous avez utilisé `console.log` dans votre code, ce qui n'est L'inadéquation des arguments est causée par une inadéquation entre `graph-ts` et `matchstick-as`. La meilleure façon de résoudre des problèmes comme celui-ci est de tout mettre à jour vers la dernière version publiée. +## Ressources additionnelles + +For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). 
+ ## Réaction Si vous avez des questions, des commentaires, des demandes de fonctionnalités ou si vous souhaitez simplement nous contacter, le meilleur endroit serait The Graph Discord où nous avons une chaîne dédiée à Matchstick, appelée 🔥| tests unitaires. diff --git a/website/pages/fr/glossary.mdx b/website/pages/fr/glossary.mdx index e709ff578441..ebe5e6e88668 100644 --- a/website/pages/fr/glossary.mdx +++ b/website/pages/fr/glossary.mdx @@ -10,11 +10,9 @@ title: Glossaire - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexeurs** : participants au réseau qui exécutent des nœuds d'indexation pour indexer les données des blockchains et servir des requêtes GraphQL. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Flux de revenus des indexeurs** : Les indexeurs sont récompensés en GRT avec deux composantes : les remises sur les frais de requête et les récompenses d'indexation. @@ -22,19 +20,19 @@ title: Glossaire 2. **Récompenses d'indexation** : les récompenses que les indexeurs reçoivent pour l'indexation des subgraphs. Les récompenses d'indexation sont générées par l'émission annuelle de 3 % de GRT. -- **Participation personnelle de l'indexeur** : le montant de GRT que les indexeurs mettent en jeu pour participer au réseau décentralisé. Le minimum est de 100 000 GRT et il n’y a pas de limite supérieure. +- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Mise à niveau de l'indexeur** : un indexeur temporaire conçu pour servir de solution de secours pour les requêtes de subgraphs non prises en charge par d'autres indexeurs du réseau. 
Il garantit une transition transparente pour la mise à niveau des subgraphs à partir du service hébergé en répondant facilement à leurs requêtes dès leur publication. L'indexeur de mise à niveau n'est pas compétitif par rapport aux autres indexeurs et prend en charge les chaînes qui étaient auparavant exclusives au service hébergé. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Taxe de délégation** : Une taxe de 0,5 % payée par les délégués lorsqu'ils délèguent des GRT aux indexeurs. Les GRT utilisés pour payer la taxe sont brûlés. -- **Curateurs** : participants au réseau qui identifient des subgraphs de haute qualité et les « organisent » (c'est-à-dire signalent GRT sur eux) en échange de partages de curation. Lorsque les indexeurs réclament des frais de requête sur un subgraph, 10 % sont distribués aux conservateurs de ce subgraph. Les indexeurs gagnent des récompenses d'indexation proportionnelles au signal sur un subgraph. Nous voyons une corrélation entre la quantité de GRT signalée et le nombre d'indexeurs indexant un subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Taxe de curation** : Une taxe de 1% payée par les curateurs lorsqu'ils signalent des GRT sur des subgraphs. Le GRT utilisé pour payer la taxe est brûlé. -- **Consommateur de subgraphs** : Toute application ou utilisateur qui interroge un subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Développeur de subgraphs** : un développeur qui crée et déploie un subgraph sur le réseau décentralisé de The Graph. @@ -46,15 +44,15 @@ title: Glossaire 1. **Actif** : Une allocation est considérée comme active lorsqu'elle est créée sur la chaîne. Cela s'appelle ouvrir une allocation, et indique au réseau que l'indexeur indexe et sert activement les requêtes pour un subgraph particulier. Les allocations actives accumulent des récompenses d'indexation proportionnelles au signal sur le subgraph et à la quantité de GRT allouée. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. 
When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio** : une application puissante pour créer, déployer et publier des subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. +- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. - **Récompenses d'indexation** : les récompenses que les indexeurs reçoivent pour l'indexation des subgraphs. Les récompenses d'indexation sont distribuées en GRT. @@ -62,11 +60,11 @@ title: Glossaire - .**GRT** : le jeton d'utilité du travail de The Graph, le GRT offre des incitations économiques aux participants du réseau pour leur contribution au réseau. -- **POI ou preuve d'indexation** : lorsqu'un indexeur clôture son allocation et souhaite réclamer ses récompenses d'indexation accumulées sur un subgraph donné, il doit fournir une preuve d'indexation valide et récente ( POI). Les pêcheurs peuvent contester le POI fourni par un indexeur. Un différend résolu en faveur du pêcheur entraînera la suppression de l'indexeur. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. 
Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node** : Graph Node est le composant qui indexe les subgraphs et rend les données résultantes disponibles pour interrogation via une API GraphQL. En tant que tel, il est au cœur de la pile de l’indexeur, et le bon fonctionnement de Graph Node est crucial pour exécuter un indexeur réussi. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Agent de l'indexeur** : l'agent de l'indexeur fait partie de la pile de l'indexeur. Il facilite les interactions de l'indexeur sur la chaîne, notamment l'enregistrement sur le réseau, la gestion des déploiements de subgraphs vers son ou son(ses) noed(s) de graph et la gestion des allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client** : une bibliothèque pour créer des dapps basées sur GraphQL de manière décentralisée. @@ -76,12 +74,8 @@ title: Glossaire - **Période de récupération** : le temps restant jusqu'à ce qu'un indexeur qui a modifié ses paramètres de délégation puisse le faire à nouveau. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. - -- **_Mise à niveau_ d'un subgraph vers The Graph Network** : processus de déplacement d'un subgraph du service hébergé vers The Graph Network . +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. -- **_Mise à jour_ d'un subgraph** : processus de publication d'une nouvelle version de subgraph avec des mises à jour du manifeste, du schéma ou du subgraph. cartographies. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. 
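En complément des définitions ci-dessus, voici une esquisse purement illustrative de l'arithmétique du slashing : le self-stake de 100 000 GRT est une hypothèse (c'est le minimum requis) ; seuls les pourcentages (2,5 % de slashing, répartition 50/50 entre prime et brûlage) proviennent du glossaire.

```typescript
// Esquisse illustrative : arithmétique du slashing telle que décrite dans le glossaire.
// Le montant du self-stake est hypothétique ; les pourcentages viennent du protocole.
const selfStakeGrt = 100_000 // self-stake minimal d'un Indexeur (hypothèse)
const slashingRate = 0.025 // paramètre de protocole : 2,5 % du self-stake

const slashedGrt = selfStakeGrt * slashingRate // 2 500 GRT retirés à l'Indexeur
const fishermanBounty = slashedGrt * 0.5 // 1 250 GRT versés au Fisherman
const burnedGrt = slashedGrt * 0.5 // 1 250 GRT brûlés (retirés de la circulation)

console.log(`Slashing : ${slashedGrt} GRT — prime : ${fishermanBounty} GRT, brûlés : ${burnedGrt} GRT`)
```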
diff --git a/website/pages/fr/index.json b/website/pages/fr/index.json index c94d94f541f8..dd7f5b249c72 100644 --- a/website/pages/fr/index.json +++ b/website/pages/fr/index.json @@ -1,13 +1,13 @@ { "title": "Commencer", - "intro": "Découvrez The Graph, un protocole décentralisé pour indexer et interroger les données des blockchains.", + "intro": "Découvrez The Graph, un protocole décentralisé d'indexation et d'interrogation des données provenant des blockchains.", "shortcuts": { "aboutTheGraph": { - "title": "À propos du Graph", + "title": "À propos de The Graph", "description": "En savoir plus sur The Graph" }, "quickStart": { - "title": "Début rapide", + "title": "Démarrage rapide", "description": "Lancez-vous et commencez avec The Graph" }, "developerFaqs": { @@ -15,8 +15,8 @@ "description": "Questions fréquemment posées" }, "queryFromAnApplication": { - "title": "Requête d'une application", - "description": "Apprenez à exécuter vos requêtes d'une application" + "title": "Requête depuis une application", + "description": "Apprenez à exécuter vos requêtes à partir d'une application" }, "createASubgraph": { "title": "Créer un subgraph", @@ -25,7 +25,7 @@ }, "networkRoles": { "title": "Les divers rôles du réseau", - "description": "Découvrez les rôles réseau de The Graph.", + "description": "Découvrez les divers rôles du réseau The Graph.", "roles": { "developer": { "title": "Développeur", @@ -56,16 +56,12 @@ "graphExplorer": { "title": "Graph Explorer", "description": "Explorer les subgraphs et interagir avec le protocole" - }, - "hostedService": { - "title": "Service hébergé", - "description": "Create and explore subgraphs on the hosted service" } } }, "supportedNetworks": { "title": "Réseaux pris en charge", - "description": "The Graph supports the following networks.", - "footer": "For more details, see the {0} page." + "description": "The Graph prend en charge les réseaux suivants.", + "footer": "Pour plus de détails, consultez la page {0}." } } diff --git a/website/pages/fr/managing/delete-a-subgraph.mdx b/website/pages/fr/managing/delete-a-subgraph.mdx index 68ef0a37da75..5e69052e4f4b 100644 --- a/website/pages/fr/managing/delete-a-subgraph.mdx +++ b/website/pages/fr/managing/delete-a-subgraph.mdx @@ -6,10 +6,12 @@ Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). > Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. -## Step-by-Step +## Étape par Étape 1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). + 2. Click on the three-dots to the right of the "publish" button. + 3. Click on the option to "delete this subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) @@ -24,6 +26,6 @@ Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). ### Important Reminders - Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Les curateurs ne seront plus en mesure de signaler le subgraph. +- Les Curateurs qui ont déjà signalé sur le subgraph peuvent retirer leur signal à un prix moyen par action. - Deleted subgraphs will show an error message. 
diff --git a/website/pages/fr/managing/transfer-a-subgraph.mdx b/website/pages/fr/managing/transfer-a-subgraph.mdx index c4060284d5d9..4f12b5a94032 100644 --- a/website/pages/fr/managing/transfer-a-subgraph.mdx +++ b/website/pages/fr/managing/transfer-a-subgraph.mdx @@ -1,65 +1,42 @@ --- -title: Transfer and Deprecate a Subgraph +title: Transférer un Subgraph --- -## Transferring ownership of a subgraph +Les subgraphs publiés sur le réseau décentralisé possèdent un NFT minté à l'adresse qui a publié le subgraph. Le NFT est basé sur la norme ERC721, ce qui facilite les transferts entre comptes sur The Graph Network. -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +## Rappels -**Please note the following:** +- Quiconque possède le NFT contrôle le subgraph. +- Si le propriétaire décide de vendre ou de transférer le NFT, il ne pourra plus éditer ou mettre à jour ce subgraph sur le réseau. +- Vous pouvez facilement déplacer le contrôle d'un subgraph vers un multi-sig. +- Un membre de la communauté peut créer un subgraph au nom d'une DAO. -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +## Voir votre Subgraph en tant que NFT -### View your subgraph as an NFT - -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +Pour voir votre subgraph en tant que NFT, vous pouvez visiter une marketplace NFT telle que **OpenSea**: ``` https://opensea.io/your-wallet-address ``` -Or a wallet explorer like **Rainbow.me**: +Ou un explorateur de portefeuille comme **Rainbow.me**: ``` -https://rainbow.me/your-wallet-addres +https://rainbow.me/adresse-de-votre-portefeuille ``` -### Step-by-Step - -To transfer ownership of a subgraph, do the following: - -1. Use the UI built into Subgraph Studio: - - ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) - -2. Choose the address that you would like to transfer the subgraph to: - - ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) - -Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: - -![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) - -## Deprecating a subgraph +## Étape par Étape -Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. +Pour transférer la propriété d'un subgraph, procédez comme suit : -### Step-by-Step +1. Utilisez l'interface utilisateur intégrée dans Subgraph Studio : -To deprecate your subgraph, do the following: + ![Transfert de propriété de subgraph](/img/subgraph-ownership-transfer-1.png) -1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). -2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. -3. Your subgraph will no longer appear in searches on Graph Explorer. +2. Choisissez l'adresse vers laquelle vous souhaitez transférer le subgraph : -**Please note the following:** + ![Transfert de propriété de subgraph](/img/subgraph-ownership-transfer-2.png) -- The owner's wallet should call the `deprecateSubgraph` function. 
-- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deprecated subgraphs will show an error message. +Optionnellement, vous pouvez également utiliser l'interface utilisateur intégrée dans les marketplaces NFT comme OpenSea : -> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. +![Transfert de propriété de subgraph depuis la marketplace NFT](/img/subgraph-ownership-transfer-nft-marketplace.png) diff --git a/website/pages/fr/network/benefits.mdx b/website/pages/fr/network/benefits.mdx index 30eb7202be81..d290716f01bd 100644 --- a/website/pages/fr/network/benefits.mdx +++ b/website/pages/fr/network/benefits.mdx @@ -11,7 +11,7 @@ Voici une analyse : ## Pourquoi devriez-vous utiliser le réseau Graph -- Significantly lower monthly costs +- Des coûts mensuels nettement réduits - 0 $ de frais de configuration de l'infrastructure - Disponibilité supérieure - Accès à des centaines d’indexeurs indépendants à travers le monde @@ -21,30 +21,30 @@ Voici une analyse : ### Une structure & de coûts faible et plus flexible -No contracts. No monthly fees. Only pay for the queries you use—with an average cost-per-query of $40 per million queries (~$0.00004 per query). Queries are priced in USD and paid in GRT or credit card. +Pas de contrat. Pas de frais mensuels. Vous ne payez que pour les requêtes que vous utilisez, avec un coût moyen par requête de 40 $ par million de requêtes (~0,00004 $ par requête). Les requêtes sont facturées en USD et payées en GRT ou par carte de crédit. -Query costs may vary; the quoted cost is the average at time of publication (March 2024). +Les coûts d'interrogation peuvent varier ; le coût indiqué est la moyenne au moment de la publication (mars 2024). 
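To make the pricing above concrete, here is a minimal TypeScript sketch of the cost arithmetic, assuming the quoted March 2024 average of roughly $0.00004 per query; the `monthlyQueryCost` helper and the example volumes are illustrative assumptions, not an official pricing calculator.

```typescript
// Rough illustration of the average quoted above: ~$40 per million queries,
// i.e. ~$0.00004 per query. Actual per-query prices vary.
const AVG_COST_PER_QUERY_USD = 0.00004; // assumption taken from the text above

// Hypothetical helper: estimate a monthly query bill for a given volume.
function monthlyQueryCost(queriesPerMonth: number): number {
  return queriesPerMonth * AVG_COST_PER_QUERY_USD;
}

// The three volumes used in the comparison tables that follow:
console.log(monthlyQueryCost(100_000));    // ≈ $4 — low volume, covered by the Free Plan tier
console.log(monthlyQueryCost(3_000_000));  // ≈ $120 per month — medium volume
console.log(monthlyQueryCost(30_000_000)); // ≈ $1,200 per month — high volume
```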
-## Low Volume User (less than 100,000 queries per month) +## Utilisateur à faible volume (moins de 100 000 requêtes par mois) | Cost Comparison | Auto-hébergé | The Graph Network | | :-: | :-: | :-: | | Coût mensuel du serveur\* | 350 $ au mois | 0 $ | -| Frais de requête | + 0 $ | $0 per month | +| Frais de requête | + 0 $ | 0$ par mois | | Temps d'ingénierie | 400 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | -| Requêtes au mois | Limité aux capacités infra | 100,000 (Free Plan) | -| Tarif par requête | 0 $ | $0 | +| Requêtes au mois | Limité aux capacités infra | 100 000 (Plan Gratuit) | +| Tarif par requête | 0 $ | 0$ | | Les infrastructures | Centralisée | Décentralisée | | La redondance géographique | 750$+ par nœud complémentaire | Compris | | Temps de disponibilité | Variable | + 99.9% | | Total des coûts mensuels | + 750 $ | 0 $ | -## Medium Volume User (~3M queries per month) +## Utilisateur à volume moyen (~3M requêtes par mois) | Comparaison de coût | Auto-hébergé | The Graph Network | | :-: | :-: | :-: | | Coût mensuel du serveur\* | 350 $ au mois | 0 $ | -| Frais de requête | 500 $ au mois | $120 per month | +| Frais de requête | 500 $ au mois | 120$ par mois | | Temps d'ingénierie | 800 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | | Requêtes au mois | Limité aux capacités infra | ~3,000,000 | | Tarif par requête | 0 $ | $0.00004 | @@ -52,30 +52,31 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar | Frais d'ingénierie | 200 $ au mois | Compris | | La redondance géographique | 1 200 $ coût total par nœud supplémentaire | Compris | | Temps de disponibilité | Variable | + 99.9% | -| Total des coûts mensuels | + 1650 $ | $120 | +| Total des coûts mensuels | + 1650 $ | 120$ | -## High Volume User (~30M queries per month) +## Utilisateur à volume élevé (~30M requêtes par mois) | Comparaison des coûts | Auto-hébergé | The Graph Network | | :-: | :-: | :-: | | Coût mensuel du serveur\* | 1100 $ au mois, par nœud | 0 $ | -| Frais de requête | 4000 $ | $1,200 per month | +| Frais de requête | 4000 $ | 1 200 $ par mois | | Nombre de nœuds obligatoires | 10 | Sans objet | | Temps d'ingénierie | 6000 $ ou plus au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | | Requêtes au mois | Limité aux capacités infra | ~30,000,000 | -| Tarif par requête | 0 $ | $0.00004 | +| Tarif par requête | 0 $ | 0.00004$ | | L'infrastructure | Centralisée | Décentralisée | | La redondance géographique | 1 200 $ de coûts totaux par nœud supplémentaire | Compris | | Temps de disponibilité | Variable | + 99.9% | -| Total des coûts mensuels | + 11 000 $ | $1,200 | +| Total des coûts mensuels | + 11 000 $ | 1,200$ | \*y compris les coûts de sauvegarde : $50-$ à 100 dollars au mois Temps d'ingénierie basé sur une hypothèse de 200 $ de l'heure -Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. +Reflète le coût pour le consommateur de données. Les frais de requête sont toujours payés aux Indexeurs pour +les requêtes du Plan Gratuit. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/arbitrum/arbitrum-faq) are substantially lower than Ethereum mainnet. 
+Les coûts estimés sont uniquement pour les subgraphs Ethereum Mainnet - les coûts sont encore plus élevés lorsqu'on héberge soi-même un `graph-node` sur d'autres réseaux. Certains utilisateurs peuvent avoir besoin de mettre à jour leur subgraph vers une nouvelle version. En raison des frais de gaz Ethereum, une mise à jour coûte ~50 $ au moment de la rédaction. Notez que les frais de gaz sur [Arbitrum](/arbitrum/arbitrum-faq) sont considérablement plus bas que ceux d'Ethereum mainnet. Émettre un signal sur un subgraph est un cout net, nul optionnel et unique (par exemple, 1 000 $ de signal peuvent être conservés sur un subgraph, puis retirés - avec la possibilité de gagner des revenus au cours du processus). @@ -89,4 +90,4 @@ Le réseau décentralisé du Graph permet aux utilisateurs d'accéder à une red En résumé : Le réseau de graphs est moins coûteux, plus facile à utiliser et produit des résultats supérieurs à ceux obtenus par l'exécution locale d'un `nœud de graphs`. -Commencez à utiliser le réseau The Graph dès aujourd'hui, et apprenez comment [mettre à niveau votre subgraph vers le réseau décentralisé de The Graph](/cookbook/upgrading-a-subgraph). +Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/quick-start). diff --git a/website/pages/fr/network/contracts.mdx b/website/pages/fr/network/contracts.mdx index 6abd80577ced..b9b33f2c14d7 100644 --- a/website/pages/fr/network/contracts.mdx +++ b/website/pages/fr/network/contracts.mdx @@ -1,26 +1,26 @@ --- -title: Protocol Contracts +title: Contrats du Protocole --- import { ProtocolContractsTable } from '@/src/contracts' -Below are the deployed contracts which power The Graph Network. Visit the official [contracts repository](https://github.com/graphprotocol/contracts) to learn more. +Ci-dessous, les contrats déployés qui alimentent The Graph Network. Visitez le [dépôt officiel des contrats](https://github.com/graphprotocol/contracts) pour en savoir plus. ## Arbitrum -This is the principal deployment of The Graph Network. +Il s'agit du déploiement principal de The Graph Network. -## Mainnet +## Réseau principal -This was the original deployment of The Graph Network. [Learn more](/arbitrum/arbitrum-faq) about The Graph's scaling with Arbitrum. +Il s'agissait du déploiement initial de The Graph Network. [En savoir plus](/arbitrum/arbitrum-faq) sur la mise à l'échelle de The Graph avec Arbitrum. ## Arbitrum Sepolia -This is the primary testnet for The Graph Network. Testnet is predominantly used by core developers and ecosystem participants for testing purposes. There are no guarantees of service or availability on The Graph's testnets. +Il s'agit du testnet principal pour The Graph Network. Le testnet est principalement utilisé par les développeurs principaux et les participants de l'écosystème à des fins de test. Il n'y a aucune garantie de service ou de disponibilité sur les testnets de The Graph. diff --git a/website/pages/fr/network/curating.mdx b/website/pages/fr/network/curating.mdx index f7b06fd56ac2..da503e1d6dea 100644 --- a/website/pages/fr/network/curating.mdx +++ b/website/pages/fr/network/curating.mdx @@ -2,75 +2,73 @@ title: Curation --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. 
In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Les Curateurs jouent un rôle essentiel dans l'économie décentralisée de The Graph. Ils utilisent leur connaissance de l'écosystème web3 pour évaluer et signaler les subgraphs qui devraient être indexés par The Graph Network. À travers Graph Explorer, les Curateurs consultent les données du réseau pour prendre des décisions de signalisation. En retour, The Graph Network récompense les Curateurs qui signalent des subgraphs de bonne qualité en leur reversant une partie des frais de requête générés par ces subgraphs. La quantité de GRT signalée est l'une des principales considérations des Indexeurs lorsqu'ils déterminent les subgraphs à indexer. -## What Does Signaling Mean for The Graph Network? +## Que signifie "le signalement" pour The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Avant que les consommateurs ne puissent interroger un subgraph, celui-ci doit être indexé. C'est ici que la curation entre en jeu. Afin que les Indexeurs puissent gagner des frais de requête substantiels sur des subgraphs de qualité, ils doivent savoir quels subgraphs indexer. Lorsque les Curateurs signalent un subgraph, ils indiquent aux Indexeurs qu'un subgraph est demandé et de qualité suffisante pour être indexé. -Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Les signaux des Curateurs sont représentés par des jetons ERC20 appelés Graph Curation Shares (GCS). Ceux qui veulent gagner plus de frais de requête doivent signaler leurs GRT aux subgraphs qui, selon eux, généreront un flux important de frais pour le réseau. Les Curateurs ne peuvent pas être réduits pour mauvais comportement, mais il y a une taxe de dépôt sur les Curateurs pour dissuader les mauvaises décisions pouvant nuire à l'intégrité du réseau. Les Curateurs gagneront également moins de frais de requête s'ils sélectionnent un subgraph de mauvaise qualité car il y aura moins de requêtes à traiter ou moins d'Indexeurs pour les traiter. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS).
Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Le [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) garantit l'indexation de tous les subgraphs. Signaler du GRT sur un subgraph particulier attirera plus d'indexeurs. Cette incitation d'indexeurs supplémentaires à travers la curation vise à améliorer la qualité du service pour les requêtes en réduisant la latence et en améliorant la disponibilité du réseau. -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +Lors du signalement, les Curateurs peuvent décider de signaler une version spécifique du subgraph ou de signaler en utilisant l'auto-migration. S'ils signalent en utilisant l'auto-migration, les parts d'un Curateur seront toujours mises à jour vers la dernière version publiée par le développeur. S'ils décident de signaler une version spécifique, les parts resteront toujours sur cette version spécifique. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +Si vous avez besoin d'assistance avec la curation pour améliorer la qualité du service, veuillez envoyer une demande à l'équipe Edge & Node à l'adresse support@thegraph.zendesk.com et spécifier les subgraphs pour lesquels vous avez besoin d'assistance. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Les Indexeurs peuvent trouver des subgraphs à indexer en fonction des signaux de curation qu'ils voient dans Graph Explorer (capture d'écran ci-dessous). ![Les subgraphs d'exploration](/img/explorer-subgraphs.png) ## Comment signaler -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/network/explorer) +Dans l'onglet Curateur de Graph Explorer, les Curateurs pourront signaler et dé-signaler certains subgraphs en fonction des statistiques du réseau. 
Pour un aperçu étape par étape de la procédure à suivre dans Graph Explorer, [cliquez ici.](/network/explorer) Un curateur peut choisir de signaler une version spécifique d'un subgraph ou de faire migrer automatiquement son signal vers la version de production la plus récente de ce subgraph. Ces deux stratégies sont valables et comportent leurs propres avantages et inconvénients. -La signalisation sur une version spécifique est particulièrement utile lorsqu'un subgraph est utilisé par plusieurs dApps. Un dApp peut avoir besoin de mettre à jour régulièrement le subgraph avec de nouvelles fonctionnalités. Une autre dApp pourrait préférer utiliser une version plus ancienne et bien testée du subgraph. Lors de la curation initiale, une taxe standard de 1% est encourue. +Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. La migration automatique de votre signal vers la version de production la plus récente peut s'avérer utile pour vous assurer que vous continuez à accumuler des frais de requête. Chaque fois que vous effectuez une curation, une taxe de curation de 1 % est appliquée. Vous paierez également une taxe de curation de 0,5 % à chaque migration. Les développeurs de subgraphs sont découragés de publier fréquemment de nouvelles versions - ils doivent payer une taxe de curation de 0,5 % sur toutes les parts de curation migrées automatiquement. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Remarque** : La première adresse qui signale un subgraph particulier est considérée comme le premier Curateur et devra faire un travail plus intensif en gaz que les Curateurs suivants, car le premier Curateur initialise les jetons de parts de curation et transfère également les jetons dans le proxy de The Graph. -## Withdrawing your GRT +## Retrait de vos GRT -Curators have the option to withdraw their signaled GRT at any time. +Les Curateurs ont la possibilité de retirer leur GRT signalé à tout moment. -Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). +Contrairement au processus de délégation, si vous décidez de retirer vos GRT signalés, vous n'aurez pas de délai d'attente et vous recevrez le montant total (moins la taxe de curation de 1%). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Une fois qu'un Curateur retire ses signaux, les Indexeurs peuvent choisir de continuer à indexer le subgraph, même s'il n'y a actuellement aucun GRT signalé actif. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph.
+Cependant, il est recommandé que les Curateurs laissent leur GRT signalé en place non seulement pour recevoir une partie des frais de requête, mais aussi pour assurer la fiabilité et la disponibilité du subgraph. ## Risques 1. Le marché des requêtes est intrinsèquement jeune chez The Graph et il y a un risque que votre %APY soit inférieur à vos attentes en raison de la dynamique naissante du marché. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). +2. Frais de curation - lorsqu'un Curateur signale des GRT sur un subgraph, il doit s'acquitter d'une taxe de curation de 1%. Cette taxe est brûlée. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Un subgraph peut échouer à cause d'un bug. Un subgraph qui échoue n'accumule pas de frais de requête. Par conséquent, vous devrez attendre que le développeur corrige le bogue et déploie une nouvelle version. - Si vous êtes abonné à la version la plus récente d'un subgraph, vos parts migreront automatiquement vers cette nouvelle version. Cela entraînera une taxe de curation de 0,5 %. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. + - Si vous avez signalé sur une version spécifique d'un subgraph et qu'elle échoue, vous devrez brûler manuellement vos parts de curation. Vous pouvez alors signaler sur la nouvelle version du subgraph, encourant ainsi une taxe de curation de 1%. ## FAQs sur la Curation ### 1. Quel pourcentage des frais de requête les Curateurs perçoivent-ils? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +En signalant sur un subgraph, vous gagnerez une part de tous les frais de requête générés par le subgraph. 10% de tous les frais de requête vont aux Curateurs au prorata de leurs parts de curation. Ces 10% sont soumis à la gouvernance. ### 2. Comment décider quels sont les subgraphs de haute qualité sur lesquels on peut émettre un signal ? -Trouver des subgraphs de haute qualité est une tâche complexe, mais elle peut être abordée de plusieurs manières différentes. En tant que Curateur, vous voulez rechercher des subgraphs fiables qui génèrent un volume de requêtes. 
Un subgraph fiable peut être précieux s'il est complet, précis et répond aux besoins en données d'une dApp. Un subgraph mal architecturé pourrait nécessiter d'être révisé ou republié, et peut également échouer. Il est crucial pour les Curateurs d'examiner l'architecture ou le code d'un subgraph afin d'évaluer si un subgraph est précieux. En conséquence : +Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: -- Les curateurs peuvent utiliser leur compréhension d'un réseau pour essayer de prédire comment un subgraph individuel peut générer un volume de requêtes plus ou moins élevé à l'avenir -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Les Curateurs peuvent utiliser leur compréhension d'un réseau pour essayer de prédire comment un subgraph individuel peut générer un volume de requêtes plus élevé ou plus faible à l'avenir +- Les Curateurs doivent également comprendre les métriques disponibles via Graph Explorer. Des métriques telles que le volume de requêtes passées et l'identité du développeur du subgraph peuvent aider à déterminer si un subgraph mérite ou non d'être signalé. ### 3. Quel est le coût de la mise à jour d'un subgraph ? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action that costs gas. +La migration de vos parts de curation vers une nouvelle version de subgraph entraîne une taxe de curation de 1%. Les Curateurs peuvent choisir de s'abonner à la version la plus récente d'un subgraph. Lorsque les parts de Curateur sont auto-migrées vers une nouvelle version, les Curateurs paieront également une demi-taxe de curation, soit 0,5%, car la mise à jour des subgraphs est une action on-chain qui coûte des frais de gaz. ### 4. À quelle fréquence puis-je mettre à jour mon subgraph ? @@ -78,49 +76,13 @@ Il est conseillé de ne pas mettre à jour vos subgraphs trop fréquemment. Voir ### 5. Puis-je vendre mes parts de curateurs ? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: - -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). - -### 6. Am I eligible for a curation grant? - -Curation grants are determined individually on a case-by-case basis. 
If you need assistance with curation, please send a request to support@thegraph.zendesk.com. - -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Courbe de liaison 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Prix des actions](/img/price-per-share.png) - -Par conséquent, le prix augmente de façon linéaire, ce qui signifie qu'il est de plus en plus cher d'acheter une action au fil du temps. Voici un exemple de ce que nous entendons par là, voir la courbe de liaison ci-dessous : - -![Courbe de liaison](/img/bonding-curve.png) - -Considérons que nous avons deux curateurs qui monnayent des actions pour un subgraph : +Les parts de curation ne peuvent pas être "achetées" ou "vendues" comme d'autres jetons ERC20 que vous pourriez connaître. Elles ne peuvent être que mintée (créees) ou brûlées (détruites). -- Le curateur A est le premier à signaler sur le subgraph. En ajoutant 120 000 GRT dans la courbe, il est capable de frapper 2000 parts. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Comme les deux curateurs détiennent la moitié du total des parts de curation, ils recevraient un montant égal de redevances de curateur. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- Le curateur restant recevrait alors toutes les redevances de curateur pour ce subgraph. S'il brûlait ses pièces pour retirer la GRT, il recevrait 120 000 GRT. -- **TLDR** : La valeur en GRT des parts de curation est déterminée par la courbe de liaison et peut-être volatile. Il est possible de subir de grosses pertes. Signer tôt signifie que vous investissez moins de GRT pour chaque action. 
Par extension, cela signifie que vous gagnez plus de redevances de curation par GRT que les curateurs ultérieurs pour le même subgraph. +En tant que Curateur sur Arbitrum, vous êtes assuré de récupérer les GRT que vous avez initialement déposé (moins la taxe). -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** +### 6. Suis-je éligible à une subvention de curation? -Dans le cas de The Graph, la [mise en œuvre par Bancor d'une formule de courbe de liaison](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) est exploitée. +Les subventions de curation sont déterminées individuellement au cas par cas. Si vous avez besoin d'assistance avec la curation, veuillez envoyer une demande à l'adresse support@thegraph.zendesk.com. Vous ne savez toujours pas où vous en êtes ? Regardez notre guide vidéo sur la curation ci-dessous : diff --git a/website/pages/fr/network/delegating.mdx b/website/pages/fr/network/delegating.mdx index ba31be5aa31f..a67c4240c3f9 100644 --- a/website/pages/fr/network/delegating.mdx +++ b/website/pages/fr/network/delegating.mdx @@ -2,15 +2,25 @@ title: Délégation --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Les Délégateurs sont des participants au réseau qui délèguent (c'est-à-dire "stakent") des GRT à un ou plusieurs Indexeurs. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- Ils aident à sécuriser le réseau sans exécuter eux-mêmes un Graph Node. + +- Ils gagnent une partie des frais de requête et des récompenses d'un Indexeur en lui déléguant des GRT. + +## Comment ça marche? + +Le nombre de requêtes qu'un Indexeur peut traiter dépend de son propre stake, **le stake déléguée**, et le prix que l'Indexeur facture pour chaque requête. Par conséquent, plus la participation allouée à un indexeur est élevée, plus un indexeur peut traiter de requêtes potentielles. ## Guide du délégué -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Apprenez comment être un délégateur efficace dans The Graph Network. + +Les Délégateurs partagent les gains du protocole aux côtés de tous les indexeurs en fonction de leur participation déléguée. Par conséquent, ils doivent faire preuve de discernement pour choisir les indexeurs en fonction de plusieurs facteurs. + +> Veuillez noter que ce guide ne couvre pas des étapes telles que la configuration proprement dite de MetaMask. La communauté Ethereum fournit une ressource complète concernant les portefeuilles via le lien suivant ([source](https://ethereum.org/en/wallets/)). 
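Looking back at the curation section above, its fee mechanics (the 1% curation tax, the 0.5% tax on auto-migrated shares, and the 10% of query fees shared pro-rata among Curators) can be sketched in TypeScript as follows. The `netSignal` and `curatorQueryFees` helpers and the example amounts are hypothetical, included only as a back-of-the-envelope aid rather than the protocol's exact accounting.

```typescript
// Illustrative constants taken from the curation text above.
const CURATION_TAX = 0.01;      // 1% tax, burned when signaling GRT on a subgraph
const AUTO_MIGRATE_TAX = 0.005; // extra 0.5% tax when shares auto-migrate to a new version
const CURATOR_FEE_SHARE = 0.1;  // 10% of a subgraph's query fees go to Curators, pro-rata

// Hypothetical helper: GRT effectively signaled after the curation tax is burned.
function netSignal(grtDeposited: number): number {
  return grtDeposited * (1 - CURATION_TAX);
}

// Hypothetical helper: one Curator's cut of a subgraph's query fees,
// proportional to their share of that subgraph's curation shares.
function curatorQueryFees(totalFeesGRT: number, myShares: number, totalShares: number): number {
  return totalFeesGRT * CURATOR_FEE_SHARE * (myShares / totalShares);
}

console.log(netSignal(1_000));                     // 990 GRT effectively signaled
console.log(1_000 * AUTO_MIGRATE_TAX);             // 5 GRT paid if 1,000 GRT of shares auto-migrate
console.log(curatorQueryFees(10_000, 250, 1_000)); // 250 GRT out of a 10,000 GRT fee pool
```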
-There are three sections in this guide: +Ce guide comporte trois sections: - Les risques de la délégation de jetons dans The Graph Network - Comment calculer les rendements escomptés en tant que délégué @@ -24,15 +34,19 @@ Les principaux risques liés à la fonction de délégué dans le protocole sont Les délégués ne peuvent pas être licenciés en cas de mauvais comportement, mais ils sont soumis à une taxe visant à décourager les mauvaises décisions susceptibles de nuire à l'intégrité du réseau. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +En tant que Délégateur, il est important de comprendre ce qui suit : -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- Vous serez facturé 0,5% chaque fois que vous déléguez. Cela signifie que si vous déléguez 1 000 GRT, vous brûlerez automatiquement 5 GRT. + +- Pour être en sécurité, vous devriez calculer votre retour potentiel lorsque vous déléguez à un Indexeur. Par exemple, vous pourriez calculer combien de jours il faudra avant d'avoir récupéré la taxe de 0,5% sur votre délégation. ### La période de découplage de la délégation -Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. +Chaque fois qu'un Délégateur veut se désengager, ses jetons sont soumis à une période d'attente de 28 jours. Cela signifie qu'ils ne peuvent pas transférer leurs jetons ou gagner des récompenses pendant 28 jours. + +### Pourquoi ceci est-il important ? -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +Si vous choisissez un Indexeur qui n'est pas fiable ou qui ne fait pas du bon travail, vous voudrez vous désengager. Cela signifie que vous perdrez beaucoup d'opportunités de gagner des récompenses, ce qui peut être aussi mauvais que de brûler des GRT. Par conséquent, il est recommandé de choisir judicieusement un Indexeur.
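The break-even reasoning suggested above — how many days until rewards cover the 0.5% delegation tax — can be sketched as follows; this assumes a constant effective daily reward rate, and both `daysToRecoverTax` and the 0.03% example rate are assumptions for illustration, not protocol values.

```typescript
const DELEGATION_TAX = 0.005; // 0.5% of every delegation is burned

// Hypothetical estimate: days until rewards cover the delegation tax,
// assuming a constant effective daily reward rate for the Delegator.
function daysToRecoverTax(delegatedGRT: number, dailyRewardRate: number): number {
  const taxBurned = delegatedGRT * DELEGATION_TAX;      // e.g. 1 000 GRT -> 5 GRT burned
  const stakeAfterTax = delegatedGRT - taxBurned;       // GRT actually working for you
  const dailyRewards = stakeAfterTax * dailyRewardRate; // rewards earned per day
  return taxBurned / dailyRewards;
}

// Example: delegating 1 000 GRT at an assumed 0.03% effective daily reward rate.
console.log(daysToRecoverTax(1_000, 0.0003).toFixed(1)); // ≈ 16.8 days
// Note: undelegating afterwards still triggers the 28-day unbonding period described above.
```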
![Délégation débondage](/img/Delegation-Unbonding.png) _Notez la commission de 0,5% dans l'interface utilisateur de la @@ -41,64 +55,87 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Choisir un indexeur digne de confiance avec une rémunération équitable pour les délégués -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +Pour comprendre comment choisir un Indexeur fiable, vous devez comprendre les paramètres de Délégation. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Paramètres de Délégation + +- **Part de Récompense d'Indexation** - La portion des récompenses que l'Indexeur gardera pour lui-même. + - Si la part de récompense d'un indexeur est fixée à 100%, en tant que délégateur, vous recevrez 0 récompense d'indexation. + - Si elle est fixée à 80%, en tant que Délégateur, vous recevrez 20%.
![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *Le meilleur indexeur donne aux délégués 90 % des récompenses. Celui du milieu donne 20 % aux délégués. Celui du bas donne aux délégués environ 83 %.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Part de Frais de Requête** - C'est comme la part de récompense d'indexation, mais elle s'applique aux gains sur les frais de requête que l'Indexeur collecte. + +Comme vous pouvez le voir, pour choisir le bon Indexeur, vous devez prendre en compte plusieurs éléments. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- Il est fortement recommandé d'explorer [le Discord de The Graph ](https://discord.gg/graphprotocol) pour déterminer quels Indexeurs ont les meilleures réputations sociales et techniques et aussi lesquels récompensent les Délégateurs de manière cohérente. +- Beaucoup d'Indexeurs sont très actifs sur Discord et seront heureux de répondre à vos questions. +- Beaucoup d'entre eux indexent depuis des mois et font de leur mieux pour aider les Délégateurs à obtenir un bon retour, car cela améliore la santé et le succès du réseau. -### Calcul du rendement attendu des délégués +## Calcul du rendement attendu par les Délégateurs -A Delegator must consider a lot of factors when determining the return. These include: +Un Délégateur doit considérer les facteurs suivants pour déterminer un rendement : -- Un délégué technique peut également examiner la capacité de l'indexeur à utiliser les jetons délégués dont il dispose. Si un indexeur n'alloue pas tous les jetons disponibles, il ne réalise pas le profit maximum qu'il pourrait réaliser pour lui-même ou pour ses délégués. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Considérer la capacité d'un Indexeur à utiliser les jetons délégués dont il dispose. + - Si un Indexeur n'alloue pas tous les jetons disponibles, ils ne gagnent pas le maximum de profit qu'ils pourraient obtenir pour lui même ou ses Déléguateurs. +- Soyez attentif aux premiers jours de la délégation. + - Un Indexeur peut choisir de fermer une allocation et de collecter des récompenses à tout moment entre 1 et 28 jours. Il est possible qu'un indexeur ait encore beaucoup de récompenses à collecter, de sorte que le total de ses récompenses est faible. ### Considérant la réduction des frais d'interrogation et la réduction des frais d'indexation -Comme décrit dans les sections précédentes, vous devez choisir un indexeur qui est transparent et honnête dans la fixation de sa réduction des frais de requête et d'indexation. 
Un délégué doit également examiner le temps de refroidissement des paramètres pour voir de combien de temps il dispose. Après cela, il est assez simple de calculer le montant des récompenses que les délégués reçoivent. La formule est la suivante : +Vous devriez choisir un Indexeur qui est transparent et honnête dans la fixation de ses frais de requête et de ses réductions de frais d'indexation. Vous devez également examiner le Paramètre délai de récupérations pour voir de combien de temps vous disposez. Une fois cela fait, il est facile de calculer le montant des récompenses que vous obtenez. + +La formule est : ![Délégation Image 3](/img/Delegation-Reward-Formula.png) ### Compte tenu du pool de délégation de l'indexeur -Une autre chose qu'un délégant doit prendre en compte est la proportion du pool de délégation qu'il possède. Toutes les récompenses de délégation sont partagées équitablement, avec un simple rééquilibrage du pool déterminé par le montant que le délégant a déposé dans le pool. Cela donne au délégant une part du pool : +Les Déléguateurs doivent considérer la proportion du Pool de Délégation qu'ils possèdent. -![Formule de partage](/img/Share-Forumla.png) +Toutes les récompenses de délégation sont partagées de manière égale, avec un rééquilibrage du pool basé sur le montant que le Déléguateur a déposé dans le pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +Cela donne au Déléguateur une part du pool : + +![Formule de partage](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> La formule ci-dessus montre qu'il est possible pour un Indexeur offrant seulement 20% aux Déléguateurs de fournir une meilleure récompense qu'un Indexeur donnant 90%. Il suffit de faire les calculs pour déterminer la meilleure récompense. ### Compte tenu de la capacité de délégation -Une autre chose à considérer est la capacité de délégation. Actuellement, le ratio de délégation est fixé à 16. Cela signifie que si un indexeur a mis en jeu 1 000 000 GRT, sa capacité de délégation est de 16 000 000 GRT de jetons délégués qu'il peut utiliser dans le protocole. Tout jeton délégué dépassant ce montant diluera toutes les récompenses du délégué. +Enfin, considérez la capacité de délégation. Actuellement, le Ratio de Délégation est fixé à 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Pourquoi est-ce important? -Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. +Cela signifie que si un Indexeur a staké 1 000 000 GRT, sa Capacité de Délégation est de 16 000 000 GRT de tokens délégués qu'ils peuvent utiliser dans le protocole. Tous les tokens délégués au-delà de ce montant dilueront toutes les récompenses des Déléguateurs. + +Imaginez un Indexeur avec 100 000 000 GRT délégués à lui, mais sa capacité est seulement de 16 000 000 GRT. Cela signifie effectivement que 84 000 000 de GRT ne sont pas utilisés pour gagner des tokens. Ainsi, les Déléguateurs et les Indexeurs gagnent moins de récompenses qu'ils pourraient. 
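The two points above — that delegation rewards are shared pro-rata to your portion of an Indexer's delegation pool, and that stake delegated beyond an Indexer's capacity (own stake × 16) is not put to work — can be sketched as follows. This is a simplified TypeScript illustration; `delegatorReward` and `unusedDelegation` are hypothetical helpers, and the flat pro-rata split is a simplification of the share formula referenced above rather than the exact protocol calculation.

```typescript
const DELEGATION_RATIO = 16; // current delegation capacity multiplier from the text above

// Hypothetical estimate of a Delegator's reward, assuming the pool's rewards are
// split pro-rata by deposited stake (a simplification of the share formula above).
function delegatorReward(
  indexerRewardsGRT: number, // rewards earned by the Indexer over some period
  delegatorPortion: number,  // fraction passed on to Delegators, e.g. 0.2 if the Indexer keeps 80%
  myDelegation: number,
  totalDelegationPool: number,
): number {
  return indexerRewardsGRT * delegatorPortion * (myDelegation / totalDelegationPool);
}

// Why an Indexer passing on only 20% can beat one passing on 90%: pool size matters too.
console.log(delegatorReward(10_000, 0.2, 100_000, 1_000_000));  // 200 GRT (20%, small pool)
console.log(delegatorReward(10_000, 0.9, 100_000, 10_000_000)); // 90 GRT (90%, huge pool)

// Delegation capacity check: stake delegated beyond ownStake * 16 is not put to work.
function unusedDelegation(ownStake: number, totalDelegated: number): number {
  const capacity = ownStake * DELEGATION_RATIO;
  return Math.max(0, totalDelegated - capacity);
}
console.log(unusedDelegation(1_000_000, 100_000_000)); // 84 000 000 GRT over capacity
```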
+ +Par conséquent, un Déléguateur doit toujours considérer la Capacité de Délégation d'un Indexeur et en tenir compte dans leur prise de décision. ## FAQ et bugs pour les délégants ### Bug MetaMask « Transaction en attente » -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. Lorsque j'essaie de déléguer ma transaction dans MetaMask, elle apparaît comme « En attente » ( "Pending") ou « En file d'attente » ( "Queued") plus longtemps que prévu. Que devrais-je faire? + +Parfois, les tentatives de déléguer aux Indexeurs via MetaMask peuvent échouer et entraîner des périodes prolongées de tentatives de transaction "Pending" ou "Queued". + +#### Exemple -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Imaginons que vous tentiez de déléguer avec des frais de gaz insuffisants par rapport aux prix actuels. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- Cette action peut entraîner l'affichage de la tentative de transaction comme "En attente" dans votre portefeuille MetaMask pendant plus de 15 minutes. Dans ce cas, vous pouvez essayer des transactions ultérieures, mais elles ne seront traitées que lorsque la transaction initiale sera minée, car les transactions pour une adresse doivent être traitées dans l'ordre. +- Dans de tels cas, ces transactions peuvent être annulées dans MetaMask, mais les tentatives de transactions accumuleront des frais de gas sans aucune garantie que les tentatives ultérieures seront réussies. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +Une solution simple à ce problème consiste à redémarrer le navigateur (par exemple, en utilisant "abort:restart" dans la barre d'adresse), ce qui annulera toutes les tentatives précédentes sans que le gaz ne soit soustrait du portefeuille. Plusieurs utilisateurs ayant rencontré ce problème ont signalé des transactions réussies après avoir redémarré leur navigateur et tenté de déléguer. -## Guide vidéo pour l'interface utilisateur du réseau +## Guide Vidéo -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +Ce guide vidéo passe en revue cette page tout en interagissant avec l'interface utilisateur. diff --git a/website/pages/fr/network/developing.mdx b/website/pages/fr/network/developing.mdx index 9379c97ea334..dd4512515ba2 100644 --- a/website/pages/fr/network/developing.mdx +++ b/website/pages/fr/network/developing.mdx @@ -2,52 +2,29 @@ title: Le Développement --- -Les développeurs constituent le côté demande de l’écosystème The Graph. 
Les développeurs créent des subgraphs et les publient sur The Graph Network. Ensuite, ils interrogent les subgraphs en direct avec GraphQL afin d'alimenter leurs applications. +Pour commencer à coder immédiatement, rendez-vous sur [Démarrage rapide pour développeurs](/quick-start/). -## Flux du cycle de vie des subgraphs +## Aperçu -Les subgraphs déployés sur le réseau ont un cycle de vie défini. +En tant que développeur, vous avez besoin de données pour construire et alimenter votre d'Api. Interroger et indexer des données blockchain constitue un défi, mais The Graph fournit une solution à ce problème. -### Développer localement +Sur The Graph, vous pouvez : -Comme pour tout développement de subgraphs, cela commence par le développement et les tests locaux. Les développeurs peuvent utiliser la même configuration locale, qu'ils construisent pour The Graph Network, le service hébergé ou un nœud Graph local, en tirant parti de `graph-cli` et `graph-ts` pour créer leur subgraph. Les développeurs sont encouragés à utiliser des outils tels que [Matchstick](https://github.com/LimeChain/matchstick) pour les tests unitaires afin d'améliorer la robustesse de leurs subgraphs. +1. Créer, déployer et publier des subgraphs sur The Graph en utilisant Graph CLI et [Subgraph Studio](https://thegraph.com/studio/). +2. Utiliser GraphQL pour interroger des subgraphs existants. -> Le réseau de graphes est soumis à certaines contraintes, en termes de fonctionnalités et de réseaux pris en charge. Seuls les subgraphs des [réseaux pris en charge](/developing/supported-networks) obtiendront des récompenses en matière d'indexation, et les subgraphs qui récupèrent des données à partir d'IPFS ne sont pas non plus éligibles. +### Qu'est-ce que GraphQL ? -### Deploy to Subgraph Studio +- [GraphQL](https://graphql.org/learn/) est le langage de requête pour les API et un un moteur d'exécution pour exécuter ces requêtes avec vos données existantes. The Graph utilise GraphQL pour interroger les subgraphs. -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. +### Actions des Développeurs -### Publier sur le réseau +- Interroger des subgraphs créés par d'autres développeurs dans [The Graph Network](https://thegraph.com/explorer) et les intégrer dans vos propres dapps. +- Créer des subgraphs personnalisés pour répondre à des besoins de données spécifiques, permettant une meilleure évolutivité et flexibilité pour les autres développeurs. +- Déployer, publier et signaler vos subgraphs au sein de The Graph Network. -Lorsque le développeur est satisfait de son subgraph, il peut le publier sur le réseau The Graph. Il s'agit d'une action 'on-chain', qui enregistre le subgraph afin qu'il puisse être découvert par les indexeurs. Les subgraphs publiés ont un NFT correspondant, qui est alors facilement transférable. Le subgraph publié est associé à des métadonnées qui fournissent aux autres participants du réseau un contexte et des informations utiles. +### Que sont les subgraphs ? -### Signal pour encourager l'indexation +Un subgraph est une API personnalisée construite sur des données blockchain. 
Il extrait des données d'une blockchain, les traite et les stocke afin qu'elles puissent être facilement interrogées via GraphQL. -Les subgraphs publiés ont peu de chances d'être repérés par les indexeurs sans l'ajout d'un signal. Le signal est constitué de GRT verrouillés associés à un subgraph donné, ce qui indique aux indexeurs qu'un subgraph donné recevra du volume de requêtes et contribue également aux récompenses d'indexation disponibles pour le traiter. Les développeurs de subgraphs ajoutent généralement un signal à leur subgraph afin d'encourager l'indexation. Les curateurs tiers peuvent également ajouter un signal à un subgraph donné s'ils estiment que ce dernier est susceptible de générer un volume de requêtes. - -### Interrogation & Développement d'applications - -Une fois qu'un subgraph a été traité par les indexeurs et est disponible pour l'interrogation, les développeurs peuvent commencer à utiliser le subgraph dans leurs applications. Les développeurs interrogent les subgraphs via une passerelle, qui transmet leurs requêtes à un indexeur qui a traité le subgraph, en payant les frais de requête en GRT. - -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. - -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. - -### Mise à jour des subgraphs - -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. - -Une fois que le développeur de subgraph est prêt à mettre à jour, il peut lancer une transaction pour pointer son subgraph vers la nouvelle version. La mise à jour du subgraph migre tout signal vers la nouvelle version (en supposant que l'utilisateur qui a appliqué le signal a sélectionné "migrer automatiquement"), ce qui entraîne également une taxe de migration. Cette migration de signal devrait inciter les indexeurs à commencer à indexer la nouvelle version du subgraph, elle devrait donc bientôt être disponible pour les interrogations. - -### Dépréciation des subgraphs - -À un moment donné, un développeur peut décider qu'il n'a plus besoin d'un subgraph publié. À ce stade, ils peuvent déprécier le subgraph, qui renvoie tout GRT signalé aux curateurs. - -### Diversité des rôles des développeurs - -Certains développeurs s'engageront dans le cycle de vie complet des subgraphs sur le réseau, en publiant, en interrogeant et en itérant sur leurs propres subgraphs. D'autres se concentreront sur le développement de subgraphs, en créant des API ouvertes sur lesquelles d'autres pourront s'appuyer. D'autres peuvent se concentrer sur les applications, en interrogeant les subgraphs déployés par d'autres. - -### Economie du réseau et des développeurs - -Les développeurs sont des acteurs économiques clés dans le réseau, bloquant des GRT pour encourager l'indexation et, surtout, interroger des subgraphs, ce qui constitue l'échange de valeur principal du réseau. 
Les développeurs de subgraphs brûlent également des GRT à chaque mise à jour d'un subgraph. +Check out the documentation on [subgraphs](/subgraphs/) to learn specifics. diff --git a/website/pages/fr/network/explorer.mdx b/website/pages/fr/network/explorer.mdx index 8542686d9f27..fa9c6046f691 100644 --- a/website/pages/fr/network/explorer.mdx +++ b/website/pages/fr/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Découvrez Graph Explorer et accédez au monde des subgraphs et des données réseau. + +Graph Explorer se compose de plusieurs parties où vous pouvez interagir avec d'autres développeurs de subgraphs, développeurs de dapps, Curateurs, Indexeurs et Délégateurs. + +## Guide Vidéo + +Pour une vue d'ensemble de Graph Explorer, consultez la vidéo ci-dessous : ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +Après avoir terminé le déploiement et la publication de votre subgraph dans Subgraph Studio, cliquez sur l'onglet "subgraphs" en haut de la barre de navigation pour accéder aux éléments suivants : + +- Vos propres subgraphs terminés +- Les subgraphs publiés par d'autres +- Le subgraph exact que vous voulez (basé sur la date de création, le montant du signal ou le nom). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -Lorsque vous cliquerez sur un subgraph, vous pourrez tester des requêtes dans l'aire de jeu et exploiter les détails du réseau pour prendre des décisions éclairées. Vous pourrez également signaler le GRT sur votre propre subgraph ou sur les subgraphs d'autres personnes afin de sensibiliser les indexeurs à son importance et à sa qualité. Ceci est essentiel car le fait de signaler un subgraph incite à l'indexer, ce qui signifie qu'il fera surface sur le réseau pour éventuellement répondre à des requêtes. +Lorsque vous cliquez sur un subgraph, vous pourrez faire ce qui suit : + +- Tester des requêtes dans le l'environnement de test et utiliser les détails du réseau pour prendre des décisions éclairées. +- Signaler des GRT sur votre propre subgraph ou sur les subgraphs des autres pour informer les Indexeurs de son importance et de sa qualité. +- Ceci est crucial car le signalement sur un subgraph l'incite à être indexé, ce qui signifie qu'il finira par apparaître sur le réseau pour servir des requêtes. ![Explorer Image 2](/img/Subgraph-Details.png) -Sur la page dédiée à chaque subgraph, plusieurs détails font surface. Il s'agit notamment de: +Sur la page dédiée de chaque subgraph, vous pouvez faire ce qui suit : - Signal/Un-signal sur les subgraphs - Afficher plus de détails tels que des graphs, l'ID de déploiement actuel et d'autres métadonnées @@ -31,26 +45,32 @@ Sur la page dédiée à chaque subgraph, plusieurs détails font surface. 
Il s'a ## Participants -Dans cet onglet, vous aurez une vue d'ensemble de toutes les personnes qui participent aux activités du réseau, telles que les indexeurs, les délégateurs et les curateurs. Ci-dessous, nous examinerons en profondeur ce que chaque onglet signifie pour vous. +Cette section offre une vue d'ensemble de tous les "participants", ce qui inclut tous ceux qui participent au réseau, tels que les Indexeurs, les Délégateurs et les Curateurs. ### 1. Indexeurs ![Explorer Image 4](/img/Indexer-Pane.png) -Commençons par les indexeurs. Les indexeurs sont l'épine dorsale du protocole, étant ceux qui misent sur les subgraphs, les indexent et envoient des requêtes à toute personne consommant des subgraphs. Dans le tableau Indexeurs, vous pourrez voir les paramètres de délégation d'un indexeur, sa participation, le montant qu'ils ont misé sur chaque subgraph et le montant des revenus qu'ils ont tirés des frais de requête et des récompenses d'indexation. Analyses approfondies ci-dessous : +Les Indexeurs sont la colonne vertébrale du protocole. Ils stakent sur les subgraphs, les indexent et servent les requêtes à quiconque consomme les subgraphs. + +Dans le tableau des Indexeurs, vous pouvez voir les paramètres de délégation des Indexeurs, leur staking, combien ils ont staké sur chaque subgraph et combien de revenus ils ont généré à partir des frais de requête et des récompenses d'indexation. -- Query Fee Cut - le pourcentage des remises sur les frais de requête que l'indexeur conserve lorsqu'il les partage avec les délégués -- Réduction de récompense effective - la réduction de récompense d'indexation appliquée au pool de délégation. S’il est négatif, cela signifie que l’indexeur distribue une partie de ses récompenses. S'il est positif, cela signifie que l'indexeur conserve une partie de ses récompenses -- Cooldown Remaining : temps restant jusqu'à ce que l'indexeur puisse modifier les paramètres de délégation ci-dessus. Des périodes de refroidissement sont définies par les indexeurs lorsqu'ils mettent à jour leurs paramètres de délégation -- Propriété : il s'agit de la participation déposée par l'indexeur, qui peut être réduite en cas de comportement malveillant ou incorrect -- Délégué - Participation des délégués qui peut être allouée par l'indexeur, mais ne peut pas être réduite -- Alloué - Participation que les indexeurs allouent activement aux subgraphs qu'ils indexent -- Capacité de délégation disponible - le montant de la participation déléguée que les indexeurs peuvent encore recevoir avant qu'ils ne soient surdélégués +**Spécificités** + +- Query Fee Cut - le % des frais de requête que l'Indexeur conserve lors de la répartition avec les Délégateurs. +- Effective Reward Cut - la réduction des récompenses d'indexation appliquée au pool de délégation. Si elle est négative, cela signifie que l'Indexeur donne une partie de ses récompenses. Si elle est positive, cela signifie que l'Indexeur garde une partie de ses récompenses. +- Cooldown Remaining - le temps restant avant que l'Indexeur puisse modifier les paramètres de délégation ci-dessus. Les périodes de cooldown sont définies par les Indexeurs lorsqu'ils mettent à jour leurs paramètres de délégation. +- Owned - Il s'agit du staking de l'Indexeur, qui peut être partiellement confisqué en cas de comportement malveillant ou incorrect. +- Delegated - Le staking des Délégateurs qui peut être alloué par l'Indexeur, mais ne peut pas être confisqué.
+- Allocated - Le staking que les Indexeurs allouent activement aux subgraphs qu'ils indexent. +- Available Delegation Capacity - le staking délégué que les Indexeurs peuvent encore recevoir avant d'être sur-délégués. - Capacité de délégation maximale : montant maximum de participation déléguée que l'indexeur peut accepter de manière productive. Une mise déléguée excédentaire ne peut pas être utilisée pour le calcul des allocations ou des récompenses. -- Frais de requête - il s'agit du total des frais que les utilisateurs finaux ont payés pour les requêtes d'un indexeur pendant toute la durée de l'indexation +- Query Fees - il s'agit du total des frais que les utilisateurs finaux ont payés pour les requêtes d'un Indexeur au fil du temps. - Récompenses de l'indexeur - il s'agit du total des récompenses de l'indexeur gagnées par l'indexeur et ses délégués sur toute la durée. Les récompenses des indexeurs sont payées par l'émission de GRT. -Les indexeurs peuvent gagner à la fois des frais de requête et des récompenses d'indexation. Fonctionnellement, cela se produit lorsque les participants au réseau délèguent GRT à un indexeur. Cela permet aux indexeurs de recevoir des frais de requête et des récompenses en fonction de leurs paramètres d'indexeur. Les paramètres d'indexation sont définis en cliquant sur le côté droit du tableau, ou en accédant au profil d'un indexeur et en cliquant sur le bouton « Délégué ». +Les Indexeurs peuvent gagner à la fois des frais de requête et des récompenses d'indexation. Fonctionnellement, cela se produit lorsque les participants au réseau délèguent des GRT à un Indexeur. Cela permet aux Indexeurs de recevoir des frais de requête et des récompenses en fonction de leurs paramètres d'Indexeur. + +- Les paramètres d'indexation peuvent être définis en cliquant sur le côté droit du tableau ou en accédant au profil d'un Indexeur et en cliquant sur le bouton "Delegate". Pour en savoir plus sur la façon de devenir un indexeur, vous pouvez consulter la [documentation officielle](/network/indexing) ou les [guides de l'indexeur de la Graph Academy.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ Pour en savoir plus sur la façon de devenir un indexeur, vous pouvez consulter ### 2. Curateurs -Les curateurs analysent les subgraphs afin d'identifier ceux qui sont de la plus haute qualité. Une fois qu'un curateur a trouvé un subgraph potentiellement intéressant, il peut le curer en signalant sa courbe de liaison. Ce faisant, les curateurs indiquent aux indexeurs quels sont les subgraphs de haute qualité qui devraient être indexés. +Les Curateurs analysent les subgraphs pour identifier ceux de la plus haute qualité. Une fois qu'un Curateur a trouvé un subgraph potentiellement de haute qualité, il peut le curer en le signalant sur sa courbe de liaison. Ce faisant, les Curateurs informent les Indexeurs des subgraphs de haute qualité qui doivent être indexés. + +- Les Curateurs peuvent être des membres de la communauté, des consommateurs de données ou même des développeurs de subgraphs qui signalent leurs propres subgraphs en déposant des jetons GRT dans une courbe de liaison. + - En déposant des GRT, les Curateurs mintent des actions de curation d'un subgraph. En conséquence, ils peuvent gagner une partie des frais de requête générés par le subgraph sur lequel ils ont signalé. + - La courbe de liaison incite les Curateurs à curer les sources de données de la plus haute qualité.
-Les conservateurs peuvent être des membres de la communauté, des consommateurs de données ou même des développeurs de subgraphs qui signalent sur leurs propres subgraphs en déposant des jetons GRT dans une courbe de liaison. En déposant GRT, les curateurs créent des actions de curation d'un subgraph. En conséquence, les curateurs sont éligibles pour gagner une partie des frais de requête générés par le subgraph sur lequel ils ont signalé. La courbe de liaison incite les curateurs à conserver des sources de données de la plus haute qualité. Le Tableau Curateurs de cette section vous permettra de voir : +Dans le tableau des Curateurs ci-dessous, vous pouvez voir : - La date à laquelle le curateur a commencé à organiser - Le nombre de GRT déposés @@ -68,34 +92,36 @@ Les conservateurs peuvent être des membres de la communauté, des consommateurs ![Explorer Image 6](/img/Curation-Overview.png) -Si vous souhaitez en savoir plus sur le rôle de curateur, vous pouvez le faire en visitant les liens suivants de [The Graph Academy](https://thegraph.academy/curators/) ou de la [documentation officielle.](/network/curating) +Si vous souhaitez en savoir plus sur le rôle de Curateur, vous pouvez le faire en consultant la [documentation officielle](/network/curating) ou [The Graph Academy](https://thegraph.academy/curators/). ### 3. Délégués -Les délégués jouent un rôle clé dans le maintien de la sécurité et de la décentralisation de The Graph Network. Ils participent au réseau en déléguant (c'est-à-dire en « jalonnant ») des jetons GRT à un ou plusieurs indexeurs. Sans délégués, les indexeurs sont moins susceptibles de gagner des récompenses et des frais importants. Par conséquent, les indexeurs cherchent à attirer les délégants en leur offrant une partie des récompenses d'indexation et des frais de requête qu'ils gagnent. +Les Délégateurs jouent un rôle clé dans le maintien de la sécurité et de la décentralisation de The Graph Network. Ils participent au réseau en déléguant (c'est-à-dire en "stakant") des jetons GRT à un ou plusieurs Indexeurs. -Les délégués, quant à eux, sélectionnent les indexeurs sur la base d'un certain nombre de variables différentes, telles que les performances passées, les taux de récompense de l'indexation et les réductions des frais d'interrogation. La réputation au sein de la communauté peut également jouer un rôle à cet égard ! Il est recommandé d'entrer en contact avec les indexeurs sélectionnés via le [Discord du Graph](https://discord.gg/graphprotocol) ou le [Forum du Graph](https://forum.thegraph.com/) ! +- Sans Délégateurs, les Indexeurs sont moins susceptibles de gagner des récompenses et des frais importants. Par conséquent, les Indexeurs attirent les Délégateurs en leur offrant une partie de leurs récompenses d'indexation et de leurs frais de requête. +- Les Délégateurs sélectionnent leurs Indexeurs selon divers critères, tels que les performances passées, les taux de récompense d'indexation et le partage des frais (voir l'esquisse de requête ci-dessous). +- La réputation au sein de la communauté peut également jouer un rôle dans le processus de sélection. Il est recommandé de se connecter avec les Indexeurs sélectionnés via [Discord](https://discord.gg/graphprotocol) ou [le Forum](https://forum.thegraph.com/) de The Graph !
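À titre d'illustration (esquisse hypothétique, non issue du guide original), ces informations peuvent aussi être consultées de façon programmatique via le subgraph du réseau. `$NETWORK_SUBGRAPH_ENDPOINT` est un placeholder à remplacer par un endpoint GraphQL valide, et les noms de champs suivent le schéma du subgraph « graph-network » (susceptibles de varier selon la version) :

```sh
# Esquisse : lister quelques Indexeurs avec leur staking et leurs paramètres de partage
# (indexingRewardCut et queryFeeCut sont exprimés en parties par million dans ce schéma).
curl -s "$NETWORK_SUBGRAPH_ENDPOINT" \
  -H 'content-type: application/json' \
  -d '{"query":"{ indexers(first: 5, orderBy: stakedTokens, orderDirection: desc) { id stakedTokens delegatedTokens indexingRewardCut queryFeeCut } }"}'
```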
![Explorer Image 7](/img/Delegation-Overview.png) -Le tableau des délégués vous permet de voir les délégués actifs dans la communauté, ainsi que des indicateurs tels que : +Dans le tableau des Délégateurs, vous pouvez voir les Délégateurs actifs dans la communauté et les métriques importantes : - Le nombre d’indexeurs auxquels un délégant délègue - Délégation originale d’un délégant - Les récompenses qu'ils ont accumulées mais qu'ils n'ont pas retirées du protocole - Les récompenses obtenues qu'ils ont retirées du protocole - Quantité totale de GRT qu'ils ont actuellement dans le protocole -- La date de leur dernière délégation à +- La date de leur dernière délégation -Si vous voulez en savoir plus sur la façon de devenir un délégué, ne cherchez plus ! Il vous suffit de consulter la [documentation officielle](/network/delegating) ou [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +Si vous souhaitez en savoir plus sur la façon de devenir Délégateur, consultez la [documentation officielle](/network/delegating) ou [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Réseau -Dans la section Réseau, vous verrez des indicateurs globaux ainsi que la possibilité de passer à une base par écho et d'analyser les paramètres du réseau de manière plus détaillée. Ces détails vous donneront une idée des performances du réseau au fil du temps. +Dans cette section, vous pouvez consulter les indicateurs clés de performance (KPI) globaux du réseau et passer en mode par époque (epoch) pour analyser plus en détail les métriques du réseau. Ces informations vous donneront une vision de l’évolution des performances du réseau au fil du temps. ### Aperçu -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +La section d'aperçu présente à la fois toutes les métriques actuelles du réseau et certaines métriques cumulatives au fil du temps : - L’enjeu total actuel du réseau - La répartition des enjeux entre les indexeurs et leurs délégués @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Paramètres du protocole tels que la récompense de la curation, le taux d'inflation, etc - Récompenses et frais de l'époque actuelle -Quelques détails clés qui méritent d'être mentionnés : +Quelques détails clés à noter : -- Les **Frais de requête représentent les frais générés par les consommateurs**, et ils peuvent être réclamés (ou non) par les indexeurs après une période d'au moins 7 époques (voir ci-dessous) après la clôture de leurs allocations vers les subgraphs. et les données qu'ils ont servies ont été validées par les consommateurs. -**Les récompenses d'indexation représentent le montant des récompenses que les indexeurs ont réclamé à l'émission du réseau au cours de l'époque.** Bien que l'émission du protocole soit fixe, les récompenses ne sont frappées qu'une fois que les indexeurs ont clôturé leurs allocations vers les subgraphs qu'ils ont indexés. Ainsi, le nombre de récompenses par époque varie (par exemple, au cours de certaines époques, les indexeurs peuvent avoir fermé collectivement des allocations qui étaient ouvertes depuis plusieurs jours). +- **Les frais de requête représentent les frais générés par les consommateurs**.
Ils peuvent être réclamés (ou non) par les Indexeurs après une période d'au moins 7 époques (voir ci-dessous) après que leurs allocations vers les subgraphs ont été fermées et que les données qu'ils ont servies ont été validées par les consommateurs. +- **Les récompenses d'indexation représentent le montant des récompenses que les Indexeurs ont réclamées de l'émission du réseau pendant cette époque.** Bien que l'émission du protocole soit fixe, les récompenses ne sont mintées que lorsque les Indexeurs ferment leurs allocations vis-à-vis des subgraphs qu'ils ont indexés. Ainsi, le nombre de récompenses par époque varie (c'est-à-dire que pendant certaines époques, les Indexeurs peuvent avoir collectivement fermé des allocations qui ont été ouvertes pendant plusieurs jours). ![Explorer Image 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ Dans la section Époques, vous pouvez analyser, époque par époque, des métriq - L'époque active est celle où les indexeurs sont en train d'allouer des enjeux et de collecter des frais de requête - Les époques de règlement sont celles au cours desquelles les canaux d'État sont réglées. Cela signifie que les indexeurs sont soumis à des réductions si les consommateurs ouvrent des litiges à leur encontre. - Les époques de distribution sont les époques au cours desquelles les canaux d'État pour les époques sont réglés et les indexeurs peuvent réclamer leurs remises sur les frais de requête. - - Les époques finalisées sont les époques pour lesquelles il ne reste plus aucune remise sur les frais de requête à réclamer par les indexeurs, et sont donc finalisées. + - Les époques finalisées sont les époques qui n'ont plus de remboursements de frais de requête à réclamer par les Indexeurs. ![Explorer Image 9](/img/Epoch-Stats.png) ## Votre profil d'utilisateur -Maintenant que nous avons parlé des statistiques du réseau, passons à votre profil personnel. Votre profil personnel vous permet de voir votre activité sur le réseau, quelle que soit la manière dont vous participez au réseau. Votre portefeuille crypto fera office de profil utilisateur, et avec le tableau de bord utilisateur, vous pourrez voir : +Votre profil personnel est l'endroit où vous pouvez voir votre activité sur le réseau, quel que soit votre rôle sur le réseau. Votre portefeuille crypto agira comme votre profil utilisateur, et avec le tableau de bord utilisateur, vous pourrez voir les onglets suivants : ### Aperçu du profil -C'est ici que vous pouvez voir toutes les actions en cours que vous avez entreprises. Vous y trouverez également les informations relatives à votre profil, votre description et votre site web (si vous en avez ajouté un). +Dans cette section, vous pouvez voir ce qui suit : + +- Toutes les actions en cours que vous avez effectuées. +- Les informations de votre profil, description et site web (si vous en avez ajouté un). ![Explorer Image 10](/img/Profile-Overview.png) ### Onglet Subgraphs -Si vous cliquez sur l'onglet Subgraphs, vous verrez vos subgraphs publiés. Cela n'inclut pas les subgraphs déployés avec l'interface de programmation à des fins de test - les subgraphs ne s'affichent que lorsqu'ils sont publiés sur le réseau décentralisé. +Dans l'onglet Subgraphs, vous verrez vos subgraphs publiés. + +> Ceci n'inclura pas les subgraphs déployés avec la CLI à des fins de test. Les subgraphs n'apparaîtront que lorsqu'ils sont publiés sur le réseau décentralisé. 
![Explorer Image 11](/img/Subgraphs-Overview.png) ### Onglet Indexation -Si vous cliquez sur l'onglet Indexation, vous trouverez un tableau avec toutes les allocations actives et historiques vers les subgraphs, ainsi que des graphs que vous pouvez analyser et voir vos performances passées en tant qu'indexeur. +Dans l'onglet Indexation, vous trouverez un tableau avec toutes les allocations actives et historiques vis-à-vis des subgraphs. Vous trouverez également des graphiques où vous pourrez voir et analyser vos performances passées en tant qu'Indexeur. Cette section comprendra également des détails sur vos récompenses nettes d'indexeur et vos frais de requête nets. Vous verrez les métriques suivantes : @@ -158,7 +189,9 @@ Cette section comprendra également des détails sur vos récompenses nettes d'i ### Onglet Délégation -Les délégués sont importants pour le Graph Network. Un délégant doit utiliser ses connaissances pour choisir un indexeur qui fournira un bon retour sur récompenses. Vous trouverez ici les détails de vos délégations actives et historiques, ainsi que les mesures des indexeurs vers lesquels vous avez délégué. +Les Délégateurs sont importants pour The Graph Network. Ils doivent utiliser leurs connaissances pour choisir un Indexeur qui fournira un bon rendement sur les récompenses. + +Dans l'onglet Délégation, vous pouvez trouver les détails de vos délégations actives et historiques, ainsi que les métriques des Indexeurs vers lesquels vous avez délégué. Dans la première moitié de la page, vous pouvez voir votre diagramme de délégation, ainsi que le diagramme des récompenses uniquement. À gauche, vous pouvez voir les indicateurs clés de performance qui reflètent vos paramètres de délégation actuels. @@ -198,6 +231,6 @@ Dans votre profil utilisateur, vous pourrez gérer les détails de votre profil ![Explorer Image 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +En tant que portail officiel dans le monde des données décentralisées, Graph Explorer vous permet d'effectuer diverses actions, quel que soit votre rôle dans le réseau. Vous pouvez accéder aux paramètres de votre profil en ouvrant le menu déroulant à côté de votre adresse, puis en cliquant sur le bouton Paramètres.
![Détails du portefeuille](/img/Wallet-Details.png)
diff --git a/website/pages/fr/network/indexing.mdx b/website/pages/fr/network/indexing.mdx index 6fd8178366cd..cb42d6d859be 100644 --- a/website/pages/fr/network/indexing.mdx +++ b/website/pages/fr/network/indexing.mdx @@ -26,7 +26,7 @@ La mise minimale pour un indexeur est actuellement fixée à 100 000 GRT. Les récompenses de l'indexation proviennent de l'inflation du protocole qui est fixée à 3 % par an. Ils sont répartis entre les subraphs en fonction de la proportion de tous les signaux de curation sur chacun, puis distribués proportionnellement aux indexeurs en fonction de leur participation allouée sur ce subgraph. **Une allocation doit être clôturée avec une preuve d'indexation (POI) valide et répondant aux normes fixées par la charte d'arbitrage afin d'être éligible aux récompenses.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +De nombreux outils ont été créés par la communauté pour calculer les récompenses ; vous trouverez une collection de ces outils organisés dans la [collection de guides communautaires](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). Vous pouvez également trouver une liste à jour des outils dans les canaux #Delegators et #Indexers sur le [serveur Discord](https://discord.gg/graphprotocol). Nous recommandons [un optimiseur d'allocation](https://github.com/graphprotocol/allocation-optimizer) intégré à la pile logicielle de l'Indexeur. ### Qu'est-ce qu'une preuve d'indexation (POI) ? @@ -38,11 +38,11 @@ Les allocations accumulent continuellement des récompenses pendant qu'elles son ### Les récompenses d’indexation en attente peuvent-elles être surveillées ? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +Le contrat RewardsManager a une fonction [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) en lecture seule qui peut être utilisée pour vérifier les récompenses en attente pour une allocation spécifique. De nombreux tableaux de bord créés par la communauté incluent des valeurs de récompenses en attente et ils peuvent être facilement vérifiés manuellement en suivant ces étapes : -1. Interrogez le [subgraph du mainnet](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) pour obtenir les ID de toutes les allocations actives : +1. 
Interrogez le [subgraph du réseau principal](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) pour obtenir les identifiants de toutes les allocations actives : ```graphql query indexerAllocations { @@ -63,7 +63,7 @@ Utilisez Etherscan pour appeler `getRewards()` : - Naviguer vers [Interface d'étherscan pour le contrat de récompenses](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) * Appeller `getRewards()`: - - Expand the **9. getRewards** dropdown. + - Déroulez le menu **9. getRewards**. - Saisissez le **allocationID** dans l'entrée. - Cliquez sur le bouton **Requête**. @@ -182,9 +182,9 @@ Remarque : Pour prendre en charge la mise à l'échelle agile, il est recommand #### Créer un projet Google Cloud -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- Clonez le [dépôt de l'Indexeur](https://github.com/graphprotocol/indexer) ou accédez-y. -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- Accédez au répertoire `./terraform`, c'est là que toutes les commandes doivent être exécutées. ```sh cd terraform @@ -297,7 +297,7 @@ Déployez toutes les ressources avec `kubectl apply -k $dir`. ### Nœud de The Graph -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) est une implémentation open source en Rust qui source les événements de la blockchain Ethereum pour mettre à jour de manière déterministe un magasin de données pouvant être interrogé via l'endpoint GraphQL. Les développeurs utilisent des subgraphs pour définir leur schéma et un ensemble de mappages pour transformer les données provenant de la blockchain et le Graph Node gère la synchronisation de toute la chaîne, surveille les nouveaux blocs et les sert via un endpoint GraphQL.
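À titre d'illustration (esquisse non issue du guide original), on peut vérifier qu'un Graph Node local répond bien et suit la tête de chaîne en interrogeant son API de statut GraphQL. L'endpoint ci-dessous (port 8030) correspond au `graph-node-status-endpoint` utilisé plus loin dans cette page ; les champs interrogés suivent l'API de statut de graph-node et peuvent varier selon la version :

```sh
# Esquisse : vérifier l'état d'indexation des subgraphs déployés sur un Graph Node local.
curl -s http://localhost:8030/graphql \
  -H 'content-type: application/json' \
  -d '{"query":"{ indexingStatuses { subgraph synced health chains { network latestBlock { number } } } }"}'
```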
#### Commencer à partir des sources @@ -465,26 +465,26 @@ Consultez la section [Configurer l'infrastructure du serveur à l'aide de Terraf ```sh graph-indexer-agent start \ - --ethereum \ + --ethereum \ --ethereum-network mainnet \ --mnemonic \ - --indexer-address \ + --indexer-address \ --graph-node-query-endpoint http://localhost:8000/ \ --graph-node-status-endpoint http://localhost:8030/graphql \ --graph-node-admin-endpoint http://localhost:8020/ \ --public-indexer-url http://localhost:7600/ \ - --indexer-geo-coordinates \ + --indexer-geo-coordinates \ --index-node-ids default \ --indexer-management-port 18000 \ --metrics-port 7040 \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ --default-allocation-amount 100 \ --register true \ --inject-dai true \ --postgres-host localhost \ --postgres-port 5432 \ - --postgres-username \ - --postgres-password \ + --postgres-username \ + --postgres-password \ --postgres-database indexer \ --allocation-management auto \ | pino-pretty @@ -496,23 +496,23 @@ graph-indexer-agent start \ SERVER_HOST=localhost \ SERVER_PORT=5432 \ SERVER_DB_NAME=is_staging \ -SERVER_DB_USER= \ -SERVER_DB_PASSWORD= \ +SERVER_DB_USER= \ +SERVER_DB_PASSWORD= \ graph-indexer-service start \ - --ethereum \ + --ethereum \ --ethereum-network mainnet \ --mnemonic \ - --indexer-address \ + --indexer-address \ --port 7600 \ --metrics-port 7300 \ --graph-node-query-endpoint http://localhost:8000/ \ --graph-node-status-endpoint http://localhost:8030/graphql \ --postgres-host localhost \ --postgres-port 5432 \ - --postgres-username \ - --postgres-password \ + --postgres-username \ + --postgres-password \ --postgres-database is_staging \ - --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ | pino-pretty ``` @@ -545,7 +545,7 @@ La **Indexer CLI** se connecte à l'agent Indexer, généralement via la redirec - `règles de l'indexeur graphique peut-être [options] ` — Définissez le `decisionBasis` pour un déploiement sur `rules`, afin que l'agent indexeur utilisez des règles d'indexation pour décider d'indexer ou non ce déploiement. -- `graph indexer actions get [options] ` - Récupère une ou plusieurs actions en utilisant `all` ou laissez `action-id` vide pour obtenir toutes les actions. Un argument supplémentaire `--status` peut être utilisé pour imprimer toutes les actions d'un certain statut. +- `graph indexer actions get [options] ` - Récupérez une ou plusieurs actions en utilisant `all` ou laissez `action-id` vide pour obtenir toutes les actions. Un argument supplémentaire `--status` peut être utilisé pour afficher toutes les actions d'un certain statut. - `file d'attente d'action de l'indexeur de graphs alloue ` - Action d'allocation de file d'attente @@ -751,9 +751,9 @@ indexer cost set model my_model.agora ### Enjeu dans le protocole -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. +Les premières étapes pour participer au réseau en tant qu'Indexeur sont d'approuver le protocole, de staker des fonds et (facultativement) de configurer une adresse opérateur pour les interactions quotidiennes avec le protocole. 
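À titre d'illustration uniquement, voici une esquisse hypothétique du même flux « approve puis stake » en ligne de commande avec `cast` (Foundry), un outil qui n'est pas utilisé dans ce guide (les étapes officielles ci-dessous passent par Remix). Les adresses, le montant et les variables d'environnement sont des placeholders ; les signatures `approve(address,uint256)` (standard ERC-20) et `stake(uint256)` (contrat de staking) sont supposées correspondre aux ABIs référencées dans les étapes Remix :

```sh
# Esquisse hypothétique — remplacez les placeholders avant toute utilisation.
GRT=0x...        # adresse du contrat GraphToken (placeholder)
STAKING=0x...    # adresse du contrat Staking (placeholder)
AMOUNT=100000000000000000000000   # 100 000 GRT (18 décimales)

# 1) Autoriser le contrat de staking à transférer les GRT (approve ERC-20)
cast send "$GRT" "approve(address,uint256)" "$STAKING" "$AMOUNT" \
  --rpc-url "$RPC_URL" --private-key "$PRIVATE_KEY"

# 2) Staker les GRT dans le protocole
cast send "$STAKING" "stake(uint256)" "$AMOUNT" \
  --rpc-url "$RPC_URL" --private-key "$PRIVATE_KEY"
```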
-> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> Note : Pour les besoins de ces instructions, Remix sera utilisé pour l'interaction avec le contrat, mais n'hésitez pas à utiliser l'outil de votre choix ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), et [MyCrypto](https://www.mycrypto.com/account) sont quelques autres outils connus). Une fois qu'un indexeur a staké des GRT dans le protocole, les [composants de l'indexeur](/network/indexing#indexer-components) peuvent être démarrés et commencer leurs interactions avec le réseau. @@ -763,7 +763,7 @@ Une fois qu'un indexeur a staké des GRT dans le protocole, les [composants de l 2. Dans `File Explorer`, créez un fichier nommé **GraphToken.abi** avec l'[ABI du jeton](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Avec `GraphToken.abi` sélectionné et ouvert dans l'éditeur, passez à la section `Déployer et exécuter des transactions` dans l’interface de Remix. 4. Sous Environnement, sélectionnez `Injected Web3` et sous `Compte` sélectionnez votre adresse d'indexeur. @@ -777,7 +777,7 @@ Une fois qu'un indexeur a staké des GRT dans le protocole, les [composants de l 2. Dans l'`Explorateur de fichiers`, créez un fichier nommé **Staking.abi** avec l'ABI de staking. -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Avec `Staking.abi` sélectionné et ouvert dans l'éditeur, passez à la section `Déployer et exécuter des transactions` dans l’interface de Remix. 4. Sous Environnement, sélectionnez `Injected Web3` et sous `Compte` sélectionnez votre adresse d'indexeur. @@ -793,28 +793,28 @@ Une fois qu'un indexeur a staké des GRT dans le protocole, les [composants de l setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### Définition des paramètres de délégation -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. +La fonction `setDelegationParameters()` dans le [contrat de staking](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) est essentielle pour les Indexeurs, leur permettant de définir des paramètres qui définissent leurs interactions avec les Délégateurs, influençant leur partage des récompenses et leur capacité de délégation. -### How to set delegation parameters +### Comment définir les paramètres de délégation -To set the delegation parameters using Graph Explorer interface, follow these steps: +Pour définir les paramètres de délégation à l'aide de l'interface Graph Explorer, suivez ces étapes : -1. Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. 
Connect the wallet you have as a signer. -4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. Naviguez vers [Graph Explorer](https://thegraph.com/explorer/). +2. Connectez votre portefeuille. Choisissez multisig (comme Gnosis Safe) puis sélectionnez mainnet. Note : Vous devrez répéter ce processus pour Arbitrum One. +3. Connectez le portefeuille que vous avez en tant que signataire. +4. Accédez à la section "Settings" puis sélectionnez "Delegation Parameters". Ces paramètres doivent être configurés afin d’obtenir un taux effectif dans la fourchette souhaitée. Une fois les valeurs saisies dans les champs prévus, l’interface calcule automatiquement ce taux effectif. Ajustez les valeurs selon vos besoins pour atteindre le pourcentage effectif désiré. +5. Soumettez la transaction au réseau. -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> Note : Cette transaction devra être confirmée par les signataires du portefeuille multisig. ### La durée de vie d'une allocation -Après avoir été créée par un indexeur, une allocation saine passe par quatre états. +After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Une fois qu'une allocation est créée on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) elle est considérée comme **active**. Une partie du staking de l'Indexeur et/ou du staking délégué est allouée à un déploiement de subgraph, ce qui leur permet de réclamer des récompenses d'indexation et de servir des requêtes pour ce déploiement de subgraph. L'agent Indexeur gère la création des allocations en fonction des règles de l'Indexeur. -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/network/indexing/#how-are-indexing-rewards-distributed)). +- **Closed** - Un Indexeur est libre de fermer une allocation une fois qu'une époque est passée ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) ou son agent Indexeur fermera automatiquement l'allocation après le **maxAllocationEpochs** (actuellement 28 jours). 
Lorsqu'une allocation est fermée avec une preuve d'indexation (POI) valide, leurs récompenses d'indexation sont distribuées à l'Indexeur et à ses Délégateurs ([en savoir plus](/network/indexing/#how-are-indexing-rewards-distributed)). Il est recommandé aux indexeurs d'utiliser la fonctionnalité de synchronisation hors chaîne pour synchroniser les déploiements de subgraphs avec Chainhead avant de créer l'allocation en chaîne. Cette fonctionnalité est particulièrement utile pour les sous-graphes dont la synchronisation peut prendre plus de 28 époques ou qui risquent d'échouer de manière indéterministe. diff --git a/website/pages/fr/network/overview.mdx b/website/pages/fr/network/overview.mdx index 09210f52ce37..44bb43d7f444 100644 --- a/website/pages/fr/network/overview.mdx +++ b/website/pages/fr/network/overview.mdx @@ -2,14 +2,20 @@ title: Présentation du réseau --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network est un protocole d'indexation décentralisé pour organiser les données de la blockchain. -## Aperçu +## Comment ça marche ? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Les applications utilisent [GraphQL](/querying/graphql-api/) pour interroger des API ouvertes appelées subgraphs et récupérer les données indexées sur le réseau. Avec The Graph, les développeurs peuvent créer des applications sans serveur qui fonctionnent entièrement sur une infrastructure publique. + +## Spécificités + +The Graph Network est composé d'Indexeurs, de Curateurs et de Délégateurs qui fournissent des services au réseau et servent des données aux applications web3. ![Économie des jetons](/img/Network-roles@2x.png) -Pour garantir la sécurité économique du Graph Network et l'intégrité des données interrogées, les participants misent et utilisent des jetons Graph ([GRT](/tokenomics)). GRT est un jeton utilitaire de travail qui est un ERC-20 utilisé pour allouer des ressources dans le réseau. +### Économie + +Pour garantir la sécurité économique de The Graph Network et l'intégrité des données interrogées, les participants stakent et utilisent des Graph Tokens ([GRT](/tokenomics)). Le GRT est un jeton utilitaire de travail qui est un ERC-20, utilisé pour allouer des ressources dans le réseau. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Les Indexeurs, Curateurs et Délégateurs actifs peuvent fournir des services et tirer des revenus du réseau. Le revenu qu'ils perçoivent est proportionnel à la quantité de travail qu'ils effectuent et à leurs GRT stakés. diff --git a/website/pages/fr/new-chain-integration.mdx b/website/pages/fr/new-chain-integration.mdx index ec6a7423d079..3f0e6d2612d6 100644 --- a/website/pages/fr/new-chain-integration.mdx +++ b/website/pages/fr/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Intégration de nouveaux réseaux +title: New Chain Integration --- -Graph Node peut actuellement indexer les données des types de chaînes suivants : +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration.
Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -Si l'une de ces chaînes vous intéresse, l'intégration est une question de configuration et de test de Graph Node. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -Si la blockchain est équivalente à EVM et que le client/nœud expose l'API EVM JSON-RPC standard, Graph Node devrait pouvoir indexer la nouvelle chaîne. Pour plus d'informations, reportez-vous à [Test d'un EVM JSON-RPC] (new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Tester un EVM JSON-RPC -Pour les chaînes non basées sur EVM, Graph Node doit ingérer des données de blockchain via gRPC et des définitions de type connues. Cela peut être fait via [Firehose](firehose/), une nouvelle technologie développée par [StreamingFast](https://www.streamingfast.io/) qui fournit une solution de blockchain d'indexation hautement évolutive utilisant un système de streaming et de fichiers basé sur des fichiers. première approche. Contactez l'[équipe StreamingFast](mailto:integrations@streamingfast.io/) si vous avez besoin d'aide pour le développement de Firehose. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Différence entre EVM JSON-RPC et Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` *(optionally required for Graph Node to support call handlers)* -Bien que les deux conviennent aux subgraphs, un Firehose est toujours requis pour les développeurs souhaitant construire avec [Substreams](substreams/), comme la construction de [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). De plus, Firehose permet des vitesses d'indexation améliorées par rapport à JSON-RPC. +### 2. Firehose Integration -Les nouveaux intégrateurs de chaîne EVM peuvent également envisager l'approche basée sur Firehose, compte tenu des avantages des sous-flux et de ses capacités d'indexation parallélisées massives. 
La prise en charge des deux permet aux développeurs de choisir entre la création de sous-flux ou de subgraphs pour la nouvelle chaîne. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **REMARQUE** : Une intégration basée sur Firehose pour les chaînes EVM nécessitera toujours que les indexeurs exécutent le nœud RPC d'archive de la chaîne pour indexer correctement les subgraph. Cela est dû à l'incapacité de Firehose à fournir un état de contrat intelligent généralement accessible par la méthode RPC `eth_call`. (Il convient de rappeler que les eth_calls ne sont [pas une bonne pratique pour les développeurs](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Tester un EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -Pour que Graph Node puisse ingérer des données à partir d'une chaîne EVM, le nœud RPC doit exposer les méthodes EVM JSON RPC suivantes : +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
-- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Configuration Graph Node +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Commencez par préparer votre environnement local** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Configuration Graph Node + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modifiez [cette ligne](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) pour inclure le nouveau nom de réseau et l'URL compatible avec le RPC JSON EVM - > Ne modifiez pas le nom de la variable d'environnement lui-même. Il doit rester « Ethereum » même si le nom du réseau est différent. -3. Exécutez un nœud IPFS ou utilisez celui utilisé par The Graph : https://api.thegraph.com/ipfs/ -**Testez l'intégration en déployant localement un subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Créez un exemple de subgraph simple. Certaines options sont ci-dessous : - 1. Le contrat intelligent et le subgraph pré-emballés [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) sont un bon point de départ - 2. Amorcez un subgraph local à partir de n'importe quel contrat intelligent ou environnement de développement Solidity existant [en utilisant Hardhat avec un plugin Graph](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Créez votre subgraph dans Graph Node : `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. 
Publiez votre subgraph sur Graph Node : `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Graph Node devrait synchroniser le subgraph déployé s'il n'y a pas d'erreurs. Laissez-lui le temps de se synchroniser, puis envoyez des requêtes GraphQL au point de terminaison de l'API indiqué dans les journaux. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Intégration d'une nouvelle chaîne Firehose +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Créez un exemple de subgraph simple. Certaines options sont ci-dessous : + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node devrait synchroniser le subgraph déployé s'il n'y a pas d'erreurs. Laissez-lui le temps de se synchroniser, puis envoyez des requêtes GraphQL au point de terminaison de l'API indiqué dans les journaux. -L'intégration d'une nouvelle chaîne est également possible en utilisant l'approche Firehose. Il s'agit actuellement de la meilleure option pour les chaînes non-EVM et d'une exigence pour la prise en charge des substreams. La documentation supplémentaire se concentre sur le fonctionnement de Firehose, l'ajout de la prise en charge de Firehose pour une nouvelle chaîne et son intégration avec Graph Node. Documentation recommandée aux intégrateurs : +## Substreams-powered Subgraphs -1. [Documentation générale sur Firehose](firehose/) -2. [Ajout du support Firehose pour une nouvelle chaîne](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Intégration de Graph Node avec une nouvelle chaîne via Firehose] (https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. 
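As an unofficial complement to the EVM JSON-RPC requirements listed earlier on this page, the sketch below checks that a candidate RPC node answers two of the methods Graph Node relies on; the endpoint URL is a placeholder and each call should return a JSON-RPC `result` field:

```sh
# Sketch: sanity-check an RPC endpoint against methods Graph Node needs (placeholder URL).
RPC=https://rpc.example.com
curl -s "$RPC" -H 'content-type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"net_version","params":[]}'
curl -s "$RPC" -H 'content-type: application/json' \
  -d '{"jsonrpc":"2.0","id":2,"method":"eth_getBlockByNumber","params":["latest", false]}'
```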
diff --git a/website/pages/fr/publishing/publishing-a-subgraph.mdx b/website/pages/fr/publishing/publishing-a-subgraph.mdx index 7b93e7ad52d4..fba2c9221675 100644 --- a/website/pages/fr/publishing/publishing-a-subgraph.mdx +++ b/website/pages/fr/publishing/publishing-a-subgraph.mdx @@ -2,93 +2,93 @@ title: Publication d'un subgraph sur le réseau décentralisé --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio) and it's ready to go into production, you can publish it to the decentralized network. +Une fois que vous avez [déployé votre subgraph sur Subgraph Studio](/deploying/deploying-a-subgraph-to-studio) et qu'il est prêt à passer en production, vous pouvez le publier sur le réseau décentralisé. -When you publish a subgraph to the decentralized network, you make it available for: +Lorsque vous publiez un subgraph sur le réseau décentralisé, vous le rendez disponible pour : -- [Curators](/network/curating) to begin curating it. -- [Indexers](/network/indexing) to begin indexing it. +- [Les Curateurs](/network/curating), qui peuvent commencer à le curer. +- [Les Indexeurs](/network/indexing), qui peuvent commencer à l’indexer. -Check out the list of [supported networks](/developing/supported-networks). +Consultez la liste des [réseaux pris en charge](/developing/supported-networks). -## Publishing from Subgraph Studio +## Publication à partir de Subgraph Studio -1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard -2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +1. Allez sur le tableau de bord de [Subgraph Studio](https://thegraph.com/studio/) +2. Cliquez sur le bouton **Publish** +3. Votre subgraph sera maintenant visible dans [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +Toutes les versions publiées d'un subgraph existant peuvent : -- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/arbitrum/arbitrum-faq). +- Être publiées sur Arbitrum One. [En savoir plus sur The Graph Network sur Arbitrum](/arbitrum/arbitrum-faq). -- Index data on any of the [supported networks](/developing/supported-networks), regardless of the network on which the subgraph was published. +- Indexer des données sur l'un des [réseaux pris en charge](/developing/supported-networks), quel que soit le réseau sur lequel le subgraph a été publié. ### Mise à jour des métadonnées d'un subgraph publié -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. -- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. -- It's important to note that this process will not create a new version since your deployment has not changed. +- Après avoir publié votre subgraph sur le réseau décentralisé, vous pouvez mettre à jour les métadonnées à tout moment dans Subgraph Studio. +- Une fois que vous avez enregistré vos modifications et publié les mises à jour, elles apparaîtront dans Graph Explorer. +- Il est important de noter que ce processus ne créera pas une nouvelle version puisque votre déploiement n'a pas changé. -## Publishing from the CLI +## Publication à partir de la CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 
+À partir de la version 0.73.0, vous pouvez également publier votre subgraph avec la [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -1. Open the `graph-cli`. -2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +1. Ouvrez la `graph-cli`. +2. Utilisez les commandes suivantes : `graph codegen && graph build` puis `graph publish`. +3. Une fenêtre s'ouvrira, vous permettant de connecter votre portefeuille, d'ajouter des métadonnées et de déployer votre subgraph finalisé sur le réseau de votre choix. ![cli-ui](/img/cli-ui.png) -### Customizing your deployment +### Personnalisation de votre déploiement -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +Vous pouvez uploader votre build de subgraph sur un nœud IPFS spécifique et personnaliser davantage votre déploiement avec les options suivantes : ``` USAGE $ graph publish [SUBGRAPH-MANIFEST] [-h] [--protocol-network arbitrum-one|arbitrum-sepolia --subgraph-id ] [-i ] [--ipfs-hash ] [--webapp-url - ] + ] FLAGS - -h, --help Show CLI help. - -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node. - --ipfs-hash= IPFS hash of the subgraph manifest to deploy. - --protocol-network=