Databento

NautilusTrader provides an adapter for integrating with the Databento API and Databento Binary Encoding (DBN) format data. As Databento is purely a market data provider, there is no execution client provided - although a sandbox environment with simulated execution could still be set up. It's also possible to match Databento data with Interactive Brokers execution, or to calculate traditional asset class signals for crypto trading.


The capabilities of this adapter include:


  • Loading historical data from DBN files and decoding into Nautilus objects for backtesting or writing to the data catalog.
  • Requesting historical data which is decoded to Nautilus objects to support live trading and backtesting.
  • Subscribing to real-time data feeds which are decoded to Nautilus objects to support live trading and sandbox environments.

Tip

Databento currently offers 125 USD in free data credits (historical data only) for new account sign-ups.


With careful requests, this is more than enough for testing and evaluation purposes. It's recommended you make use of the /metadata.get_cost endpoint.

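For example, a cost check can be made with Databento's official Python client before requesting data. This is a minimal sketch; note this client is a separate dependency from the NautilusTrader adapter, and the dataset, symbols, and dates below are illustrative:

import databento as db

client = db.Historical()  # Sources the 'DATABENTO_API_KEY' env var

# Returns the estimated cost of the request in USD
cost = client.metadata.get_cost(
    dataset="GLBX.MDP3",
    symbols=["ES.FUT"],
    stype_in="parent",
    schema="trades",
    start="2024-01-01",
    end="2024-01-02",
)
print(cost)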

Overview

The adapter implementation takes the databento-rs crate as a dependency, which is the official Rust client library provided by Databento. There are actually no Databento Python dependencies.


Info

There is no optional extra installation for databento; at this stage, the core components of the adapter are compiled as static libraries and linked during the build by default.


The following adapter classes are available:


  • DatabentoDataLoader: Loads Databento Binary Encoding (DBN) data from files.
  • DatabentoInstrumentProvider: Integrates with the Databento API (HTTP) to provide latest or historical instrument definitions.
  • DatabentoHistoricalClient: Integrates with the Databento API (HTTP) for historical market data requests.
  • DatabentoLiveClient: Integrates with the Databento API (raw TCP) for subscribing to real-time data feeds.
  • DatabentoDataClient: Provides a LiveMarketDataClient implementation for running a trading node in real time.

Info

As with the other integration adapters, most users will simply define a configuration for a live trading node (covered below), and won't necessarily need to work with these lower-level components directly.


Databento documentation

Databento provides extensive documentation for users which can be found in the Databento knowledge base. It's recommended you also refer to the Databento documentation in conjunction with this NautilusTrader integration guide.


Databento Binary Encoding (DBN)

Databento Binary Encoding (DBN) is an extremely fast message encoding and storage format for normalized market data. The DBN specification includes a simple, self-describing metadata header and a fixed set of struct definitions, which enforce a standardized way to normalize market data.


The integration provides a decoder which can convert DBN format data to Nautilus objects.


The same Rust-implemented Nautilus decoder is used for:


  • Loading and decoding DBN files from disk.
  • Decoding historical and live data in real time.

Supported schemas

The following Databento schemas are supported by NautilusTrader:


Databento schema  Nautilus data type
MBO               OrderBookDelta
MBP_1             (QuoteTick, Option<TradeTick>)
MBP_10            OrderBookDepth10
TBBO              (QuoteTick, TradeTick)
TRADES            TradeTick
OHLCV_1S          Bar
OHLCV_1M          Bar
OHLCV_1H          Bar
OHLCV_1D          Bar
DEFINITION        Instrument (various types)
IMBALANCE         DatabentoImbalance
STATISTICS        DatabentoStatistics
STATUS            InstrumentStatus

Instrument IDs and symbology

Databento market data includes an instrument_id field which is an integer assigned by either the original source venue, or internally by Databento during normalization.


It's important to realize that this is different from the Nautilus InstrumentId, which is a string made up of a symbol + venue with a period separator, i.e. "{symbol}.{venue}".

The Nautilus decoder will use the Databento raw_symbol for the Nautilus symbol and an ISO 10383 MIC (Market Identifier Code) from the Databento instrument definition message for the Nautilus venue.

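For example, Apple Inc trading on the Nasdaq (raw symbol AAPL, MIC XNAS) maps to the following Nautilus InstrumentId, shown here as a simple illustration of the symbology:

from nautilus_trader.model.identifiers import InstrumentId
from nautilus_trader.model.identifiers import Symbol
from nautilus_trader.model.identifiers import Venue

# Constructed from the Databento raw_symbol and the venue MIC
instrument_id = InstrumentId(Symbol("AAPL"), Venue("XNAS"))

# Equivalent to parsing the "{symbol}.{venue}" string form
assert instrument_id == InstrumentId.from_str("AAPL.XNAS")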

Databento datasets are identified with a dataset code which is not the same as a venue identifier. You can read more about Databento dataset naming conventions here.


Of particular note, for CME Globex MDP 3.0 data (GLBX.MDP3 dataset code), the following exchanges are all grouped under the GLBX venue. These mappings can be determined from the instrument's exchange field:

  • CBCM: XCME-XCBT inter-exchange spread.
  • NYUM: XNYM-DUMX inter-exchange spread.
  • XCBT: Chicago Board of Trade (CBOT).
  • XCEC: Commodities Exchange Center (COMEX).
  • XCME: Chicago Mercantile Exchange (CME).
  • XFXS: CME FX Link spread.
  • XNYM: New York Mercantile Exchange (NYMEX).

Info

Other venue MICs can be found in the venue field of responses from the metadata.list_publishers endpoint.


Timestamps

Databento data includes various timestamp fields including (but not limited to):


  • ts_event: The matching-engine-received timestamp expressed as the number of nanoseconds since the UNIX epoch.
  • ts_in_delta: The matching-engine-sending timestamp expressed as the number of nanoseconds before ts_recv.
  • ts_recv: The capture-server-received timestamp expressed as the number of nanoseconds since the UNIX epoch.
  • ts_out: The Databento sending timestamp.

Nautilus data includes at least two timestamps (required by the Data contract):


  • ts_event: UNIX timestamp (nanoseconds) when the data event occurred.
  • ts_init: UNIX timestamp (nanoseconds) when the data object was initialized.

When decoding and normalizing Databento to Nautilus, we generally assign the Databento ts_recv value to the Nautilus ts_event field, as this timestamp is much more reliable and consistent, and is guaranteed to be monotonically increasing per instrument. The exceptions to this are the DatabentoImbalance and DatabentoStatistics data types, which have fields for all timestamps, as the types are defined specifically for the adapter.

Info

See the Databento documentation for further information on timestamps.

Data types

The following section discusses Databento schema -> Nautilus data type equivalence and considerations.


Info

See the Databento list of fields by schema guide.


Instrument definitions

Databento provides a single schema to cover all instrument classes; these are decoded to the appropriate Nautilus Instrument types.

The following Databento instrument classes are supported by NautilusTrader:


Databento instrument class  Code  Nautilus instrument type
Stock                       K     Equity
Future                      F     FuturesContract
Call                        C     OptionsContract
Put                         P     OptionsContract
Future spread               S     FuturesSpread
Option spread               T     OptionsSpread
Mixed spread                M     OptionsSpread
FX spot                     X     CurrencyPair
Bond                        B     Not yet available

MBO (market by order)

This schema is the highest granularity data offered by Databento, and represents full order book depth. Some messages also provide trade information, and so when decoding MBO messages Nautilus will produce an OrderBookDelta and optionally a TradeTick.


The Nautilus live data client will buffer MBO messages until an F_LAST flag is seen. A discrete OrderBookDeltas container object will then be passed to the registered handler.


Order book snapshots are also buffered into a discrete OrderBookDeltas container object, which occurs during the replay startup sequence.

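A strategy receives each buffered OrderBookDeltas container via its order book deltas handler. A minimal sketch, assuming self.instrument_id was set during strategy configuration:

from nautilus_trader.model.data import OrderBookDeltas

# Inside a Strategy subclass
def on_start(self) -> None:
    self.subscribe_order_book_deltas(self.instrument_id)

def on_order_book_deltas(self, deltas: OrderBookDeltas) -> None:
    # Each container holds the deltas buffered up to an F_LAST flag
    self.log.info(f"Received {len(deltas.deltas)} deltas for {deltas.instrument_id}")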

MBP-1 (market by price, top-of-book)

This schema represents the top-of-book only (quotes and trades). As with MBO messages, some messages carry trade information, and so when decoding MBP-1 messages Nautilus will produce a QuoteTick and also a TradeTick if the message is a trade.

OHLCV (bar aggregates)

The Databento bar aggregation messages are timestamped at the open of the bar interval. The Nautilus decoder will normalize the ts_event timestamps to the close of the bar (original ts_event + bar interval).

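As an illustration of this normalization, for a 1-minute bar the decoder shifts the timestamp forward by the bar interval:

# A 1-minute bar stamped by Databento at its open
databento_ts_event = 1_704_067_200_000_000_000  # 2024-01-01T00:00:00Z (open)
bar_interval_ns = 60 * 1_000_000_000            # 1-minute interval in nanoseconds

# Nautilus normalizes ts_event to the bar close
nautilus_ts_event = databento_ts_event + bar_interval_ns  # 2024-01-01T00:01:00Z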

Imbalance & Statistics

The Databento imbalance and statistics schemas cannot be represented as built-in Nautilus data types, so they have specific types defined in Rust: DatabentoImbalance and DatabentoStatistics. Python bindings are provided via pyo3 (Rust), so the types behave a little differently from built-in Nautilus data types: all attributes are pyo3-provided objects and are not directly compatible with certain methods which may expect a Cython-provided type. There are pyo3 -> legacy Cython object conversion methods available, which can be found in the API reference.

Here is a general pattern for converting a pyo3 Price to a Cython Price:


from nautilus_trader.model.objects import Price

price = Price.from_raw(pyo3_price.raw, pyo3_price.precision)

Additionally, requesting and subscribing to these data types requires the use of the lower-level generic methods for custom data types. The following example subscribes to the imbalance schema for the AAPL.XNAS instrument (Apple Inc trading on the Nasdaq exchange):

from nautilus_trader.adapters.databento import DATABENTO_CLIENT_ID
from nautilus_trader.adapters.databento import DatabentoImbalance
from nautilus_trader.model.data import DataType
from nautilus_trader.model.identifiers import InstrumentId

instrument_id = InstrumentId.from_str("AAPL.XNAS")
self.subscribe_data(
    data_type=DataType(DatabentoImbalance, metadata={"instrument_id": instrument_id}),
    client_id=DATABENTO_CLIENT_ID,
)

Or requesting the previous day's statistics schema for the ES.FUT parent symbol (all active E-mini S&P 500 futures contracts on the CME Globex exchange):

from nautilus_trader.adapters.databento import DATABENTO_CLIENT_ID
from nautilus_trader.adapters.databento import DatabentoStatistics
from nautilus_trader.model.data import DataType
from nautilus_trader.model.identifiers import InstrumentId

instrument_id = InstrumentId.from_str("ES.FUT.GLBX")
metadata = {
    "instrument_id": instrument_id,
    "start": "2024-03-06",
}
self.request_data(
    data_type=DataType(DatabentoStatistics, metadata=metadata),
    client_id=DATABENTO_CLIENT_ID,
)
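Data delivered via these generic subscriptions and requests arrives at the strategy's on_data handler. A minimal sketch of handling the imbalance messages:

from nautilus_trader.adapters.databento import DatabentoImbalance

# Inside a Strategy subclass
def on_data(self, data) -> None:
    if isinstance(data, DatabentoImbalance):
        # Attributes are pyo3 provided objects (see the conversion pattern above)
        self.log.info(f"Received imbalance: {data}")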

Performance considerations

When backtesting with Databento DBN data, there are two options:


  • Store the data in DBN (.dbn.zst) format files and decode to Nautilus objects on every run.
  • Convert the DBN files to Nautilus objects and then write to the data catalog once (stored as Nautilus Parquet format on disk).

Whilst the DBN -> Nautilus decoder is implemented in Rust and has been optimized, the best backtest performance will be achieved by writing the Nautilus objects to the data catalog, so that the decoding step is performed only once.

DataFusion provides a query engine backend to efficiently load and stream the Nautilus Parquet data from disk, which achieves extremely high throughput (at least an order of magnitude faster than converting DBN -> Nautilus on the fly for every backtest run).
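For example, once trades have been written to the catalog, a BacktestNode can stream them via a data configuration. A minimal sketch, with an illustrative catalog path and date range:

from nautilus_trader.config import BacktestDataConfig
from nautilus_trader.model.data import TradeTick
from nautilus_trader.model.identifiers import InstrumentId

data_config = BacktestDataConfig(
    catalog_path="/path/to/catalog",  # Illustrative path
    data_cls=TradeTick,
    instrument_id=InstrumentId.from_str("TSLA.XNAS"),
    start_time="2024-01-07",
    end_time="2024-02-06",
)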

Note

Performance benchmarks are currently under development.


Loading DBN data

You can load DBN files and convert the records to Nautilus objects using the DatabentoDataLoader class. There are two main purposes for doing so:


  • Pass the converted data to BacktestEngine.add_data directly for backtesting.
  • Pass the converted data to ParquetDataCatalog.write_data for later streaming use with a BacktestNode.

DBN data to a BacktestEngine

This code snippet demonstrates how to load DBN data and pass it to a BacktestEngine. Since the BacktestEngine needs an instrument added, we'll use a test instrument provided by the TestInstrumentProvider (you could also pass an instrument object which was parsed from a DBN file). The data is a month of TSLA (Tesla Inc) trades on the Nasdaq exchange:

from nautilus_trader.adapters.databento import DatabentoDataLoader
from nautilus_trader.test_kit.providers import TestInstrumentProvider

# Add instrument (assumes `engine` is an existing BacktestEngine)
TSLA_NASDAQ = TestInstrumentProvider.equity(symbol="TSLA")
engine.add_instrument(TSLA_NASDAQ)

# Decode data to legacy Cython objects
# (TEST_DATA_DIR here is an illustrative path to your DBN files)
loader = DatabentoDataLoader()
trades = loader.from_dbn_file(
    path=TEST_DATA_DIR / "databento" / "temp" / "tsla-xnas-20240107-20240206.trades.dbn.zst",
    instrument_id=TSLA_NASDAQ.id,
)

# Add data
engine.add_data(trades)
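With a venue and the data added (venue setup is not shown here), the engine can then be run as usual:

# Run the backtest over the loaded trades
engine.run()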

DBN data to a ParquetDataCatalog

This code snippet demonstrates how to load DBN data and write it to a ParquetDataCatalog. We pass a value of False for the as_legacy_cython flag, which will ensure the DBN records are decoded as pyo3 (Rust) objects. It's worth noting that legacy Cython objects can also be passed to write_data, but these need to be converted back to pyo3 objects under the hood (so passing pyo3 objects is an optimization).

from nautilus_trader.adapters.databento import DatabentoDataLoader
from nautilus_trader.model.identifiers import InstrumentId
from nautilus_trader.persistence.catalog import ParquetDataCatalog

# Initialize the catalog interface
# (will use the `NAUTILUS_PATH` env var as the path)
catalog = ParquetDataCatalog.from_env()

instrument_id = InstrumentId.from_str("TSLA.XNAS")

# Decode data to pyo3 objects
loader = DatabentoDataLoader()
trades = loader.from_dbn_file(
    path=TEST_DATA_DIR / "databento" / "temp" / "tsla-xnas-20240107-20240206.trades.dbn.zst",
    instrument_id=instrument_id,
    as_legacy_cython=False,  # An optimization for writing to the catalog
)

# Write data
catalog.write_data(trades)
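On subsequent runs the trades can then be read back from the catalog without any DBN decoding step. A minimal sketch, assuming the catalog's query methods (see the API reference for exact signatures):

# Query the written trades back from the catalog
trades = catalog.trade_ticks(instrument_ids=["TSLA.XNAS"])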

Info

See also the Data concepts guide.


Real-time client architecture

The DatabentoDataClient is a Python class which contains other Databento adapter classes. There are two DatabentoLiveClients per Databento dataset:


  • One for MBO (order book deltas) real-time feeds.
  • One for all other real-time feeds.

Warning

There is currently a limitation that all MBO (order book deltas) subscriptions for a dataset have to be made at node startup, so that data can be replayed from the beginning of the session. If subsequent subscriptions arrive after start, an error will be logged (and the subscription ignored).

There is no such limitation for any of the other Databento schemas.


A single DatabentoHistoricalClient instance is reused between the DatabentoInstrumentProvider and DatabentoDataClient, which makes historical instrument definition and data requests.

Configuration

The most common use case is to configure a live TradingNode to include a Databento data client. To achieve this, add a DATABENTO section to your client configuration(s):


from nautilus_trader.adapters.databento import DATABENTO
from nautilus_trader.config import InstrumentProviderConfig
from nautilus_trader.config import TradingNodeConfig

config = TradingNodeConfig(
    ...,  # Omitted
    data_clients={
        DATABENTO: {
            "api_key": None,  # 'DATABENTO_API_KEY' env var
            "http_gateway": None,  # Override for the default HTTP historical gateway
            "live_gateway": None,  # Override for the default raw TCP real-time gateway
            "instrument_provider": InstrumentProviderConfig(load_all=True),
            "instrument_ids": None,  # Nautilus instrument IDs to load on start
            "parent_symbols": None,  # Databento parent symbols to load on start
        },
    },
    ...,  # Omitted
)

Then, create a TradingNode and add the client factory:


from nautilus_trader.adapters.databento.factories import DatabentoLiveDataClientFactory
from nautilus_trader.live.node import TradingNode

# Instantiate the live trading node with a configuration
node = TradingNode(config=config)

# Register the client factory with the node
node.add_data_client_factory(DATABENTO, DatabentoLiveDataClientFactory)

# Finally build the node
node.build()

Configuration parameters

  • api_key: The Databento API secret key. If None, the DATABENTO_API_KEY environment variable will be sourced.
  • http_gateway: The historical HTTP client gateway override (useful for testing and typically not needed by most users).
  • live_gateway: The raw TCP real-time client gateway override (useful for testing and typically not needed by most users).
  • parent_symbols: The Databento parent symbols to subscribe to instrument definitions for on start. This is a map of Databento dataset keys -> sequences of parent symbols, e.g. {'GLBX.MDP3': ['ES.FUT', 'ES.OPT']} (for all E-mini S&P 500 futures and options products).
  • instrument_ids: The instrument IDs to request instrument definitions for on start.
  • timeout_initial_load: The timeout (seconds) to wait for instruments to load (concurrently per dataset).
  • mbo_subscriptions_delay: The timeout (seconds) to wait for MBO/L3 subscriptions (concurrently per dataset). After the timeout, the MBO order book feed will start and replay messages from the start of the week, which encompasses the initial snapshot and then all deltas.
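For example, a client configuration which loads definitions for specific instruments and parent symbols on start might look like this (a minimal sketch; all values below are illustrative):

from nautilus_trader.adapters.databento import DATABENTO
from nautilus_trader.config import InstrumentProviderConfig

data_clients = {
    DATABENTO: {
        "api_key": None,  # Sources the 'DATABENTO_API_KEY' env var
        "instrument_provider": InstrumentProviderConfig(load_all=True),
        "instrument_ids": ["AAPL.XNAS"],  # Illustrative
        "parent_symbols": {"GLBX.MDP3": ["ES.FUT", "ES.OPT"]},  # Illustrative
        "timeout_initial_load": 10.0,  # Illustrative (seconds)
        "mbo_subscriptions_delay": 3.0,  # Illustrative (seconds)
    },
}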