Internal API used to return seed nodes for FiloDB Cluster initialization. See the akka-bootstrapper docs.
Currently returns `All good` if the node is up.
TODO: expose more detailed health, status, etc.?
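A quick check from the command line; a sketch only, assuming the health endpoint is served at `/admin/health` on port 8080, matching the log-level example below:

```
# Assumed path and port; adjust to your deployment
curl http://localhost:8080/admin/health
# All good
```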
POST the new log level (trace, debug, info, warn, or error) with the logger name in dot notation. Allows dynamically changing the log level for the node in question only.
Example to turn on TRACE logging for Kryo serialization:
```
curl -d 'trace' http://localhost:8080/admin/loglevel/com.esotericsoftware.minlog
```
Returns text explaining what got changed (sorry, not JSON).
- Returns a JSON list of the datasets currently set up in the cluster for ingestion
```
{
  "status": "success",
  "data": ["prometheus"]
}
```
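For example, assuming this endpoint is mounted at `/api/v1/cluster` on port 8080 (both assumptions for illustration):

```
curl http://localhost:8080/api/v1/cluster
```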
- Returns the shard status of the given dataset
- Returns 404 if the dataset is not currently set up for ingestion
```
{
  "status": "success",
  "data": [
    { "shard": 0,
      "status": "ShardStatusActive",
      "address": "akka://[email protected]:2552" },
    { "shard": 1,
      "status": "ShardStatusActive",
      "address": "akka://[email protected]:2552" }
  ]
}
```
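A sketch of querying shard status for a dataset named `prometheus`, assuming a `/api/v1/cluster/{dataset}/status` path:

```
curl http://localhost:8080/api/v1/cluster/prometheus/status
```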
- Returns the shard status grouped by node for the given dataset
- Returns 404 if the dataset is not currently set up for ingestion
```
{
  "status": "success",
  "data": [
    { "address": "akka://[email protected]:2552",
      "shardList": [
        { "shard": 0,
          "status": "ShardStatusActive" },
        { "shard": 1,
          "status": "ShardStatusRecovery(94)" }
      ]
    },
    { "address": "akka://[email protected]:53532",
      "shardList": [
        { "shard": 2,
          "status": "ShardStatusActive" },
        { "shard": 3,
          "status": "ShardStatusActive" }
      ]
    }
  ]
}
```
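Likewise, a per-node status query might look like the following (the `/statusByAddress` suffix is an assumption):

```
curl http://localhost:8080/api/v1/cluster/prometheus/statusByAddress
```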
Initializes streaming ingestion of a dataset across the whole FiloDB cluster. The POST body describes the ingestion source and parameters, such as the Kafka configuration. This only needs to be done once, as the configuration is persisted to the MetaStore and automatically restored on restarts.
- POST body should be an ingestion source configuration, such as the one in `conf/timeseries-dev-source.conf`. It can be in Typesafe Config format or JSON.
- A successful POST results in something like
```
{"status": "success", "data": []}
```
- 400 is returned if the POST body cannot be parsed or does not contain all the necessary configuration keys
- If the dataset has already been set up, the response will be a 409 (Conflict) with
```
{
  "status": "error",
  "errorType": "DatasetExists",
  "error": "The dataset timeseries has already been setup for ingestion"
}
```
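A command-line sketch of setting up ingestion, assuming the endpoint is `POST /api/v1/cluster/{dataset}` and the server listens on localhost:8080:

```
# Post the source config shipped with the repo for a dataset named timeseries
curl -X POST --data-binary @conf/timeseries-dev-source.conf \
     http://localhost:8080/api/v1/cluster/timeseries
```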
Stops the given shards. The POST body is a stop-shard configuration containing the list of shards to stop; a curl sketch follows the validation list below.
- POST body should be an UnassignShardConfig in JSON format as follows:
```
{
  "shardList": [2, 3]
}
```
- A successful POST results in something like
```
{"status": "success", "data": []}
```
- 400 is returned if the POST body cannot be parsed or any of the following validations fail:
  - The given dataset exists
  - All the given shards are valid:
    - Shard number is >= 0 and < maxAllowedShard
    - Each shard is currently assigned to a node
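A sketch of stopping shards 2 and 3 of a dataset named `timeseries` (the `/stopshards` path suffix is an assumption):

```
curl -X POST -H 'Content-Type: application/json' \
     -d '{"shardList": [2, 3]}' \
     http://localhost:8080/api/v1/cluster/timeseries/stopshards
```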
Starts the given shards on the given node. The POST body is a start-shard configuration containing both the destination node address and the list of shards to start; a curl sketch follows the validation list below.
- POST body should be an AssignShardConfig in JSON format as follows:
```
{
  "address": "akka.tcp://[email protected]:2552",
  "shardList": [2, 3]
}
```
- A successful POST results in something like
```
{"status": "success", "data": []}
```
- 400 is returned if the POST body cannot be parsed or any of the following validations fail:
  - The given dataset exists
  - The given node exists
  - All the given shards are valid:
    - Shard number is >= 0 and < maxAllowedShard
    - Each shard is not currently assigned to any node
  - There is enough capacity on the node to take on the new shards
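A sketch of starting shards 2 and 3 on a specific node (the `/startshards` path suffix is an assumption):

```
curl -X POST -H 'Content-Type: application/json' \
     -d '{"address": "akka.tcp://[email protected]:2552", "shardList": [2, 3]}' \
     http://localhost:8080/api/v1/cluster/timeseries/startshards
```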
- Compatible with the Grafana Prometheus plugin
```
GET /promql/{dataset}/api/v1/query_range?query={promQLString}&start={startTime}&step={step}&end={endTime}
```
Used to issue a PromQL query for a time range, with `start` and `end` timestamps and at regular `step` intervals. For more details, see the Prometheus HTTP API documentation on [range queries](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries).
params:
- `explainOnly` -- returns an ExecPlan instead of the query results if `true`
- `spread` -- overrides the default spread
- `histogramMap` -- if `true`, returns histograms in results as a map/object of bucket values. If `false`, histograms are automatically translated to the Prometheus bucket-per-vector format. Defaults to `false`.
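A sample range query over the window shown in the outputs below, at a 60-second step (the dataset name `prometheus`, the metric, and the port are illustrative assumptions):

```
# -g turns off curl's bracket globbing so the [1m] range selector passes through
curl -g 'http://localhost:8080/promql/prometheus/api/v1/query_range?query=sum(rate(http_requests_total[1m]))&start=1580319538&end=1580319658&step=60'
```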
Normal/double value output:
"values": [
[
1580319538,
"24.0"
],
[
1580319598,
"24.0"
],
[
1580319658,
"30.0"
],
]
`histogramMap=true` output:
"values": [
[
1580319538,
{
"100.0": 18,
"1000.0": 20,
"30.0": 0,
"100000.0": 24,
"10.0": 0,
"30000.0": 24,
"3000.0": 22,
"10000.0": 24,
"300.0": 18,
"+Inf": 24
}
]
]
Used to issue a PromQL query at a single instant, given by the `time` parameter. Can also be used to query raw data by issuing a PromQL range expression. For more details, see the Prometheus HTTP API documentation on [instant queries](https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries).
params:
- `explainOnly` -- returns an ExecPlan instead of the query results
- `spread` -- overrides the default spread
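An instant-query sketch, assuming the endpoint mirrors Prometheus at `/promql/{dataset}/api/v1/query` (the dataset name, metric, and port are assumptions):

```
curl 'http://localhost:8080/promql/prometheus/api/v1/query?query=up&time=1580319538'
```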
Used to extract raw data for integration with other TSDB systems.
- Input: ReadRequest Protobuf
- Output: ReadResponse Protobuf. See the Prometheus remote proto definition for more details.
Important note: the Prometheus API should not be used for extracting raw data out of FiloDB at scale. The current implementation is subject to the same `limit` settings that apply in the Akka Actor interface.
- Returns the values (up to a limit) for a given label or tag in the internal index. NOTE: it only searches the local node; this is not a distributed query.
- Returns 404 if there is no such label indexed
```
{
  "status": "success",
  "data": [
    "node",
    "prometheus"
  ]
}
```
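A sketch of fetching the values of a `job` label, assuming the path mirrors Prometheus' label-values API under the `/promql/{dataset}` prefix:

```
curl http://localhost:8080/promql/prometheus/api/v1/label/job/values
```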
Anything not listed above is not implemented. In particular:
- GET /api/v1/targets
- GET /api/v1/alertmanagers