
# FiloDB HTTP API

## FiloDB Specific APIs

### GET /__members

Internal API used to return seed nodes for FiloDB Cluster initialization. See the akka-bootstrapper docs.
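A local invocation might look like this (assuming the port 8080 used elsewhere in this document; the response format is defined by akka-bootstrapper):

```
curl http://localhost:8080/__members
```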

### GET /admin/health

Currently returns `All good` if the node is up.

TODO: expose more detailed health, status, etc.?
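A quick check from the command line (assuming a local node on the port used in the examples below):

```
curl http://localhost:8080/admin/health
# All good
```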

### POST /admin/loglevel/{loggerName}

Post the new log level (one of `trace`, `debug`, `info`, `warn`, `error`) with the logger name in dot notation. This dynamically changes the log level for the targeted node only.

Example to turn on TRACE logging for Kryo serialization:

```
curl -d 'trace' http://localhost:8080/admin/loglevel/com.esotericsoftware.minlog
```

Returns plain text describing what was changed (not JSON).

### GET /api/v1/cluster

* Returns a JSON list of the datasets currently set up in the cluster for ingestion

```
{
    "status": "success",
    "data": ["prometheus"]
}
```
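For example (hypothetical local node):

```
curl http://localhost:8080/api/v1/cluster
```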

### GET /api/v1/cluster/{dataset}/status

* Returns the shard status of the given dataset
* Returns 404 if the dataset is not currently set up for ingestion

```
{
    "status": "success",
    "data": [
        { "shard": 0,
          "status": "ShardStatusActive",
          "address": "akka://[email protected]:2552" },
        { "shard": 1,
          "status": "ShardStatusActive",
          "address": "akka://[email protected]:2552" }
    ]
}
```
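For example, to check the shard status of a dataset named prometheus (name is illustrative):

```
curl http://localhost:8080/api/v1/cluster/prometheus/status
```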

### GET /api/v1/cluster/{dataset}/statusByAddress

* Returns the shard status, grouped by node, for the given dataset
* Returns 404 if the dataset is not currently set up for ingestion

```
{
    "status": "success",
    "data": [
        { "address": "akka://[email protected]:2552",
          "shardList": [
            { "shard": 0,
              "status": "ShardStatusActive" },
            { "shard": 1,
              "status": "ShardStatusRecovery(94)" }
          ]
        },
        { "address": "akka://[email protected]:53532",
          "shardList": [
            { "shard": 2,
              "status": "ShardStatusActive" },
            { "shard": 3,
              "status": "ShardStatusActive" }
          ]
        }
    ]
}
```
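For example (dataset name is illustrative):

```
curl http://localhost:8080/api/v1/cluster/prometheus/statusByAddress
```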

### POST /api/v1/cluster/{dataset}

Initializes streaming ingestion of a dataset across the whole FiloDB cluster. The POST body describes the ingestion source and parameters, such as Kafka configuration. This only needs to be done once; the configuration is persisted to the MetaStore and automatically restored on restarts.

* POST body should be an ingestion source configuration, such as the one in conf/timeseries-dev-source.conf. It may be in Typesafe Config format or JSON.
* A successful POST results in something like `{"status": "success", "data": []}`
* 400 is returned if the POST body cannot be parsed or does not contain all the necessary configuration keys
* If the dataset has already been set up, the response is a 409 Conflict with

```
{
    "status": "error",
    "errorType": "DatasetExists",
    "error": "The dataset timeseries has already been setup for ingestion"
}
```
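A sketch of setting up ingestion using the sample source config from the repo (dataset name and port are illustrative):

```
curl -i -X POST --data-binary @conf/timeseries-dev-source.conf \
     http://localhost:8080/api/v1/cluster/prometheus
```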

### POST /api/v1/cluster/{dataset}/stopshards

Stops the given shards. The POST body is a stop-shard config containing the list of shards to stop.

* POST body should be an `UnassignShardConfig` in JSON format, as follows:

```
{
    "shardList": [
       2, 3
    ]
}
```

* A successful POST results in something like `{"status": "success", "data": []}`
* 400 is returned if the POST body cannot be parsed or if any of the following validations fail:
  1. The given dataset does not exist
  2. Any of the given shards is invalid:
     1. Shard numbers must be >= 0 and < `maxAllowedShard`
     2. Each shard must currently be assigned to a node
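A sketch of stopping shards 2 and 3 (dataset name and port are illustrative):

```
curl -X POST -H 'Content-Type: application/json' \
     -d '{"shardList": [2, 3]}' \
     http://localhost:8080/api/v1/cluster/prometheus/stopshards
```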

### POST /api/v1/cluster/{dataset}/startshards

Starts the given shards on the specified node. The POST body is a start-shard config containing both the destination node address and the list of shards to start.

* POST body should be an `AssignShardConfig` in JSON format, as follows:

```
{
    "address": "akka.tcp://[email protected]:2552",
    "shardList": [
       2, 3
    ]
}
```

* A successful POST results in something like `{"status": "success", "data": []}`
* 400 is returned if the POST body cannot be parsed or if any of the following validations fail:
  1. The given dataset does not exist
  2. The given node does not exist
  3. Any of the given shards is invalid:
     1. Shard numbers must be >= 0 and < `maxAllowedShard`
     2. Each shard must not already be assigned to a node
  4. The node does not have enough capacity to take on the new shards
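Similarly, a sketch of starting shards 2 and 3 on a given node (address, dataset name, and port are illustrative):

```
curl -X POST -H 'Content-Type: application/json' \
     -d '{"address": "akka.tcp://[email protected]:2552", "shardList": [2, 3]}' \
     http://localhost:8080/api/v1/cluster/prometheus/startshards
```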

## Prometheus-compatible APIs

* Compatible with the Grafana Prometheus plugin

### GET /promql/{dataset}/api/v1/query_range?query={promQLString}&start={startTime}&step={step}&end={endTime}

Used to issue a PromQL query over a time range, with start and end timestamps and regular step intervals. For more details, see the Prometheus HTTP API documentation on range queries.

Params:
* `explainOnly` -- returns the ExecPlan instead of the query results if `true`
* `spread` -- overrides the default spread
* `histogramMap` -- if `true`, returns histograms in results as a map/object of bucket values. If `false`, histograms are automatically translated to Prometheus bucket-per-vector format. Defaults to `false`.

Normal/double value output:

      "values": [
          [
            1580319538,
            "24.0"
          ],
          [
            1580319598,
            "24.0"
          ],
          [
            1580319658,
            "30.0"
          ],
      ]

`histogramMap=true` output:

      "values": [
          [
            1580319538,
            {
              "100.0": 18,
              "1000.0": 20,
              "30.0": 0,
              "100000.0": 24,
              "10.0": 0,
              "30000.0": 24,
              "3000.0": 22,
              "10000.0": 24,
              "300.0": 18,
              "+Inf": 24
            }
          ]
      ]
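Putting the parameters together, a sketch of a range query at a 60-second step (dataset name, PromQL expression, and epoch-second timestamps are illustrative):

```
curl -G 'http://localhost:8080/promql/prometheus/api/v1/query_range' \
     --data-urlencode 'query=sum(rate(http_requests_total[5m]))' \
     --data-urlencode 'start=1580316000' \
     --data-urlencode 'end=1580319600' \
     --data-urlencode 'step=60'
```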

### GET /promql/{dataset}/api/v1/query?query={promQLString}&time={timestamp}

Used to issue a PromQL query at a single instant in time. Can also be used to query raw data by issuing a PromQL range expression. For more details, see the Prometheus HTTP API documentation on instant queries.

Params:
* `explainOnly` -- returns the ExecPlan instead of the query results if `true`
* `spread` -- overrides the default spread
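For example, a sketch of an instant query (dataset name, PromQL expression, and timestamp are illustrative):

```
curl -G 'http://localhost:8080/promql/prometheus/api/v1/query' \
     --data-urlencode 'query=http_requests_total{job="api"}' \
     --data-urlencode 'time=1580319538'
```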

### POST /promql/{dataset}/api/v1/read

Used to extract raw data for integration with other TSDB systems.

Important note: the Prometheus API should not be used to extract raw data from FiloDB at scale. The current implementation applies the same `limit` settings as the Akka Actor interface.

### GET /api/v1/label/{label_name}/values

* Returns the values (up to a limit) for a given label or tag in the internal index. NOTE: this only searches the local node; it is not a distributed query.
* Returns 404 if there is no such label indexed

```
{
   "status" : "success",
   "data" : [
      "node",
      "prometheus"
   ]
}
```
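For example, to list the indexed values of a hypothetical label named job:

```
curl http://localhost:8080/api/v1/label/job/values
```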

## Prometheus APIs not supported

Anything not listed above, especially:

* GET /api/v1/targets
* GET /api/v1/alertmanagers