Transactions information from the Antelope blockchains, powered by Substreams
Path | Description |
---|---|
`GET /actions/tx_hash/{tx_hash}` | Actions by transaction |
`GET /actions/block_number/{block_number}` | Actions by block |
`GET /actions/block_date/{block_date}` | Actions by date |
`GET /authorizations/tx_hash/{tx_hash}` | Authorizations by transaction |
`GET /authorizations/block_number/{block_number}` | Authorizations by block |
`GET /authorizations/block_date/{block_date}` | Authorizations by date |
`GET /blocks/hash/{hash}` | Blocks by hash |
`GET /blocks/number/{number}` | Blocks by number |
`GET /blocks/date/{date}` | Blocks by date |
`GET /db_ops/tx_hash/{tx_hash}` | Database operations by transaction |
`GET /db_ops/block_number/{block_number}` | Database operations by block |
`GET /db_ops/block_date/{block_date}` | Database operations by date |
`GET /transactions/hash/{hash}` | Transactions by hash |
`GET /transactions/block_number/{block_number}` | Transactions by block |
`GET /transactions/block_date/{block_date}` | Transactions by date |
Note

All endpoints support `first`, `skip`, `order_by`, and `order_direction` as additional query parameters.
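As an illustration, the optional query parameters above can be combined into a request URL. This is a minimal sketch: the block number is a placeholder, and the base URL assumes the API's default local address.

```typescript
// Build a request URL with the optional pagination parameters listed above.
// The block number is a placeholder; the base URL assumes the default local address.
const url = new URL("/transactions/block_number/123456", "http://localhost:8080");
url.searchParams.set("first", "10");              // page size
url.searchParams.set("skip", "20");               // offset into the result set
url.searchParams.set("order_by", "block_number"); // sort column
url.searchParams.set("order_direction", "desc");  // sort order
console.log(url.toString());
// http://localhost:8080/transactions/block_number/123456?first=10&skip=20&order_by=block_number&order_direction=desc
```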
Path | Description |
---|---|
`GET /openapi` | OpenAPI specification |
`GET /version` | API version and Git short commit hash |
Path | Description |
---|---|
`GET /health` | Checks database connection |
`GET /metrics` | Prometheus metrics |
Use the `Variables` tab at the bottom to add your API key (get one at https://app.pinax.network/):
{
"X-Api-Key": "PINAX_API_KEY"
}
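A sketch of attaching the key programmatically. The header name comes from the snippet above; the key value is a placeholder, and the commented-out request assumes a fetch-capable runtime and the API's default local address.

```typescript
// Attach the API key via the X-Api-Key header shown above.
const apiKey = "PINAX_API_KEY"; // placeholder — substitute your real key
const headers: Record<string, string> = { "X-Api-Key": apiKey };

// Example request (commented out so the snippet stays self-contained):
// const res = await fetch("http://localhost:8080/version", { headers });
console.log(headers["X-Api-Key"]);
```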
- The response contains pagination and statistics.
- ClickHouse databases should follow the `{chain}_transactions_{version}` naming scheme.
- The `substreams-raw-blocks` Antelope spkg is used as the data source.
- A Substreams sink is required for loading data into ClickHouse. We recommend Substreams Sink SQL.
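The `{chain}_transactions_{version}` naming scheme can be sketched as a small helper. This function is hypothetical, for illustration only; it is not part of the API.

```typescript
// Hypothetical helper implementing the {chain}_transactions_{version} scheme.
const transactionsDb = (chain: string, version: string): string =>
  `${chain}_transactions_${version}`;

console.log(transactionsDb("eos", "v1")); // eos_transactions_v1, as used in the setup below
```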
Example of how to set up the ClickHouse backend for sinking EOS data.
- Start the ClickHouse server
clickhouse server
- Create the transactions database
echo "CREATE DATABASE eos_transactions_v1" | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
- Run the `create_schema.sh` script
./create_schema.sh -o /tmp/schema.sql
- Execute the schema
cat /tmp/schema.sql | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
- Run the sink
substreams-sink-sql run clickhouse://<username>:<password>@<host>:9000/eos_transactions_v1 \
https://github.com/pinax-network/substreams-raw-blocks/releases/download/antelope-v0.3.0/raw-blocks-antelope-v0.3.0.spkg `#Substreams package` \
-e eos.substreams.pinax.network:443 `#Substreams endpoint` \
1: `#Block range <start>:<end>` \
--final-blocks-only --undo-buffer-size 1 --on-module-hash-mistmatch=warn --batch-block-flush-interval 100 --development-mode `#Additional flags`
- Start the API
# Will be available on localhost:8080 by default
antelope-transactions-api --host <host> --database eos_transactions_v1 --username <username> --password <password> --verbose
If you run ClickHouse in a cluster, change steps 2 and 3:
- Create the transactions database
echo "CREATE DATABASE eos_transactions_v1 ON CLUSTER <cluster>" | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
- Run the `create_schema.sh` script (the SQL sink should handle creating the schema)

./create_schema.sh -o /tmp/schema.sql -c <cluster>

- Follow the same steps as without a cluster.
Warning
Linux x86 only
$ wget https://github.com/pinax-network/antelope-transactions-api/releases/download/v0.3.4/antelope-transactions-api
$ chmod +x ./antelope-transactions-api
$ ./antelope-transactions-api --help
Usage: antelope-transactions-api [options]
Transactions information from the Antelope blockchains
Options:
-V, --version output the version number
-p, --port <number> HTTP port on which to attach the API (default: "8080", env: PORT)
--hostname <string> Server listen on HTTP hostname (default: "localhost", env: HOSTNAME)
--host <string> Database HTTP hostname (default: "http://localhost:8123", env: HOST)
--database <string> The database to use inside ClickHouse (default: "default", env: DATABASE)
--username <string> Database user (default: "default", env: USERNAME)
--password <string> Password associated with the specified username (default: "", env: PASSWORD)
--max-limit <number> Maximum LIMIT queries (default: 10000, env: MAX_LIMIT)
-v, --verbose <boolean> Enable verbose logging (choices: "true", "false", default: false, env: VERBOSE)
-h, --help display help for command
# API Server
PORT=8080
HOSTNAME=localhost
# ClickHouse Database
HOST=http://127.0.0.1:8123
DATABASE=default
USERNAME=default
PASSWORD=
MAX_LIMIT=500
# Logging
VERBOSE=true
- Pull from GitHub Container registry
For latest tagged release
docker pull ghcr.io/pinax-network/antelope-transactions-api:latest
For head of the `main` branch
docker pull ghcr.io/pinax-network/antelope-transactions-api:develop
- Build from source
docker build -t antelope-transactions-api .
- Run with the `.env` file
docker run -it --rm --env-file .env ghcr.io/pinax-network/antelope-transactions-api
See CONTRIBUTING.md.
Install Bun
bun install
bun dev
Tests
bun lint
bun test