REST Endpoints
All HTTP endpoints with request/response schemas
This page is the complete reference for every HTTP/REST endpoint exposed by Nabu Store. You will find the URL paths, supported HTTP methods, request and response schemas, authentication requirements, replication policy options, and error codes for the Blob, Cluster, and Internal services. Use this reference when building client integrations, automating workflows, or debugging API interactions against a running Nabu Store cluster.
Before making API requests you need:
- A running Nabu Store node or cluster (single-node or multi-node)
- Network access to the node's HTTP listener (default port varies by deployment — confirm with your cluster configuration)
- A valid bearer token or API key issued by your cluster's authentication provider (see Authentication)
- curl 7.68+, or any HTTP client that supports sending JSON request bodies and binary payloads
- Familiarity with base-16 (hex) blob ID encoding; all blob IDs returned by the API are 16-byte values rendered as hex strings
- For Kubernetes deployments: kubectl access to resolve the service endpoint before issuing requests
Nabu Store's REST gateway is included in the standard server binary — no separate installation step is required. To verify the API is reachable after starting your node:
1. Start (or confirm) your Nabu Store node is running.

   # Single-node example
   nabu-store start --config /etc/nabu/config.yaml

2. Confirm the HTTP listener is accepting connections.

   curl -i http://<node-host>:<api-port>/healthz

   A 200 OK response indicates the API gateway is ready.

3. Obtain an authentication token (replace with your IdP or static token as configured).

   export NABU_TOKEN="<your-bearer-token>"

4. Set a convenience base-URL variable used throughout the examples on this page.

   export NABU_API="http://<node-host>:<api-port>"
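The steps above can be collapsed into a single sanity check. The helper below is an illustrative sketch, not part of the product: it assumes NABU_API and NABU_TOKEN are exported as above, and that an authenticated GET /v1/blobs?limit=1 is a cheap way to confirm the token is accepted.

```shell
# Sketch: fail fast if the gateway is down or the token is rejected.
# Assumes NABU_API and NABU_TOKEN are exported as in the steps above.
nabu_ready() {
  local base="$1" token="$2" status

  # /healthz indicates the gateway is up (no auth required).
  status="$(curl -s -o /dev/null -w '%{http_code}' "$base/healthz")"
  [ "$status" = "200" ] || { echo "gateway not ready (HTTP $status)" >&2; return 1; }

  # An authenticated list call verifies the token; 401 means a bad token.
  status="$(curl -s -o /dev/null -w '%{http_code}' \
    -H "Authorization: Bearer $token" "$base/v1/blobs?limit=1")"
  [ "$status" = "200" ] || { echo "auth check failed (HTTP $status)" >&2; return 1; }
  echo "ok"
}
```

Run it as `nabu_ready "$NABU_API" "$NABU_TOKEN"`; it prints `ok` when both checks pass.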
The table below describes the request-level fields and server-side options that affect API behaviour. These are not startup flags; they are values you supply per request or that the cluster operator sets in the server configuration file.
Replication policy
Every PUT request accepts a policy field that controls how Nabu Store protects the blob's durability. Choose based on your tolerance for storage overhead versus node-failure resilience.
| Policy constant | Value sent in JSON | Replicas / shards | Minimum nodes required | Storage overhead |
|---|---|---|---|---|
| REPLICATION_POLICY_UNSPECIFIED | 0 | Falls back to REPLICA3 | 3 | 3× |
| REPLICATION_POLICY_NONE | 1 | 1 copy, no redundancy | 1 | 1× |
| REPLICATION_POLICY_REPLICA2 | 2 | 2 full copies | 2 | 2× |
| REPLICATION_POLICY_REPLICA3 | 3 | 3 full copies (default) | 3 | 3× |
| REPLICATION_POLICY_EC42 | 4 | 4 data + 2 parity shards | 6 | 1.5× |
| REPLICATION_POLICY_EC82 | 5 | 8 data + 2 parity shards | 10 | 1.25× |
EC policies (EC42, EC82) stripe the blob across nodes in 1 MiB shards (16 MiB for blobs larger than 64 MiB) and require at least data + parity nodes to be available at write time. Up to parity shard-store failures (two, for both EC policies) are tolerated before the write is rejected.
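The minimum-node and overhead columns in the table fall out of the shard counts directly. The small sketch below (illustrative only, not an official helper) derives them:

```shell
# Sketch: derive an EC policy's minimum node count and storage overhead
# from its data/parity shard counts (see the replication-policy table).
ec_min_nodes() { echo $(( $1 + $2 )); }                       # data + parity
ec_overhead()  { awk "BEGIN { printf \"%.2fx\", ($1 + $2) / $1 }"; }

ec_min_nodes 4 2        # EC42 → 6
ec_overhead 4 2; echo   # EC42 → 1.50x
```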
Pagination defaults
The List endpoint uses cursor-based pagination. If you omit limit, the server defaults to 100 items per page. The maximum is not enforced by the API itself; choose a limit appropriate to your client's memory budget.
Blob ID format
Blob IDs are deterministic SHA-256-derived 16-byte values computed from the content of the blob at write time. You cannot choose your own ID; you receive the ID in the PUT response and use it for all subsequent operations. IDs are encoded as hex strings in JSON and as raw bytes in binary/streaming contexts.
All endpoints follow a consistent pattern:
POST /v1/blobs → store a blob
GET /v1/blobs/{id} → retrieve a blob
DELETE /v1/blobs/{id} → delete a blob
HEAD /v1/blobs/{id} → check existence and fetch metadata
GET /v1/blobs → list blobs (paginated)
POST /v1/cluster/join → add a node to the cluster
POST /v1/cluster/leave → remove a node from the cluster
POST /v1/cluster/heartbeat → send a node heartbeat
GET /v1/cluster/state → retrieve cluster topology
POST /v1/cluster/sync-ring → synchronise the consistent-hash ring
Authentication
Include your bearer token in every request:
Authorization: Bearer $NABU_TOKEN
Requests without a valid token receive 401 Unauthorized.
Content types
| Operation | Request Content-Type | Response Content-Type |
|---|---|---|
| Store blob (JSON body) | application/json | application/json |
| Store blob (raw binary) | application/octet-stream | application/json |
| Retrieve blob | — | application/octet-stream |
| All cluster endpoints | application/json | application/json |
Blob IDs in URLs
When a blob ID appears in a URL path or query parameter, encode it as a lowercase hex string (32 hex characters representing the 16-byte ID). Example: a3f2c1d4e5b6a7f8c9d0e1f2a3b4c5d6.
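A client can catch malformed IDs before issuing a request. The helper below is a sketch of the rule just stated (exactly 32 lowercase hex characters), not part of any official SDK:

```shell
# Sketch: check that a blob ID is 32 lowercase hex characters (16 bytes)
# before using it in a URL, avoiding a guaranteed 400 round-trip.
valid_blob_id() {
  case "$1" in
    *[!0-9a-f]*|"") return 1 ;;   # empty, or contains a non-hex character
  esac
  [ "${#1}" -eq 32 ]              # exactly 32 hex characters
}

valid_blob_id "a3f2c1d4e5b6a7f8c9d0e1f2a3b4c5d6" && echo valid
# → valid
```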
Error handling
All error responses share this envelope:
{
"code": 404,
"error": "blob not found",
"message": "blob a3f2c1d4...c5d6 not found on any node"
}
| HTTP status | Meaning |
|---|---|
| 200 OK | Request succeeded |
| 201 Created | Blob stored successfully |
| 400 Bad Request | Missing or malformed field (e.g., invalid blob ID length) |
| 401 Unauthorized | Missing or invalid bearer token |
| 404 Not Found | Blob does not exist on any node |
| 409 Conflict | Blob already exists (idempotent — treated as success on PUT) |
| 412 Precondition Failed | Not enough nodes available for the requested EC policy |
| 500 Internal Server Error | Unexpected server-side error |
| 503 Service Unavailable | Shard-store failures exceeded the EC parity tolerance |
Store a blob with the default replication policy
Send the blob as a base64-encoded value inside a JSON body, or post raw binary with application/octet-stream.
curl -s -X POST "$NABU_API/v1/blobs" \
-H "Authorization: Bearer $NABU_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"data": "SGVsbG8gTmFidSBTdG9yZSE=",
"policy": 3,
"labels": {"env": "prod", "model": "llama3"}
}'
Expected response (201 Created):
{
"id": "a3f2c1d4e5b6a7f8c9d0e1f2a3b4c5d6",
"size": 17
}
Save the returned id — it is required for all subsequent operations on this blob.
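For reference, the data value in the request above is plain standard base64 of the payload bytes; the 17-byte payload matches the size in the response:

```shell
# The "data" field is standard base64 of the raw bytes (17 bytes here,
# matching the "size" in the response above).
printf 'Hello Nabu Store!' | base64
# → SGVsbG8gTmFidSBTdG9yZSE=
```

For files, `base64 < path/to/file` works the same way (GNU coreutils wraps long output; pass `-w0` to disable wrapping).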
Store a large blob using erasure coding (EC 4+2)
Use EC42 for blobs where storage efficiency matters more than simplicity. You need at least 6 active nodes.
curl -s -X POST "$NABU_API/v1/blobs" \
-H "Authorization: Bearer $NABU_TOKEN" \
-H "Content-Type: application/octet-stream" \
-H "X-Nabu-Policy: 4" \
--data-binary @/path/to/large-model-weights.bin
Expected response (201 Created):
{
"id": "7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f",
"size": 134217728
}
Retrieve a blob
curl -s -X GET "$NABU_API/v1/blobs/a3f2c1d4e5b6a7f8c9d0e1f2a3b4c5d6" \
-H "Authorization: Bearer $NABU_TOKEN" \
-o retrieved-blob.bin
Expected outcome: The blob bytes are written to retrieved-blob.bin. The server attempts local retrieval first; on a cache miss it fetches from replica nodes transparently.
Check blob existence and metadata (HEAD / Stat)
curl -s -I "$NABU_API/v1/blobs/a3f2c1d4e5b6a7f8c9d0e1f2a3b4c5d6" \
-H "Authorization: Bearer $NABU_TOKEN"
Expected response headers:
HTTP/1.1 200 OK
X-Nabu-Blob-Size: 17
X-Nabu-Created-At: 1735689600000000000
X-Nabu-Policy: 3
If the blob does not exist the response is 404 Not Found with no body.
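The example X-Nabu-Created-At value looks like Unix time in nanoseconds; that is an inference from its magnitude, not something this page states. Converting it for human consumption is simple integer division:

```shell
# Sketch: X-Nabu-Created-At appears to be Unix nanoseconds (inferred
# from the example value's magnitude); divide by 1e9 for seconds.
created_ns=1735689600000000000
echo $(( created_ns / 1000000000 ))
# → 1735689600
```

`date -u -d @1735689600` (GNU date) then renders it as a calendar timestamp: 2025-01-01 00:00:00 UTC.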
Delete a blob
curl -s -X DELETE "$NABU_API/v1/blobs/a3f2c1d4e5b6a7f8c9d0e1f2a3b4c5d6" \
-H "Authorization: Bearer $NABU_TOKEN"
Expected response (200 OK):
{
"deleted": true
}
Important: The current implementation removes the blob from the local node and the index. Propagation to replica nodes is not yet implemented — see the Troubleshooting section.
List blobs with cursor-based pagination
# First page
curl -s "$NABU_API/v1/blobs?limit=50" \
-H "Authorization: Bearer $NABU_TOKEN"
Expected response:
{
"ids": [
"a3f2c1d4e5b6a7f8c9d0e1f2a3b4c5d6",
"7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f"
],
"next_cursor": "7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f",
"has_more": true
}
# Second page — pass next_cursor as the cursor parameter
curl -s "$NABU_API/v1/blobs?limit=50&cursor=7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f" \
-H "Authorization: Bearer $NABU_TOKEN"
Join a node to the cluster
This is typically called by the joining node itself during startup, but you can issue it manually for integration testing.
curl -s -X POST "$NABU_API/v1/cluster/join" \
-H "Authorization: Bearer $NABU_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"node_id": "node-west-3",
"address": "10.0.1.23:7070",
"capacity_bytes": 1099511627776
}'
Expected response (200 OK):
{
"accepted": true,
"ring_version": 4,
"nodes": [
{"node_id": "node-west-1", "address": "10.0.1.21:7070", "state": "active", "capacity_bytes": 1099511627776, "used_bytes": 21474836480},
{"node_id": "node-west-3", "address": "10.0.1.23:7070", "state": "active", "capacity_bytes": 1099511627776, "used_bytes": 0}
]
}
Retrieve cluster topology
curl -s "$NABU_API/v1/cluster/state" \
-H "Authorization: Bearer $NABU_TOKEN"
Expected response:
{
"nodes": [
{"node_id": "node-west-1", "address": "10.0.1.21:7070", "state": "active", "capacity_bytes": 1099511627776, "used_bytes": 21474836480, "blob_count": 1024},
{"node_id": "node-west-2", "address": "10.0.1.22:7070", "state": "offline", "capacity_bytes": 1099511627776, "used_bytes": 18253611008, "blob_count": 891},
{"node_id": "node-west-3", "address": "10.0.1.23:7070", "state": "joining", "capacity_bytes": 1099511627776, "used_bytes": 0, "blob_count": 0}
],
"ring_version": 4
}
Node state values: joining, active, leaving, offline.
404 Not Found when retrieving a blob that was just stored
Symptom: A GET /v1/blobs/{id} returns 404 immediately after a successful PUT.
Likely cause: The PUT stored the blob on a specific set of replica nodes determined by consistent hashing, but the GET was routed to a node that is not one of those replicas and could not reach them (network partition, node offline).
Fix:
- Call GET /v1/cluster/state and confirm all expected nodes are in active state.
- Retry the GET — the server attempts up to 3 replica nodes before returning 404.
- If nodes are offline, restore them before retrying; EC-encoded blobs require at least data_shards nodes to reconstruct.
412 Precondition Failed on PUT with an EC policy
Symptom: Storing a blob returns HTTP 412 with message not enough nodes for EC (have N, need M).
Likely cause: The cluster does not have enough active nodes to satisfy the shard count required by the policy you selected. EC42 requires 6 nodes; EC82 requires 10.
Fix:
- Check cluster size: GET /v1/cluster/state.
- Either expand the cluster to meet the node requirement (see the cluster expansion guide) or switch to a less demanding policy (REPLICA3 = 3 nodes).
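When scripting retries, you can pre-check the cluster size against the policy's requirement before issuing the PUT. The lookup below simply encodes the replication-policy table from earlier on this page (illustrative, not an official client helper):

```shell
# Sketch: minimum active nodes per policy value, per the
# replication-policy table earlier on this page.
min_nodes_for_policy() {
  case "$1" in
    0|3) echo 3 ;;    # UNSPECIFIED falls back to REPLICA3
    1)   echo 1 ;;    # NONE
    2)   echo 2 ;;    # REPLICA2
    4)   echo 6 ;;    # EC42: 4 data + 2 parity
    5)   echo 10 ;;   # EC82: 8 data + 2 parity
    *)   return 1 ;;  # unknown policy value
  esac
}

min_nodes_for_policy 4   # → 6
```

Compare the result with the count of nodes in active state from GET /v1/cluster/state before choosing the policy.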
400 Bad Request — "invalid blob ID"
Symptom: Any request that includes a blob ID returns 400 with invalid blob ID.
Likely cause: The blob ID is not exactly 16 bytes (32 hex characters). Common mistakes: truncated copy-paste, passing the full 32-byte SHA-256 hash instead of the 16-byte derived ID, or URL-encoding issues.
Fix: Use the exact hex string returned in the id field of the original PUT response. It must be exactly 32 lowercase hex characters.
503 Service Unavailable — too many shard-store failures
Symptom: A PUT with EC42 or EC82 policy returns 503 with message too many shard store failures.
Likely cause: More shard writes failed than the policy's parity tolerance allows. EC42 tolerates 2 failures; EC82 also tolerates 2. This can happen during a rolling restart, network instability, or a disk-full condition on target nodes.
Fix:
- Inspect the server logs on each target node for StoreShard errors.
- Resolve the underlying node issue (free disk space, restore network connectivity).
- Retry the PUT — the operation is idempotent; a blob that already exists on some shards will not be double-stored.
Deleted blob still visible on replica nodes
Symptom: After DELETE /v1/blobs/{id} returns {"deleted": true}, the blob is still retrievable from other nodes in the cluster.
Likely cause: Delete propagation to replica nodes is not yet implemented in this release. The delete removes the blob from the receiving node's local backend and index only.
Workaround: Issue the DELETE request against each node individually until cross-node propagation is implemented. Track the affected node addresses using GET /v1/cluster/state.
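The workaround can be scripted. Two assumptions in this sketch: each node's advertised cluster address is also where its HTTP API listens (substitute the API port if your deployment separates them), and the crude grep/sed extraction stands in for `jq -r '.nodes[].address'` where jq is available.

```shell
# Sketch: issue the DELETE against every node listed in cluster state.
# ASSUMPTION: each node's HTTP API is reachable at its advertised
# address; adjust the port if your API listener differs.

node_addresses() {  # pull every "address" value out of cluster-state JSON
  grep -o '"address"[^,]*' | sed 's/.*"address"[^"]*"\([^"]*\)".*/\1/'
}

delete_on_all_nodes() {
  local id="$1"
  curl -s "$NABU_API/v1/cluster/state" -H "Authorization: Bearer $NABU_TOKEN" \
    | node_addresses \
    | while read -r addr; do
        echo "deleting $id on $addr" >&2
        curl -s -X DELETE "http://$addr/v1/blobs/$id" \
          -H "Authorization: Bearer $NABU_TOKEN"
        echo   # newline between per-node responses
      done
}
```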
Node receives {"commands": ["rejoin"]} in heartbeat response
Symptom: A node's heartbeat response contains ok: false and commands: ["rejoin"].
Likely cause: The receiving coordinator node does not recognise the sending node (e.g., the coordinator was restarted and lost in-memory state, or the node was removed from the cluster while offline).
Fix: The node must re-issue POST /v1/cluster/join with its node_id, address, and capacity_bytes. Once accepted, normal heartbeat cycles will resume.
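If you are automating the node side, the rejoin decision is a simple string check on the heartbeat response body. The sketch below shows only that check; the heartbeat request schema is not documented on this page, so the sending side is left to your node configuration.

```shell
# Sketch: detect the rejoin command in a heartbeat response body.
needs_rejoin() {
  case "$1" in
    *'"rejoin"'*) return 0 ;;
    *)            return 1 ;;
  esac
}

# On a positive check, re-issue the join call from the cluster section:
#   curl -s -X POST "$NABU_API/v1/cluster/join" ...
#   (node_id, address, capacity_bytes as shown earlier)
```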
