Your First Request
Example API calls with the gRPC test client
This page walks you through making your first API calls to Nabu Store using its gRPC interface. You will store a blob, retrieve it, inspect its metadata, list available blobs, and delete it — the five core operations you will use in every integration. Completing this guide confirms that your node is reachable, your client is configured correctly, and the storage backend is functioning before you move on to multi-node deployments or replication policies.
Before you begin, make sure you have the following in place:
- A running Nabu Store node (see Install and start a single-node cluster). The default gRPC listener is `:50051`.
- Go 1.21 or later installed if you plan to build the test client from source.
- The `testclient` binary built or available on your `PATH`. Build it with `go build -o testclient ./cmd/testclient`.
- Network access from your workstation to the node's gRPC port (default `50051`).
- No authentication is required for this quickstart; the test client connects without TLS credentials. For production use, see Authenticate and issue API requests programmatically.
The `testclient` binary is the fastest way to exercise the API. If you have not built it yet, follow these steps.

1. Clone the repository and enter the project directory:

   ```bash
   git clone https://github.com/trilio/aistore.git
   cd aistore
   ```

2. Build the test client:

   ```bash
   go build -o testclient ./cmd/testclient
   ```

3. Verify the binary is executable:

   ```bash
   ./testclient --help
   ```

   You should see the available flags printed to stdout.

4. Confirm the server is reachable before running any operations:

   ```bash
   ./testclient --addr=localhost:50051 --size=1024
   ```

   A successful connection prints `Connected!` immediately after the address line. If the connection times out, check that the node is running and that port `50051` is not blocked by a firewall.
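If you prefer to verify connectivity from your own code rather than the test client, the sketch below shows one way to do it with plain grpc-go. It is a minimal probe under stated assumptions (no TLS, default port); no Nabu Store generated stubs are needed just to establish a connection.

```go
// probe dials the node's gRPC endpoint and waits until the connection is
// ready or the timeout expires. A minimal connectivity check, mirroring the
// testclient's "Connected!" step; it uses only the standard grpc-go package.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// WithBlock makes DialContext wait for the connection to be established
	// instead of connecting lazily. The quickstart runs without TLS, hence
	// the insecure transport credentials.
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		log.Fatalf("Failed to connect: %v", err)
	}
	defer conn.Close()
	log.Println("Connected!")
}
```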
The test client accepts three flags that control how it connects to the server and what data it sends. Understanding these options lets you tailor each run to your environment or benchmark goals.
| Flag | Default | Valid values | Effect |
|---|---|---|---|
| `--addr` | `localhost:50051` | Any `host:port` string | The gRPC address of the Nabu Store node to connect to. Change this when the node is on a remote host or listening on a non-default port. |
| `--policy` | `replica3` | `none`, `replica2`, `replica3`, `ec42`, `ec82` | The replication policy applied to the blob at write time. `none` stores a single copy. `replica2`/`replica3` store two or three full copies across nodes. `ec42` and `ec82` use Reed-Solomon erasure coding (4+2 and 8+2 data/parity shards respectively) for storage efficiency at the cost of requiring enough nodes to hold the shards. |
| `--size` | `1048576` (1 MB) | Any positive integer (bytes) | The size of the randomly generated test blob in bytes. Use a small value such as `1024` for a fast smoke test, or a large value to measure throughput. |
Why replication policy matters at write time: The policy is encoded with the blob when it is stored. You cannot change a blob's policy after the fact without deleting and re-uploading it. Choose the policy that matches your durability requirements before writing production data.
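To make the trade-off concrete with a quick calculation: storing a 1 GB blob under `replica3` consumes 3 GB of raw cluster capacity (three full copies), while `ec42` splits it into four 256 MB data shards plus two 256 MB parity shards, roughly 1.5 GB in total, a 1.5x overhead instead of 3x. By the same arithmetic, `ec82` lowers the overhead to 1.25x (ten 128 MB shards for the same blob) but requires more nodes to place them.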
Once the test client is built, a single command exercises the full request lifecycle against a running node.
Smoke test against a local node (1 KB payload, no replication):

```bash
./testclient --addr=localhost:50051 --policy=none --size=1024
```

Store a 1 MB blob with three-way replication (the default):

```bash
./testclient --addr=localhost:50051
```

Target a remote node and use erasure coding:

```bash
./testclient --addr=192.168.1.100:50051 --policy=ec42 --size=4194304
```
Under the hood, each run performs these six operations in order:
- Put — uploads a randomly generated blob and receives an assigned blob ID.
- Stat — fetches metadata (size, creation timestamp) for the blob ID returned by Put.
- Get — downloads the blob and verifies byte-for-byte integrity against the original data.
- List — retrieves up to 10 blob IDs from the node to confirm the blob appears in the index.
- Delete — removes the blob from the cluster.
- Verify deletion — calls Stat again and confirms the blob no longer exists.
If any step fails, the client prints an error to stderr and exits with a non-zero status code, making it suitable for use in CI pipelines or health checks.
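For reference, the sketch below approximates what the test client does internally in Go. It is illustrative only: the generated client, message, and field names (`pb.NewBlobStoreClient`, `PutRequest`, `BlobId`, and so on) are assumptions, not the actual generated API; consult the repository's proto definitions for the real identifiers.

```go
// Sketch of the six-step lifecycle the testclient runs. All pb.* names are
// hypothetical placeholders for the real generated gRPC stubs.
package main

import (
	"bytes"
	"context"
	"crypto/rand"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/nabustore/gen/blobstore" // hypothetical import path
)

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("Failed to connect: %v", err)
	}
	defer conn.Close()
	client := pb.NewBlobStoreClient(conn) // hypothetical constructor

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// 1. Put: upload a random blob and receive its assigned ID.
	data := make([]byte, 1024)
	if _, err := rand.Read(data); err != nil {
		log.Fatalf("generating payload: %v", err)
	}
	put, err := client.Put(ctx, &pb.PutRequest{Data: data, Policy: "none"})
	if err != nil {
		log.Fatalf("Put failed: %v", err)
	}

	// 2. Stat: fetch metadata for the blob ID returned by Put.
	if _, err := client.Stat(ctx, &pb.StatRequest{BlobId: put.BlobId}); err != nil {
		log.Fatalf("Stat failed: %v", err)
	}

	// 3. Get: download the blob and verify byte-for-byte integrity.
	got, err := client.Get(ctx, &pb.GetRequest{BlobId: put.BlobId})
	if err != nil {
		log.Fatalf("Get failed: %v", err)
	}
	if !bytes.Equal(got.Data, data) {
		log.Fatal("Data mismatch")
	}

	// 4. List: confirm the blob appears in this node's index (up to 10 IDs).
	if _, err := client.List(ctx, &pb.ListRequest{Limit: 10}); err != nil {
		log.Fatalf("List failed: %v", err)
	}

	// 5. Delete: remove the blob from the cluster.
	if _, err := client.Delete(ctx, &pb.DeleteRequest{BlobId: put.BlobId}); err != nil {
		log.Fatalf("Delete failed: %v", err)
	}

	// 6. Verify deletion: Stat again and expect the blob to be gone
	// (the server may report exists=false or return a not-found error).
	if st, err := client.Stat(ctx, &pb.StatRequest{BlobId: put.BlobId}); err == nil && st.Exists {
		log.Fatal("blob still exists after delete")
	}
	log.Println("All tests passed")
}
```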
Example 1: Minimal smoke test (1 KB, no replication)

```bash
./testclient --addr=localhost:50051 --policy=none --size=1024
```

Expected output:

```
Connecting to AIStore at localhost:50051...
Connected!
=== Test 1: Put blob (policy=none, size=1024) ===
Put succeeded: size=1024 bytes
Blob ID: 3f7a2c1e9b804d56a1f023cc7e8d5b12
=== Test 2: Stat blob ===
Stat succeeded: exists=true
Size: 1024
Created: 1748000000
=== Test 3: Get blob ===
Get succeeded: retrieved 1024 bytes
Data verified!
=== Test 4: List blobs ===
List succeeded: 1 blobs, hasMore=false
1. 3f7a2c1e9b804d56a1f023cc7e8d5b12
=== Test 5: Delete blob ===
Delete succeeded: deleted=true
=== Test 6: Verify deletion ===
Blob no longer exists (expected)
✓ All tests passed!
```
Example 2: Default run (1 MB blob, replica3 policy)

```bash
./testclient --addr=localhost:50051
```

Expected output (same structure as above, with policy=replica3 and size=1048576 in the Put header):

```
=== Test 1: Put blob (policy=replica3, size=1048576) ===
Put succeeded: size=1048576 bytes
Blob ID: a8d3f091c4e7b25e3f1c09d7a6e42b88
...
✓ All tests passed!
```
Example 3: Erasure-coded blob against a remote node

This example is representative of a multi-node cluster, where `ec42` requires at least 6 nodes (4 data + 2 parity shards).

```bash
./testclient --addr=192.168.1.100:50051 --policy=ec42 --size=4194304
```

Expected output:

```
Connecting to AIStore at 192.168.1.100:50051...
Connected!
=== Test 1: Put blob (policy=ec42, size=4194304) ===
Put succeeded: size=4194304 bytes
Blob ID: c1b2d3e4f5061728394a5b6c7d8e9f00
...
✓ All tests passed!
```
Note: Blob IDs in the output above are illustrative hex strings. Your actual IDs are randomly generated and will differ on every run.
Connection times out or hangs at "Connecting to..."
Symptom: The client prints `Connecting to AIStore at localhost:50051...` and then waits up to 120 seconds before printing `Failed to connect: context deadline exceeded`.
Likely cause: The Nabu Store node is not running, or it is listening on a different address or port than the one you specified with `--addr`.
Fix:
- Confirm the node process is running: `ps aux | grep aistore`
- Check which port it is bound to: `ss -tlnp | grep aistore` (Linux) or `lsof -i :50051` (macOS).
- If the node is on a remote host, ensure port `50051` (or your custom port) is open in the host's firewall and any intervening network ACLs.
- Pass the correct address explicitly: `./testclient --addr=<correct-host>:<correct-port>`
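To separate network reachability from gRPC-layer problems, a plain TCP dial is a quick first check. This is a minimal diagnostic sketch using only the Go standard library; it is not part of the test client, and the address is an assumption you should adjust to your `--addr` value.

```go
// tcpprobe checks whether the node's gRPC port accepts TCP connections.
// It confirms network reachability only, not that a healthy gRPC server
// is listening behind the port.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "localhost:50051" // adjust to match your --addr value
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "port unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("TCP port is open; if the client still hangs, suspect the gRPC layer")
}
```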
Put fails with "Unknown policy"
Symptom: The client immediately prints `Unknown policy: <value>` to stderr and exits with code 1.
Likely cause: You passed a `--policy` value that is not one of the five supported identifiers.
Fix: Use one of the accepted values exactly as listed: `none`, `replica2`, `replica3`, `ec42`, `ec82`. Policy names are case-sensitive.
Put or Get fails on an erasure-coded policy
Symptom: `Put failed: ...` or `Get failed: ...` when using `--policy=ec42` or `--policy=ec82`.
Likely cause: Erasure coding requires enough nodes to distribute shards: `ec42` needs at least 6 nodes; `ec82` needs at least 10. A single-node cluster cannot satisfy these requirements.
Fix: Either switch to a replication policy compatible with your cluster size (`none`, `replica2`, or `replica3`), or deploy the required number of nodes first (see Deploy a multi-node cluster on Kubernetes).
Data mismatch error after Get
Symptom: The client prints `Data mismatch at byte N: expected XX, got YY` and exits non-zero.
Likely cause: This indicates a storage integrity issue — the bytes returned by Get differ from what was written by Put. This should not occur under normal operation.
Fix:
- Check available disk space on the node's `--data-dir`: a full disk can cause partial writes.
- Review the node's logs for backend errors (SPDK or LocalFS write failures).
- If CXL tiering is enabled, verify the CXL device is healthy (see Enable CXL memory tiering).
- If the issue persists, file a bug report including the node ID, backend type, policy, and blob size.
List returns zero blobs immediately after a successful Put
Symptom: `List succeeded: 0 blobs, hasMore=false` even though Put reported success.
Likely cause: The blob was written to a different node in the cluster than the one you listed against. The List operation returns blobs indexed on the specific node you queried.
Fix: This is expected behavior in a multi-node cluster where data is distributed via consistent hashing. Query the node that owns the blob's hash range, or use the blob ID returned by Put to call Stat and Get directly rather than relying on List for discovery.
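For intuition, here is a minimal sketch of consistent hashing, the general placement technique described above. It illustrates why a blob lands on one specific node's hash range; Nabu Store's actual ring layout, hash function, and any virtual-node scheme may differ.

```go
// Minimal illustration of consistent hashing for blob placement.
// Each node owns the arc of the hash ring ending at its position; a blob
// is placed on the first node at or after the blob ID's hash.
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

type ring struct {
	points []uint64          // sorted positions on the hash ring
	owner  map[uint64]string // position -> node address
}

func newRing(nodes []string) *ring {
	r := &ring{owner: map[uint64]string{}}
	for _, n := range nodes {
		p := hash64(n)
		r.points = append(r.points, p)
		r.owner[p] = n
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// node returns the first ring position at or after the blob's hash,
// wrapping around to the start of the ring if necessary.
func (r *ring) node(blobID string) string {
	h := hash64(blobID)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0
	}
	return r.owner[r.points[i]]
}

func hash64(s string) uint64 {
	sum := sha256.Sum256([]byte(s))
	return binary.BigEndian.Uint64(sum[:8])
}

func main() {
	r := newRing([]string{"10.0.0.1:50051", "10.0.0.2:50051", "10.0.0.3:50051"})
	// Prints the node that would own this blob ID's hash range.
	fmt.Println(r.node("3f7a2c1e9b804d56a1f023cc7e8d5b12"))
}
```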
