Getting Started
Making your first API request
This page walks you through making your first API requests to Nabu Store (AIStore). You will connect a client to a running single-node cluster, store a blob, retrieve it, and verify its integrity — covering the core read/write lifecycle in minutes. Completing this tutorial gives you a working foundation before you move on to multi-node deployments, replication policies, and advanced storage backends.
Before you begin, make sure you have the following in place:
- Go 1.21+ installed and available on your `PATH` (required to build the server and test client)
- AIStore source code cloned locally — the binaries are built from source
- A local directory for blob storage, for example `/data/aistore` — the directory must be writable by the user running the server
- Port 50051 available on `localhost` (the default gRPC listen address)
- Basic familiarity with running commands in a terminal
Get from zero to your first successful blob round-trip in five steps:

1. Build the binaries:

   ```shell
   go build -o aistore ./cmd/aistore
   go build -o testclient ./cmd/testclient
   ```

2. Start a single-node server:

   ```shell
   ./aistore --node-id=node1 --listen=:50051 --data-dir=/data/aistore
   ```

3. Open a second terminal and run the test client:

   ```shell
   ./testclient --addr=localhost:50051
   ```

4. Confirm all tests pass — the client output should end with:

   ```
   ✓ All tests passed!
   ```

5. Proceed to the detailed steps below to understand what each API call does and how to call the API programmatically from your own code.
The following procedure explains each stage of the end-to-end blob lifecycle that the test client exercises against the gRPC API.
Step 1 — Build the server and client binaries
From the repository root, compile both binaries:
```shell
go build -o aistore ./cmd/aistore
go build -o testclient ./cmd/testclient
```
Successful output: the commands complete silently, and you see `aistore` and `testclient` files in the current directory.
Step 2 — Start the single-node server
Launch the server, providing a unique node ID, a listen address, and a data directory:
```shell
./aistore --node-id=node1 --listen=:50051 --data-dir=/data/aistore
```
| Flag | Value used | Why |
|---|---|---|
| `--node-id` | `node1` | Uniquely identifies this node in the cluster |
| `--listen` | `:50051` | The gRPC address clients connect to |
| `--data-dir` | `/data/aistore` | Where blobs are persisted on disk |
Successful output: the server starts and listens without errors. Leave this terminal open.
Step 3 — Connect the client
In a second terminal, run the test client pointing at the server:
```shell
./testclient --addr=localhost:50051
```
The client establishes a gRPC connection over plain TCP (no TLS in this quickstart). You should see:
```
Connecting to AIStore at localhost:50051...
Connected!
```
If the connection is refused, verify the server is still running in the first terminal and that port 50051 is not blocked.
Step 4 — Store a blob (Put)
The client automatically generates 1 MB of random data and issues a `Put` RPC with the default replication policy (`replica3`). The server assigns a unique blob ID and returns it.
You can override the data size and policy:
```shell
./testclient --addr=localhost:50051 --policy=none --size=4096
```

Available policies: `none`, `replica2`, `replica3`, `ec42`, `ec82`.
Successful output:
```
=== Test 1: Put blob (policy=none, size=4096) ===
Put succeeded: size=4096 bytes
Blob ID: <hex-encoded-id>
```
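The policies differ mainly in how much raw storage they consume per byte stored. The comparison below is an illustrative calculation, assuming `replicaN` stores N full copies and the `ecDP` schemes store D data shards plus P parity shards; the server's actual shard layout is an implementation detail and may differ:

```go
package main

import "fmt"

// overhead returns the approximate raw-storage multiplier for each policy:
// replication stores full copies; erasure coding stores (data+parity)/data.
func overhead(policy string) float64 {
	switch policy {
	case "none":
		return 1.0
	case "replica2":
		return 2.0
	case "replica3":
		return 3.0
	case "ec42": // 4 data + 2 parity shards
		return (4.0 + 2.0) / 4.0
	case "ec82": // 8 data + 2 parity shards
		return (8.0 + 2.0) / 8.0
	}
	return 0 // unknown policy
}

func main() {
	for _, p := range []string{"none", "replica2", "replica3", "ec42", "ec82"} {
		fmt.Printf("%-9s %.2fx raw storage\n", p, overhead(p))
	}
}
```

Under these assumptions, `ec42` gives two-failure tolerance at 1.5x storage, versus 3x for `replica3`.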
Step 5 — Inspect blob metadata (Stat)
The client calls `Stat` with the blob ID returned by `Put`. This confirms the blob exists and returns its size and creation timestamp without transferring the data — useful for lightweight existence checks.
Successful output:
```
=== Test 2: Stat blob ===
Stat succeeded: exists=true
Size: 4096
Created: <unix-timestamp>
```
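For programmatic use, the fields in this output map naturally onto a small result type. The sketch below models them with assumed names (`statResult`, `Exists`, `Size`, and `Created` are illustrative; the actual generated proto message will have its own type and field names):

```go
package main

import (
	"fmt"
	"time"
)

// statResult models the fields visible in the client's Stat output.
type statResult struct {
	Exists  bool
	Size    int64
	Created int64 // unix seconds
}

// describe turns a stat result into a human-readable summary,
// handling the not-found case without touching the payload.
func describe(s statResult) string {
	if !s.Exists {
		return "blob not found"
	}
	return fmt.Sprintf("exists, %d bytes, created %s",
		s.Size, time.Unix(s.Created, 0).UTC().Format(time.RFC3339))
}

func main() {
	fmt.Println(describe(statResult{Exists: true, Size: 4096, Created: 1718000000}))
	fmt.Println(describe(statResult{Exists: false}))
}
```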
Step 6 — Retrieve the blob (Get)
The client calls `Get` with the same blob ID. The server returns the full byte payload. The client then verifies every byte matches the original data to confirm end-to-end integrity.
Successful output:
```
=== Test 3: Get blob ===
Get succeeded: retrieved 4096 bytes
Data verified!
```
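The integrity check at the end of this step can be sketched as follows (illustrative only; the real client's verification code may differ). Here `retrieved` stands in for the payload returned by `Get`:

```go
package main

import (
	"bytes"
	"crypto/rand"
	"fmt"
)

// verify reports whether the retrieved payload matches the original
// byte for byte, mirroring the client's post-Get check.
func verify(original, retrieved []byte) error {
	if len(retrieved) != len(original) {
		return fmt.Errorf("size mismatch: got %d, want %d", len(retrieved), len(original))
	}
	if !bytes.Equal(retrieved, original) {
		return fmt.Errorf("data mismatch")
	}
	return nil
}

func main() {
	// Generate a random payload, as the test client does before Put.
	original := make([]byte, 4096)
	if _, err := rand.Read(original); err != nil {
		panic(err)
	}
	// retrieved stands in for the payload returned by Get.
	retrieved := append([]byte(nil), original...)

	if err := verify(original, retrieved); err != nil {
		fmt.Println("verification failed:", err)
		return
	}
	fmt.Println("Data verified!")
}
```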
Step 7 — List stored blobs
The client calls `List` requesting up to 10 blob IDs. This is useful for auditing stored objects or paginating through a namespace.
Successful output:
```
=== Test 4: List blobs ===
List succeeded: 1 blobs, hasMore=false
  1. <hex-encoded-id>
```
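The `hasMore` flag suggests the usual pattern for walking a large namespace page by page. The sketch below simulates it with an in-memory stand-in for the `List` RPC; `listPage`, the offset handling, and the page size of 2 are all assumptions for illustration, not the actual AIStore API:

```go
package main

import "fmt"

// listPage is a stand-in for the List RPC: it returns up to limit IDs
// starting at offset, plus whether more remain after this page.
func listPage(all []string, offset, limit int) (ids []string, hasMore bool) {
	end := offset + limit
	if end > len(all) {
		end = len(all)
	}
	return all[offset:end], end < len(all)
}

func main() {
	blobs := []string{"id1", "id2", "id3", "id4", "id5"}
	offset, limit := 0, 2
	for {
		page, hasMore := listPage(blobs, offset, limit)
		fmt.Println(page, "hasMore:", hasMore)
		if !hasMore {
			break
		}
		offset += len(page)
	}
}
```

The loop requests pages until `hasMore` is false, advancing the offset by the number of IDs actually returned.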
Step 8 — Delete the blob
The client calls `Delete` with the blob ID. The server removes the blob and its index entry.
Successful output:
```
=== Test 5: Delete blob ===
Delete succeeded: deleted=true
```
Step 9 — Verify deletion
The client calls `Stat` a second time to confirm the blob no longer exists. A `Stat` on a deleted blob either returns `exists=false` or an error — both are expected.
Successful output:
```
=== Test 6: Verify deletion ===
Blob no longer exists (expected)

✓ All tests passed!
```
The following examples show each blob operation in isolation using the gRPC API through the test client CLI. Use these as a reference when integrating AIStore into your own application.
Example 1 — Store a 1 MB blob with no replication
Stores exactly 1 MB (the default size) of random bytes using the `none` policy, which writes a single copy with no redundancy. This is the lowest-overhead option for ephemeral or easily regenerated data.
```shell
./testclient --addr=localhost:50051 --policy=none --size=1048576
```
Expected output:
```
Connecting to AIStore at localhost:50051...
Connected!
=== Test 1: Put blob (policy=none, size=1048576) ===
Put succeeded: size=1048576 bytes
Blob ID: a3f2c1d409e7b85600000000000000001
=== Test 2: Stat blob ===
Stat succeeded: exists=true
Size: 1048576
Created: 1718000000
=== Test 3: Get blob ===
Get succeeded: retrieved 1048576 bytes
Data verified!
=== Test 4: List blobs ===
List succeeded: 1 blobs, hasMore=false
  1. a3f2c1d409e7b85600000000000000001
=== Test 5: Delete blob ===
Delete succeeded: deleted=true
=== Test 6: Verify deletion ===
Blob no longer exists (expected)

✓ All tests passed!
```
The blob ID is a random 16-byte value encoded as hex. Your output will differ.
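A 16-byte value hex-encodes to a fixed-width string, which makes these IDs easy to log and compare. As a sketch of the encoding only (the server, not the client, assigns real blob IDs; `newID` is a hypothetical helper):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newID generates a random 16-byte identifier and returns its hex
// encoding, mirroring the shape of the blob IDs shown above.
func newID() (string, error) {
	raw := make([]byte, 16)
	if _, err := rand.Read(raw); err != nil {
		return "", err
	}
	return hex.EncodeToString(raw), nil
}

func main() {
	id, err := newID()
	if err != nil {
		panic(err)
	}
	fmt.Println(id)
}
```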
Example 2 — Store a small blob with 3-way replication
Stores a 4 KB blob using `replica3`, which writes three independent copies across available nodes. On a single-node cluster all three copies reside on the same node; the policy takes full effect when you expand to a multi-node cluster.
```shell
./testclient --addr=localhost:50051 --policy=replica3 --size=4096
```
Expected output (abbreviated):
```
Connecting to AIStore at localhost:50051...
Connected!
=== Test 1: Put blob (policy=replica3, size=4096) ===
Put succeeded: size=4096 bytes
Blob ID: 7b1e4f2a903cd56100000000000000002
...
✓ All tests passed!
```
Example 3 — Store a blob with erasure coding (EC 4+2)
Stores a blob using `ec42`, which splits data into 4 data shards and 2 parity shards (Reed-Solomon). This tolerates up to 2 simultaneous node failures while using less storage than full replication. EC schemes are recommended for large, infrequently accessed objects.
```shell
./testclient --addr=localhost:50051 --policy=ec42 --size=1048576
```
Expected output (abbreviated):
```
Connecting to AIStore at localhost:50051...
Connected!
=== Test 1: Put blob (policy=ec42, size=1048576) ===
Put succeeded: size=1048576 bytes
Blob ID: 9c3d7a1b204ef78300000000000000003
...
✓ All tests passed!
```
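To see why 4+2 erasure coding is cheaper than 3-way replication while still surviving two failures, the shard arithmetic can be worked through directly. This is an illustrative calculation assuming equal-sized shards and ignoring padding and metadata:

```go
package main

import "fmt"

// ecOverhead returns the raw-storage multiplier for a Reed-Solomon
// scheme with the given data and parity shard counts.
func ecOverhead(dataShards, parityShards int) float64 {
	return float64(dataShards+parityShards) / float64(dataShards)
}

func main() {
	const blobSize = 1 << 20              // 1 MiB blob
	const dataShards, parityShards = 4, 2 // the ec42 scheme

	shardSize := blobSize / dataShards
	fmt.Printf("shard size:   %d bytes\n", shardSize)
	fmt.Printf("total stored: %.0f bytes (%.1fx the blob)\n",
		float64(blobSize)*ecOverhead(dataShards, parityShards),
		ecOverhead(dataShards, parityShards))
	fmt.Printf("tolerates %d simultaneous shard losses\n", parityShards)
}
```

Any 4 of the 6 shards suffice to reconstruct the blob, so losing up to 2 shards is survivable at 1.5x storage, versus 3x for `replica3`.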
Use the following table to diagnose common failures when making your first API requests.
Symptom: `Failed to connect: context deadline exceeded` or `connection refused`

- Likely cause: The AIStore server is not running, or it is listening on a different address or port.
- Fix: Confirm the server process is active (`ps aux | grep aistore`). Verify the `--listen` flag used when starting the server matches the `--addr` flag passed to the client. The default is `:50051` / `localhost:50051`.
Symptom: `Put failed: ...` with a permissions error

- Likely cause: The server process does not have write access to the directory specified by `--data-dir`.
- Fix: Create the directory and grant write permissions to the user running the server:

  ```shell
  mkdir -p /data/aistore
  chmod 755 /data/aistore
  ```
Symptom: `Unknown policy: <value>`

- Likely cause: An unrecognised string was passed to `--policy`.
- Fix: Use one of the supported policy values: `none`, `replica2`, `replica3`, `ec42`, `ec82`. The flag is case-sensitive.
Symptom: `Size mismatch!` or `Data mismatch at byte N` during the Get verification step

- Likely cause: Data corruption during storage or retrieval, or a backend write error that did not surface as a gRPC error.
- Fix: Check the server logs for write errors. If you are using the `localfs` backend, verify that the filesystem under `--data-dir` is healthy (`df -h`, `dmesg`). Re-run the test client to see whether the issue is intermittent.
Symptom: Build fails with `cannot find package` or module errors

- Likely cause: Go module dependencies have not been downloaded, or the Go toolchain is not installed.
- Fix: Run `go mod download` from the repository root before building. Confirm your Go installation with `go version`.
Symptom: `Delete succeeded: deleted=false`

- Likely cause: The blob ID in the `Delete` request did not match any stored blob — the blob may have already been deleted, or the ID was not captured correctly.
- Fix: Call `Stat` first to confirm the blob exists before deleting. In the test client, the blob ID is captured automatically from the `Put` response.
