GenLayer Studio

Use AntSeed as an inference provider inside GenLayer Studio validators.

Frameworks · OpenAI Chat Completions · ~5 min

What GenLayer Studio is. Studio runs Intelligent Contract validators that consult LLMs to reach consensus. Each validator is configured with a provider entry that has a provider name, a plugin (one of openai-compatible / anthropic / google / ollama / custom), a model id, and a plugin_config with api_url and api_key_env_var.
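For orientation, a single provider entry looks like this (a minimal sketch; the provider name, model id, and URL are placeholders, and the real AntSeed files appear in Step 2):

  {
    "provider": "example-provider",
    "plugin": "openai-compatible",
    "model": "example-model",
    "config": {},
    "plugin_config": {
      "api_key_env_var": "EXAMPLE_API_KEY",
      "api_url": "https://api.example.com"
    }
  }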

How AntSeed plugs in. Drop one JSON file per model into backend/node/create_nodes/default_providers/ with plugin: "openai-compatible" and api_url: "http://host.docker.internal:8377". Studio's openai-compatible plugin appends /v1/chat/completions automatically, so the buyer proxy receives a standard OpenAI Chat request and routes it to your pinned peer. Mirror the existing LibertAI entry (PR #1526) — it is the closest analogue: an openai-compatible host with a hosted base URL replaced by your local proxy.

Why host.docker.internal, not localhost. Studio's backend runs in Docker via genlayer up. From inside the container, localhost means the container itself, not your host machine — it cannot reach the AntSeed buyer proxy on the host. Mac/Windows Docker exposes the host as host.docker.internal; on Linux you must add extra_hosts: ["host.docker.internal:host-gateway"] to the backend service in docker-compose.yml or run with --network=host.

Run AntSeed first

Every integration assumes a buyer proxy at http://localhost:8377. One-time setup, ~2 minutes.
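If the proxy isn't running yet, start it and probe it before touching Studio (a sketch; `antseed buyer start` is the CLI entry point named in the caveats below, and the `/v1/models` probe assumes the proxy serves the standard OpenAI model-listing route, as the Troubleshooting section does):

  # start the buyer proxy (launching AntStation works too)
  antseed buyer start

  # should return a JSON model list, not a connection error
  curl -s http://localhost:8377/v1/models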

Before you start

Prerequisites

  • GenLayer Studio cloned and running locally with `genlayer up` (see https://docs.genlayer.com/developers/intelligent-contracts/tools/genlayer-studio)

Step 1

Install GenLayer Studio

  • On Linux only — make `host.docker.internal` resolve from inside the backend container
    # docker-compose.yml — patch the backend (jsonrpc) service
    services:
      jsonrpc:
        extra_hosts:
          - "host.docker.internal:host-gateway"

    Mac and Windows Docker Desktop already expose the host as `host.docker.internal` automatically — skip this step on those platforms. Restart with `genlayer up --reset` after editing.

Step 2

Point GenLayer Studio at AntSeed

backend/node/create_nodes/default_providers/antseed_claude-sonnet-4-6.json
{ "provider": "antseed", "plugin": "openai-compatible", "model": "claude-sonnet-4-6", "config": {}, "plugin_config": { "api_key_env_var": "ANTSEED_API_KEY", "api_url": "http://host.docker.internal:8377" } }
backend/node/create_nodes/default_providers/antseed_deepseek-v4-flash.json
{ "provider": "antseed", "plugin": "openai-compatible", "model": "deepseek-v4-flash", "config": {}, "plugin_config": { "api_key_env_var": "ANTSEED_API_KEY", "api_url": "http://host.docker.internal:8377" } }
.env (next to docker-compose.yml)
# AntSeed authenticates with your local identity key, not this value.
# Studio's openai-compatible plugin still requires the env var to be set.
ANTSEED_API_KEY=antseed
backend/node/create_nodes/providers_schema.json AND frontend/src/assets/schemas/providers_schema.json
// In each schema, add "antseed" to the provider enum's examples…
"provider": {
  "type": "string",
  "examples": ["ollama", "openrouter", "libertai", "antseed", …]
},

// …and add an if/then block locking provider:antseed to plugin:openai-compatible
{
  "if": { "properties": { "provider": { "const": "antseed" } } },
  "then": { "properties": { "plugin": { "const": "openai-compatible" } } }
}

Both schema files must be kept in sync — the backend uses one for validation, the frontend uses the other for the UI dropdown. This is exactly what PR #1526 did for LibertAI.
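A quick way to catch drift between the two copies (assumes `jq` is installed and that the files are meant to stay identical, per the note above):

  # prints nothing when the schemas agree, a diff when they have drifted
  diff <(jq -S . backend/node/create_nodes/providers_schema.json) \
       <(jq -S . frontend/src/assets/schemas/providers_schema.json)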

Step 3

Pick a model

claude-sonnet-4-6 · deepseek-v4-flash · gpt-oss-120b · qwen3-coder-480b

Each provider JSON file pins exactly one `model`. Studio enumerates these into the validator-creation UI; pick services you know your pinned peer offers (check `antseed network peer <peerId> --json | jq '.matchingServices[].service'`). To expose more models later, drop in more `antseed_<model>.json` files — no schema edit needed.

The exact list of models depends on which peer you pin. Run antseed network browse or open the live network page to see what's available right now.
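Put together, the model-picking workflow looks like this (`<peerId>` is a placeholder; both commands appear elsewhere in this guide):

  # 1. see which peers and services are live right now
  antseed network browse

  # 2. list the service ids your chosen peer actually serves
  antseed network peer <peerId> --json | jq '.matchingServices[].service'

  # 3. drop one antseed_<model>.json into default_providers/ per service id you want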

Verify

Test it

  • Restart Studio so it re-scans `default_providers/`
    genlayer up --reset

    `get_default_providers()` in `backend/node/create_nodes/providers.py` reads every `*.json` in that folder once on boot, validates against `providers_schema.json`, and caches the result. Schema-validation errors abort startup with the offending file path — watch the logs.

  • In the Studio UI, create a new validator with provider "antseed"

    You should see your `antseed_*.json` model ids in the dropdown. Save and trigger a contract that calls `genlayer.eq_principle.prompt(…)` — the request hits `http://host.docker.internal:8377/v1/chat/completions` on the AntSeed proxy and is forwarded to your pinned peer.
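    To prove the backend container can reach the proxy at all, run the same probe the Troubleshooting section uses, from inside the container (`jsonrpc` is the service name patched in Step 1):

    docker compose exec jsonrpc curl -s http://host.docker.internal:8377/v1/models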

  • Confirm the validator call hit AntSeed
    antseed buyer metering

    Each validator call adds tokens + USDC to the channel for the peer you pinned. Run after a Studio request to see the totals update. To poll live: `watch -n 1 antseed buyer metering`.

How GenLayer Studio talks to AntSeed

  • Wire format sent by GenLayer Studio: OpenAI Chat Completions (hits /v1/chat/completions on the buyer proxy)
  • Best-fit services: any service whose protocols array contains openai-chat-completions. That's what the peer advertises as natively-supported — zero translation overhead, no transform edge cases.
  • How to check a peer: run antseed network peer <peerId> --json and look at matchingServices[].protocols for each model. The browse command shows the same data per peer in providerServiceApiProtocols. A worked one-liner follows this list.
  • What happens when protocols don't match: AntSeed's @antseed/api-adapter translates between OpenAI Chat Completions and the service's native protocol on the fly. So a request from GenLayer Studio can still reach a service that only advertises anthropic-messages — just with a small transform step.
  • One known caveat: services whose only advertised protocol is openai-responses require streaming. If GenLayer Studio sends a non-streaming request and the proxy routes it to one of those services, the call fails with HTTP 400: Stream must be set to true. Pick a service whose protocols includes openai-chat-completions (or another non-responses protocol) to avoid this.
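A worked version of the protocol check above (the `jq` filter assumes `protocols` nests under `matchingServices[]` as described; `<peerId>` is a placeholder):

  antseed network peer <peerId> --json \
    | jq '.matchingServices[] | {service, protocols}'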

If it goes wrong

Troubleshooting

  • `Error validating file … antseed_*.json` on `genlayer up`
    The schema rejected your provider JSON. Most common cause: the if/then rule for `provider:antseed` is missing, so the file falls through with the wrong `plugin`. Add the rule to *both* `backend/.../providers_schema.json` and `frontend/.../providers_schema.json`, then run `genlayer up --reset`.
  • Validator hangs, then errors with `Connection refused` to `host.docker.internal:8377`
    The backend container can't see your host. On Linux, add `extra_hosts: ["host.docker.internal:host-gateway"]` under the backend service in `docker-compose.yml` (see Step 1). On Mac/Windows, confirm Docker Desktop is running and the AntSeed proxy is up: run `curl http://host.docker.internal:8377/v1/models` from inside the container with `docker compose exec jsonrpc curl …`.
  • Validator returns `no_peer_pinned`
    No peer is pinned in the buyer proxy. Run `antseed network browse`, pick one, then `antseed buyer connection set --peer <peerId>`. Alternatively, send a per-request `x-antseed-pin-peer` header by extending the openai-compatible plugin — not currently exposed in the standard schema, so the session pin is the path of least resistance.
  • `404 model_not_found` from a validator using e.g. `claude-sonnet-4-6`
    Your pinned peer doesn't advertise that service id. Run `antseed network peer <peerId> --json | jq '.matchingServices[].service'` to see what it does serve. Either pin a different peer or remove that `antseed_<model>.json` file.
  • First call after a restart takes 5–15 seconds
    AntSeed opens a payment channel on the first request to a new peer (one Base-mainnet transaction); subsequent calls reuse the channel. Pre-warm before triggering Studio:
    curl -s http://localhost:8377/v1/chat/completions \
      -H 'content-type: application/json' \
      -d '{"model":"<id>","messages":[{"role":"user","content":"hi"}]}'

Heads up

Caveats

  • AntSeed is a local daemon, not a hosted endpoint. Every Studio operator must run AntStation or `antseed buyer start` on their own machine and fund their wallet — there is no central account.
  • Free services exist on the AntSeed network (`in: 0, out: 0`), but using paid ones requires a USDC deposit on Base. AntStation guides users through this on first launch; the CLI exposes it as `antseed payments`.
