Hermes

Nous Research's agent framework — register AntSeed as a custom provider in `config.yaml`.

Autonomous agents · OpenAI Chat Completions · ~3 min

What Hermes is. Hermes is the agent framework from Nous Research (successor to OpenClaw's lineage). It's designed for autonomous, multi-step workflows — research agents, coding agents, swarms — and reads its model catalog from `~/.hermes/config.yaml`.

How AntSeed plugs in. Add an entry under custom_providers with base_url: http://127.0.0.1:8377/v1, api_mode: chat_completions, and a list of models. Each model id must be a service id your pinned peer advertises. Then point model.default at the one you want as primary.

One Hermes-specific gotcha. Some peers serve GPT-style models via the openai-responses protocol, which requires streaming. Hermes' auxiliary calls (title generation, context compression) are non-streaming and will fail against those models with HTTP 400: Stream must be set to true. Pin auxiliary slots to a chat_completions model (config example below).

Run AntSeed first

Every integration assumes a buyer proxy at http://localhost:8377. One-time setup, ~2 minutes.

Step 1

Install Hermes

  • Install or build Hermes
    # Follow Nous Research setup at https://github.com/NousResearch/hermes-agent

    Hermes is typically run as a long-lived process (often under systemd on a server). The config file `~/.hermes/config.yaml` is read at startup — changes require a restart.
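If you do run it under systemd, a unit along these lines works. A minimal sketch only: the unit name, binary path, `serve` subcommand, and service user are placeholders, not part of the official Nous Research setup — adjust them to match your install.

```ini
# /etc/systemd/system/hermes.service — illustrative sketch; paths and
# the ExecStart command are assumptions, not documented defaults.
[Unit]
Description=Hermes agent framework
After=network-online.target

[Service]
User=hermes
ExecStart=/usr/local/bin/hermes serve
Restart=on-failure
# ~/.hermes/config.yaml is read once at startup; restart after edits

[Install]
WantedBy=multi-user.target
```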

Step 2

Point Hermes at AntSeed

~/.hermes/config.yaml (merge into your existing config)
```yaml
model:
  default: claude-sonnet-4-6
  provider: antseed

custom_providers:
  - name: antseed
    base_url: http://127.0.0.1:8377/v1
    api_key: antseed-p2p
    api_mode: chat_completions
    models:
      - claude-sonnet-4-6
      - claude-opus-4-7
      - deepseek-v4-flash
      - gpt-oss-120b
      - minimax-m2.7

# Pin auxiliary calls to a chat_completions model so non-streaming
# requests (title generation, compression) don't break against
# openai-responses peers.
auxiliary:
  title_generation:
    provider: antseed
    model: minimax-m2.7
  compression:
    provider: antseed
    model: minimax-m2.7
```

Step 3

Pick a model

claude-sonnet-4-6 · minimax-m2.7 · deepseek-v4-flash · gpt-oss-120b

Only ids listed under `models:` show up in Hermes' picker — mirror it against `curl http://127.0.0.1:8377/v1/models` so you don't advertise models no peer serves. `model.provider: antseed` pins the default to this custom provider.

The exact list of models depends on which peer you pin. Run `antseed network browse` or open the live network page to see what's available right now.
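One way to keep `models:` in sync with what the network actually serves is a quick shell check. A sketch, assuming `jq` is installed; the JSON heredoc is sample data standing in for the live `/v1/models` response, and the `configured` list is a hypothetical excerpt from your `config.yaml`:

```shell
# Which ids under models: does the pinned peer NOT serve right now?
# The heredoc stands in for: curl -s http://127.0.0.1:8377/v1/models
advertised=$(jq -r '.data[].id' <<'EOF'
{"data":[{"id":"claude-sonnet-4-6"},{"id":"minimax-m2.7"},{"id":"gpt-oss-120b"}]}
EOF
)

# Ids your config.yaml lists under models: (hypothetical excerpt)
configured="claude-sonnet-4-6 minimax-m2.7 deepseek-v4-flash"

missing=""
for m in $configured; do
  printf '%s\n' "$advertised" | grep -qx "$m" || missing="$missing$m "
done
echo "stale in config: $missing"
```

Anything the check prints is an id Hermes will show in its picker but no pinned peer can serve.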

Verify

Test it

  • Confirm the proxy advertises the same ids your config references
    curl -s http://127.0.0.1:8377/v1/models | jq '.data[].id'
    Example response
    "claude-opus-4-7"
    "claude-sonnet-4-6"
    "deepseek-v4-flash"
    "gpt-oss-120b"
    "minimax-m2.7"
  • Restart Hermes to pick up the new provider
    sudo systemctl restart hermes

    Or whatever supervisor you use. Then check the journal: `sudo journalctl -u hermes --no-pager -n 30`.

  • After the first request, confirm a channel opened and is being metered
    antseed buyer status
    antseed buyer metering

    `status` shows `Active channels: 1` once the first request settles (~5–15s on Base — one on-chain tx to open the channel). `metering` shows the per-peer token + USDC totals for each channel. To poll: `watch -n 1 antseed buyer metering`.

How Hermes talks to AntSeed

  • Wire format sent by Hermes: OpenAI Chat Completions (hits /v1/chat/completions on the buyer proxy)
  • Best-fit services: any service whose protocols array contains openai-chat-completions. That's what the peer advertises as natively supported — zero translation overhead, no transform edge cases.
  • How to check a peer: run antseed network peer <peerId> --json and look at matchingServices[].protocols for each model. The browse command shows the same data per peer in providerServiceApiProtocols.
  • What happens when protocols don't match: AntSeed's @antseed/api-adapter translates between OpenAI Chat Completions and the service's native protocol on the fly. So a request from Hermes can still reach a service that only advertises anthropic-messages — just with a small transform step.
  • One known caveat: services whose only advertised protocol is openai-responses require streaming. If Hermes sends a non-streaming request and the proxy routes it to one of those services, the call fails with HTTP 400: Stream must be set to true. Pick a service whose protocols includes openai-chat-completions (or another non-responses protocol) to avoid this.
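The protocol check above can be done mechanically with `jq`. A sketch: the heredoc is made-up sample data standing in for the output of `antseed network peer <peerId> --json`, using the `matchingServices[].protocols` fields described above.

```shell
# List services safe for Hermes' non-streaming auxiliary calls:
# keep only those whose protocols include openai-chat-completions.
# Heredoc stands in for: antseed network peer <peerId> --json
safe=$(jq -r '.matchingServices[]
  | select(.protocols | index("openai-chat-completions") != null)
  | .id' <<'EOF'
{"matchingServices":[
  {"id":"minimax-m2.7","protocols":["openai-chat-completions"]},
  {"id":"gpt-oss-120b","protocols":["openai-responses"]},
  {"id":"claude-sonnet-4-6","protocols":["anthropic-messages","openai-chat-completions"]}
]}
EOF
)
echo "$safe"
```

In this sample, gpt-oss-120b is filtered out because its only protocol is openai-responses — exactly the streaming-only case the caveat warns about.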

If it goes wrong

Troubleshooting

  • `HTTP 400: Stream must be set to true` from auxiliary calls: You're routing through a peer that serves the model via `openai-responses` (which requires streaming), but Hermes' auxiliaries are non-streaming. Pin the `auxiliary.*` slots to a `chat_completions` model (see the config block above). Confirm a model's protocol with `antseed network peer <peerId>` — look for `protocols: openai-chat-completions` vs `openai-responses`.
  • Hermes loads the provider but every call returns `no_peer_pinned`: In the default manual flow AntSeed does not auto-select a peer — pin one with `antseed buyer connection set --peer <peerId>`, send `x-antseed-pin-peer` per request, or start the buyer with a router plugin. The session pin survives buyer-proxy restarts (it's persisted to `~/.antseed/buyer.state.json`).
  • Hermes runs on a remote host and can't reach `127.0.0.1:8377`: Either run the buyer proxy on the same host as Hermes (recommended — keeps the hot signing key local), or expose the proxy via SSH tunnel: `ssh -N -L 127.0.0.1:8377:127.0.0.1:8377 user@hermes-host`. Do not bind the buyer proxy to a public interface.
  • Want to swap the routed model without restarting AntSeed: Edit `model.default` (and `models:` if needed) in `config.yaml`, re-pin a peer that serves it (`antseed buyer connection set --peer <peerId>`), then `sudo systemctl restart hermes`. The buyer proxy stays up; no contract calls.
