OpenAI Codex CLI

OpenAI's official CLI coding agent — add an AntSeed profile to ~/.codex/config.toml.

Coding agents · OpenAI Chat Completions · ~3 min

Codex is OpenAI's terminal coding agent. Recent versions ignore `OPENAI_BASE_URL` and instead read `~/.codex/config.toml`, where you declare custom inference providers under `[model_providers]` and bundle them into named `[profiles]` you can select with `--profile`.

AntSeed plugs in as a `model_provider` pointed at the local buyer proxy. Pair it with a profile and you can swap between OpenAI proper and AntSeed by changing one flag.

Run AntSeed first

Every integration assumes a buyer proxy at http://localhost:8377. One-time setup, ~2 minutes.

Step 1

Install OpenAI Codex CLI

  • Install Codex globally
    npm install -g @openai/codex
  • Verify it runs
    codex --version

Step 2

Point OpenAI Codex CLI at AntSeed

~/.codex/config.toml
# Register AntSeed as a custom model provider.
[model_providers.antseed]
name = "AntSeed"
base_url = "http://localhost:8377/v1"
wire_api = "chat"  # or "responses" — AntSeed supports both

# Bundle the provider + a default model into a profile.
[profiles.antseed]
model = "claude-sonnet-4-6"
model_provider = "antseed"

# Optional: make AntSeed the default profile so you don't need --profile every time.
# profile = "antseed"

This must be your **user-level** `~/.codex/config.toml`. Codex ignores `model_provider` / `model_providers` if they appear in a project-local `./.codex/config.toml`, printing only a one-line warning at launch (see Troubleshooting).

No API key is needed — the AntSeed proxy authenticates every request using your local identity, not an Authorization header. If Codex prompts for a key on first run, type any non-empty value and continue.
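If you later want to A/B the two wire formats without editing config each time (see Troubleshooting), one approach is a second provider/profile pair. This is a sketch, not required setup; the `antseed-responses` names are illustrative:

```toml
# Optional second pair on the Responses wire format, so switching is
# `codex --profile antseed-responses` instead of editing wire_api.
[model_providers.antseed-responses]
name = "AntSeed (Responses)"
base_url = "http://localhost:8377/v1"
wire_api = "responses"

[profiles.antseed-responses]
model = "claude-sonnet-4-6"
model_provider = "antseed-responses"
```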

Step 3

Pick a model

  • claude-sonnet-4-6
  • deepseek-v3.1
  • kimi-k2.5
  • qwen-3-coder-480b

Set `model = "<service-id>"` inside `[profiles.antseed]`, or override per-session with `codex --profile antseed --model <service-id>`. Anything your pinned peer advertises works.

The exact list of models depends on which peer you pin. Run `antseed network browse` or open the live network page to see what's available right now.

Verify

Test it

  • See which service ids your pinned peer exposes
    curl -s http://localhost:8377/v1/models | jq '.data[].id'
    Example response
    "claude-opus-4-7"
    "claude-sonnet-4-6"
    "deepseek-v4-flash"
    "gpt-oss-120b"

    Whatever appears here is a valid value for `model = ...` inside `[profiles.antseed]` (or for `codex --profile antseed --model <id>`).

  • Run Codex against AntSeed
    codex --profile antseed

    Or pin a model for one session: `codex --profile antseed --model deepseek-v4-flash`.

  • Verify inference is actually paid through AntSeed
    open http://localhost:3118 # or: antseed buyer status
    What to look for after one real prompt
    Deposits available: 4.289391 USDC → 3.289391 USDC
    Deposits reserved:  0 USDC → 1 USDC

    The buyer dashboard at http://localhost:3118 is the authoritative real-time signal: a non-zero `Reserved` (channel opened) and/or a drop in `Available` (settled spend) after a real prompt confirms AntSeed served the request. The `antseed buyer status` CLI output is cached and may lag the dashboard — refresh the web view for confirmation.

    Do not rely on `lsof -i | grep codex` or `~/.codex/log/codex-tui.log`: Codex keeps persistent TCP connections to Cloudflare/ChatGPT IPs (e.g. 172.64.0.0/13) for non-inference purposes (the cause was not isolated during testing), and the `provider=OpenAI` lines in the TUI log are not a reliable indicator that inference went to OpenAI — the on-chain numbers can show AntSeed served the request despite that log line.
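Before launching a session, it can save a round trip to confirm that the model you put in `[profiles.antseed]` is one the pinned peer actually advertises. A minimal sketch: the JSON below mocks a `/v1/models` response (fetch the real one with `curl -s http://localhost:8377/v1/models`), and the `models` / `want` variable names are illustrative.

```shell
# Mocked /v1/models payload; replace with the live curl output.
models='{"data":[{"id":"claude-opus-4-7"},{"id":"claude-sonnet-4-6"},{"id":"deepseek-v4-flash"},{"id":"gpt-oss-120b"}]}'
want="claude-sonnet-4-6"

# jq -e exits 0 only if the select() matched something.
if echo "$models" | jq -e --arg id "$want" '.data[] | select(.id == $id)' >/dev/null; then
  echo "$want is advertised"
else
  echo "$want not advertised: pick another id" >&2
fi
```

Anything that passes this check is a valid value for `model = ...` or `codex --profile antseed --model <id>`.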

How OpenAI Codex CLI talks to AntSeed

  • Wire format sent by OpenAI Codex CLI: OpenAI Chat Completions (hits /v1/chat/completions on the buyer proxy)
  • Best-fit services: any service whose protocols array contains openai-chat-completions. That's what the peer advertises as natively-supported — zero translation overhead, no transform edge cases.
  • How to check a peer: run antseed network peer <peerId> --json and look at matchingServices[].protocols for each model. The browse command shows the same data per peer in providerServiceApiProtocols.
  • What happens when protocols don't match: AntSeed's @antseed/api-adapter translates between OpenAI Chat Completions and the service's native protocol on the fly. So a request from OpenAI Codex CLI can still reach a service that only advertises anthropic-messages — just with a small transform step.
  • One known caveat: services whose only advertised protocol is openai-responses require streaming. If OpenAI Codex CLI sends a non-streaming request and the proxy routes it to one of those services, the call fails with HTTP 400: Stream must be set to true. Pick a service whose protocols includes openai-chat-completions (or another non-responses protocol) to avoid this.
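The protocol check above can be scripted. This is a sketch assuming `antseed network peer <peerId> --json` returns a `matchingServices` array shaped like the mock below; the field names are taken from this page, but verify them against your actual `--json` output.

```shell
# Mocked `antseed network peer <peerId> --json` output.
peer_json='{"matchingServices":[
  {"id":"claude-sonnet-4-6","protocols":["anthropic-messages","openai-chat-completions"]},
  {"id":"gpt-oss-120b","protocols":["openai-responses"]}
]}'

# Keep only services Codex can hit with zero translation overhead,
# i.e. those that natively advertise openai-chat-completions.
echo "$peer_json" | jq -r '.matchingServices[]
  | select(.protocols | index("openai-chat-completions"))
  | .id'
# → claude-sonnet-4-6
```

Services that fall out of this filter still work through the `@antseed/api-adapter` translation path, with the responses-only streaming caveat noted above.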

If it goes wrong

Troubleshooting

  • `OPENAI_BASE_URL` / `OPENAI_API_KEY` are being ignored
    Expected on Codex 0.40+ — it no longer reads OpenAI env vars and only loads providers from `~/.codex/config.toml`. Use the profile shown above and launch with `codex --profile antseed`.
  • How can I tell if Codex is actually routing through AntSeed?
    Check the buyer dashboard at http://localhost:3118 (or `antseed buyer status`) after sending a test prompt. `Reserved` going from $0 to a non-zero value (a channel was opened) and/or `Available` dropping (spend settled) confirms AntSeed served the request. If both stay flat after a real prompt, the profile is not being applied. Do not trust `lsof` connections to Cloudflare IPs or `provider=OpenAI` lines in `~/.codex/log/codex-tui.log` — neither is a reliable routing signal.
  • Codex prints `Ignored unsupported project-local config keys … model_provider, model_providers`
    Provider settings must live in your **user-level** `~/.codex/config.toml`. Codex rejects them in a project-local `./.codex/config.toml` (with only that one-line warning) and falls back to its default (OpenAI). Move the `[model_providers.antseed]` and `[profiles.antseed]` blocks to `~/.codex/config.toml` and relaunch.
  • Declaring the provider on the command line via `-c model_provider=…` / `-c model_providers.antseed=…`
    Prefer `~/.codex/config.toml` + `--profile antseed`. Declaring the provider via `-c` flags has been observed to apply on `codex resume` but silently revert to OpenAI on a fresh `codex` launch. The config-file path is the only setup we reliably reproduce.
  • Streaming stops after the first chunk
    Switch `wire_api` between `"chat"` and `"responses"` in `[model_providers.antseed]`. AntSeed implements both; one may behave better with your Codex build.
  • `unknown profile: antseed`
    Codex caches the config on launch. Make sure you saved `~/.codex/config.toml`, then start a fresh `codex` session.
  • Hangs forever on first message
    No peer is pinned. Run `antseed network browse`, then `antseed buyer connection set --peer <peerId>`.
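Most of the failure modes above reduce to two questions: is the proxy up, and is the config in the right file? A pre-flight sketch, assuming a POSIX shell, `curl`, and the default ports from this guide. `check_config` and `check_proxy` are hypothetical helpers written for this page, not part of the AntSeed or Codex CLIs.

```shell
# Hypothetical helper: both antseed blocks must be present in the
# config file given as $1.
check_config() {
  grep -q '\[model_providers\.antseed\]' "$1" 2>/dev/null \
    && grep -q '\[profiles\.antseed\]' "$1" 2>/dev/null
}

# Hypothetical helper: the buyer proxy at base URL $1 should answer
# /v1/models with a 2xx.
check_proxy() {
  curl -sf "$1/v1/models" >/dev/null
}

check_config "$HOME/.codex/config.toml" \
  && echo "config: ok" \
  || echo "config: antseed provider/profile missing from ~/.codex/config.toml" >&2

check_proxy http://localhost:8377 \
  && echo "proxy: ok" \
  || echo "proxy: nothing on :8377 (is AntSeed running?)" >&2
```

If both checks pass and Codex still routes to OpenAI, re-read the `-c` flag entry above: the config-file path is the reliably reproducible setup.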
