
OpenAI Codex CLI
OpenAI's official CLI coding agent — add an AntSeed profile to ~/.codex/config.toml.
Codex is OpenAI's terminal coding agent. Recent versions ignore `OPENAI_BASE_URL` and instead read `~/.codex/config.toml`, where you declare custom inference providers under `[model_providers]` and bundle them into named `[profiles]` you can select with `--profile`.
AntSeed plugs in as a `model_provider` pointed at the local buyer proxy. Pair it with a profile and you can swap between OpenAI proper and AntSeed by changing one flag.
Run AntSeed first
Every integration assumes a buyer proxy at http://localhost:8377. One-time setup, ~2 minutes.
Step 1
Install OpenAI Codex CLI
- Install Codex globally:

  ```bash
  npm install -g @openai/codex
  ```

- Verify it runs:

  ```bash
  codex --version
  ```
Step 2
Point OpenAI Codex CLI at AntSeed
This must be your **user-level** `~/.codex/config.toml`. Codex ignores `model_provider` / `model_providers` when they appear in a project-local `./.codex/config.toml`, printing only a one-line warning at launch (see Troubleshooting).
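As a minimal sketch of what that file can contain (key names follow Codex's documented `config.toml` schema; the `base_url` assumes the default buyer-proxy port used throughout this guide, and the model id is a placeholder for whatever your pinned peer actually advertises):

```toml
# ~/.codex/config.toml  (user-level, NOT project-local)

[model_providers.antseed]
name = "AntSeed"
# Local buyer proxy; with wire_api = "chat", Codex calls /v1/chat/completions
base_url = "http://localhost:8377/v1"
wire_api = "chat"

[profiles.antseed]
model_provider = "antseed"
# Placeholder: use a service id your pinned peer exposes (see "Pick a model")
model = "gpt-oss-120b"
```

If streaming misbehaves, `wire_api` can be flipped to `"responses"` (see Troubleshooting).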
Step 3
Pick a model
Set `model = "<service-id>"` inside `[profiles.antseed]`, or override per-session with `codex --profile antseed --model <service-id>`. Anything your pinned peer advertises works.
The exact list of models depends on which peer you pin. Run `antseed network browse` or open the live network page to see what's available right now.
Verify
Test it
- See which service ids your pinned peer exposes:

  ```bash
  curl -s http://localhost:8377/v1/models | jq '.data[].id'
  ```

  Example response:

  ```
  "claude-opus-4-7"
  "claude-sonnet-4-6"
  "deepseek-v4-flash"
  "gpt-oss-120b"
  ```

  Whatever appears here is a valid value for `model = ...` inside `[profiles.antseed]` (or for `codex --profile antseed --model <id>`).
- Run Codex against AntSeed:

  ```bash
  codex --profile antseed
  ```

  Or pin a model for one session: `codex --profile antseed --model deepseek-v4-flash`.
- Verify inference is actually paid through AntSeed:

  ```bash
  open http://localhost:3118   # or: antseed buyer status
  ```

  What to look for after one real prompt:

  ```
  Deposits available: 4.289391 USDC → 3.289391 USDC
  Deposits reserved:  0 USDC → 1 USDC
  ```
The buyer dashboard at http://localhost:3118 is the authoritative real-time signal: a non-zero `Reserved` (channel opened) and/or a drop in `Available` (settled spend) after a real prompt confirms AntSeed served the request. The `antseed buyer status` CLI output is cached and may lag the dashboard — refresh the web view for confirmation. Do not rely on `lsof -i | grep codex` or `~/.codex/log/codex-tui.log`: Codex keeps persistent TCP connections to Cloudflare/ChatGPT IPs (e.g. 172.64.0.0/13) for non-inference purposes (the cause was not isolated during testing), and the `provider=OpenAI` lines in the TUI log are not a reliable indicator that inference went to OpenAI — the on-chain numbers can show AntSeed served the request despite that log line.
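The same check works without Codex at all: a streaming Chat Completions request sent straight to the buyer proxy should move the deposit numbers just like a Codex prompt does. A sketch (the model id is a placeholder; substitute one returned by `/v1/models`):

```shell
# Streaming request directly against the buyer proxy (bypasses Codex entirely).
# "stream": true also sidesteps the openai-responses streaming caveat
# described in the protocol section below.
curl -sN http://localhost:8377/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "gpt-oss-120b",
        "stream": true,
        "messages": [{"role": "user", "content": "Say hello in five words."}]
      }'
```

If this moves `Reserved` / `Available` on the dashboard but a Codex prompt does not, the problem is the Codex profile, not AntSeed.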
How OpenAI Codex CLI talks to AntSeed
- Wire format sent by OpenAI Codex CLI: OpenAI Chat Completions (hits `/v1/chat/completions` on the buyer proxy).
- Best-fit services: any service whose `protocols` array contains `openai-chat-completions`. That's what the peer advertises as natively supported — zero translation overhead, no transform edge cases.
- How to check a peer: run `antseed network peer <peerId> --json` and look at `matchingServices[].protocols` for each model. The browse command shows the same data per peer in `providerServiceApiProtocols`.
- What happens when protocols don't match: AntSeed's `@antseed/api-adapter` translates between OpenAI Chat Completions and the service's native protocol on the fly. So a request from OpenAI Codex CLI can still reach a service that only advertises `anthropic-messages` — just with a small transform step.
- One known caveat: services whose only advertised protocol is `openai-responses` require streaming. If OpenAI Codex CLI sends a non-streaming request and the proxy routes it to one of those services, the call fails with `HTTP 400: Stream must be set to true`. Pick a service whose `protocols` includes `openai-chat-completions` (or another non-responses protocol) to avoid this.
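The peer check above can be scripted. Assuming the peer JSON exposes each service's `protocols` array under `matchingServices[]` (the `id` field name here is illustrative, not confirmed), a jq filter picks out natively compatible services; in practice you would pipe `antseed network peer <peerId> --json` into the same filter:

```shell
# Sample peer JSON with the shape described above ("id" is an assumed field).
# Real usage: antseed network peer <peerId> --json | jq -r '<same filter>'
cat <<'EOF' | jq -r '.matchingServices[]
                     | select(.protocols | index("openai-chat-completions"))
                     | .id'
{
  "matchingServices": [
    { "id": "gpt-oss-120b",      "protocols": ["openai-chat-completions", "openai-responses"] },
    { "id": "claude-sonnet-4-6", "protocols": ["anthropic-messages"] }
  ]
}
EOF
# prints: gpt-oss-120b
```

Services that survive the filter need no transform step; the `anthropic-messages`-only service is dropped because it would go through the api-adapter instead.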
If it goes wrong
Troubleshooting
- **`OPENAI_BASE_URL` / `OPENAI_API_KEY` are being ignored.** Expected on Codex 0.40+ — it no longer reads OpenAI env vars and only loads providers from `~/.codex/config.toml`. Use the profile shown above and launch with `codex --profile antseed`.
- **How can I tell if Codex is actually routing through AntSeed?** Check the buyer dashboard at http://localhost:3118 (or `antseed buyer status`) after sending a test prompt. `Reserved` going from $0 to a non-zero value (a channel was opened) and/or `Available` dropping (spend settled) confirms AntSeed served the request. If both stay flat after a real prompt, the profile is not being applied. Do not trust `lsof` connections to Cloudflare IPs or `provider=OpenAI` lines in `~/.codex/log/codex-tui.log` — neither is a reliable routing signal.
- **Codex prints `Ignored unsupported project-local config keys … model_provider, model_providers`.** Provider settings must live in your **user-level** `~/.codex/config.toml`. Codex rejects them in a project-local `./.codex/config.toml` and falls back to its default (OpenAI). Move the `[model_providers.antseed]` and `[profiles.antseed]` blocks to `~/.codex/config.toml` and relaunch.
- **Declaring the provider on the command line via `-c model_provider=…` / `-c model_providers.antseed=…`.** Prefer `~/.codex/config.toml` + `--profile antseed`. Declaring the provider via `-c` flags has been observed to apply on `codex resume` but silently revert to OpenAI on a fresh `codex` launch. The config-file path is the only setup we reliably reproduce.
- **Streaming stops after the first chunk.** Switch `wire_api` between `"chat"` and `"responses"` in `[model_providers.antseed]`. AntSeed implements both; one may behave better with your Codex build.
- **`unknown profile: antseed`.** Codex reads the config once at launch. Make sure you saved `~/.codex/config.toml`, then start a fresh `codex` session.
- **Hangs forever on first message.** No peer is pinned. Run `antseed network browse`, then `antseed buyer connection set --peer <peerId>`.