LangChain (Python)
Drop-in `ChatOpenAI(base_url=…)` — works in chains, LCEL, and LangGraph agents.
What LangChain is. LangChain is a Python framework for composing LLMs with tools, retrievers, memory, and agents. The chat-model interface is `BaseChatModel`; `ChatOpenAI` from `langchain-openai` is a concrete subclass that speaks the OpenAI Chat Completions wire format.
How AntSeed plugs in. Pass `base_url="http://localhost:8377/v1"` and any non-empty `api_key` to `ChatOpenAI`. Once you have an instance, every primitive that accepts a chat model — LCEL pipes (`prompt | llm | parser`), tool-calling agents, `create_react_agent`, LangGraph nodes, RAG chains, structured-output binding via `with_structured_output` — will route through AntSeed without any further changes.
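As a sketch of that drop-in wiring (the model id `claude-sonnet-4-6` and the `api_key` value are placeholders; use any id your pinned peer advertises), a minimal LCEL pipe looks like this:

```python
# Minimal LCEL sketch: prompt | llm | parser, routed through the AntSeed
# buyer proxy. Assumes the proxy is running on localhost:8377.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="claude-sonnet-4-6",            # placeholder: any id your peer advertises
    base_url="http://localhost:8377/v1",  # the AntSeed buyer proxy
    api_key="antseed",                    # any non-empty string
)

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"text": "LangChain composes LLMs with tools, retrievers, and agents."}))
```

Everything downstream of `llm` is stock LangChain; only the two constructor arguments change.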
One thing to know. LangChain's `ChatOpenAI` is OpenAI-strict by design: it will not preserve non-standard response fields like `reasoning_content`, `reasoning`, or `reasoning_details` that some third-party servers emit. For chat, tool calling, and structured output this is fine. If you specifically need a model's reasoning traces, consider using the AntSeed buyer proxy with the OpenAI Responses endpoint (`/v1/responses`) via a different provider package, or use a model that returns reasoning inline.
Run AntSeed first
Every integration assumes a buyer proxy at http://localhost:8377. One-time setup, ~2 minutes.
Before you start
Prerequisites
- Python 3.10 or newer
Step 1
Install LangChain (Python)
- Install LangChain and the OpenAI integration:

  ```shell
  pip install -U langchain langchain-openai
  ```
Step 2
Point LangChain (Python) at AntSeed
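A minimal script is enough; assuming the buyer proxy from the setup step is running, save this as `antseed_llm.py` (the model id is a placeholder; swap in one your pinned peer advertises):

```python
# antseed_llm.py: point LangChain's ChatOpenAI at the local AntSeed buyer proxy.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="claude-sonnet-4-6",            # placeholder: pick an id from `antseed network browse`
    base_url="http://localhost:8377/v1",  # the AntSeed buyer proxy
    api_key="antseed",                    # any non-empty string is accepted
)

print(llm.invoke("Hello!").content)
```

No AntSeed-specific SDK is involved; `base_url` and `api_key` are the only changes versus talking to OpenAI directly.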
Step 3
Pick a model
Pick services whose `protocols` array includes `openai-chat-completions` (most do natively; the rest are translated automatically by `@antseed/api-adapter`). Tool calling and structured output rely on the service supporting OpenAI-style function-call syntax — confirm with a quick smoke test before building large agents.
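A quick smoke test along those lines, assuming a proxy on localhost:8377 and a placeholder model id; if the service honors OpenAI function-call syntax, `with_structured_output` returns a populated object:

```python
# Smoke test: confirm the pinned peer handles OpenAI-style tool calls
# before building larger agents on top of it.
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class City(BaseModel):
    name: str
    country: str

llm = ChatOpenAI(
    model="claude-sonnet-4-6",            # placeholder model id
    base_url="http://localhost:8377/v1",
    api_key="antseed",
)

structured = llm.with_structured_output(City)  # function-calling under the hood
result = structured.invoke("Which city is the Eiffel Tower in?")
print(result)  # a populated City(...) if tool-call syntax is supported
```

If the fields come back empty, see the structured-output entry in the troubleshooting section below.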
The exact list of models depends on which peer you pin. Run `antseed network browse` or open the live network page to see what's available right now.
Verify
Test it
- Run the basic example:

  ```shell
  python antseed_llm.py
  ```

  Example output:

  ```
  Hello! How can I help you today?
  ```
- Per-request peer override (no session pin needed):

  ```python
  # extra_headers is forwarded as-is to the proxy.
  from langchain_openai import ChatOpenAI

  llm = ChatOpenAI(
      model="claude-sonnet-4-6",
      base_url="http://localhost:8377/v1",
      api_key="antseed",
      extra_headers={
          "x-antseed-pin-peer": "cccccccccccccccccccccccccccccccccccccccc",
      },
  )
  print(llm.invoke("hi").content)
  ```
Use this when a single Python process needs to fan out to different peers per call (multi-tenant, scheduled jobs, A/B tests across peers).
- Verify it actually went through AntSeed:

  ```shell
  antseed buyer metering
  ```
`buyer metering` reads the local SQLite log and prints per-channel token + USDC totals. After your `python` call, the channel for the peer you pinned should show non-zero input/output tokens. (`buyer status` is a snapshot view — it shows the active-channel count but not per-call usage.)
How LangChain (Python) talks to AntSeed
- Wire format sent by LangChain (Python): OpenAI Chat Completions (hits `/v1/chat/completions` on the buyer proxy).
- Best-fit services: any service whose `protocols` array contains `openai-chat-completions`. That's what the peer advertises as natively supported — zero translation overhead, no transform edge cases.
- How to check a peer: run `antseed network peer <peerId> --json` and look at `matchingServices[].protocols` for each model. The browse command shows the same data per peer in `providerServiceApiProtocols`.
- What happens when protocols don't match: AntSeed's `@antseed/api-adapter` translates between OpenAI Chat Completions and the service's native protocol on the fly. So a request from LangChain (Python) can still reach a service that only advertises `anthropic-messages` — just with a small transform step.
- One known caveat: services whose only advertised protocol is `openai-responses` require streaming. If LangChain (Python) sends a non-streaming request and the proxy routes it to one of those services, the call fails with `HTTP 400: Stream must be set to true`. Pick a service whose `protocols` includes `openai-chat-completions` (or another non-responses protocol) to avoid this.
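To check what a peer advertises before routing to it, the peer-inspection command above can be filtered with `jq` (assumed installed; `<peerId>` is a placeholder to fill in):

```shell
# List the protocols each matching service advertises for a given peer.
antseed network peer <peerId> --json | jq '.matchingServices[].protocols'
```

If `openai-chat-completions` appears for the model you want, no translation step is involved.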
If it goes wrong
Troubleshooting
- `openai.NotFoundError: 404 … model_not_found`: The pinned peer does not advertise the id you passed. Confirm with `curl http://localhost:8377/v1/models | jq` and either pin a different peer or change the `model=` argument.
- `openai.APIConnectionError: Connection refused`: The buyer proxy is not running. Start it with `antseed buyer start` (or open AntStation desktop). Confirm `curl http://localhost:8377/v1/models` works before retrying from Python.
- `with_structured_output` returns the right schema but empty fields: Either the model behind the pinned peer does not support OpenAI tool-call syntax, or you used `method="json_mode"` against a service that does not honor it. Try `method="function_calling"` (the default), and prefer services tagged `coding` or `tools` in `antseed network peer <peerId> --json`.
- Streaming with `stream=True` truncates mid-response: A buffering proxy (nginx, Cloudflare) sits between your code and the buyer proxy. The AntSeed proxy itself does not buffer SSE. Either bypass the intermediate proxy or turn its buffering off (`proxy_buffering off;` in nginx).
- Reasoning traces missing on a model you know emits them: See "One thing to know" above: `langchain-openai` does not preserve non-standard response fields. For first-class reasoning support, route the request through the OpenAI Responses endpoint (`POST /v1/responses` on the proxy) using a Responses-aware client, or pick a model that puts reasoning inline in `content`.