LangChain (Python)

Drop-in `ChatOpenAI(base_url=…)` — works in chains, LCEL, and LangGraph agents.

What LangChain is. LangChain is a Python framework for composing LLMs with tools, retrievers, memory, and agents. Its chat-model interface is BaseChatModel; ChatOpenAI from langchain-openai is a concrete subclass that speaks the OpenAI Chat Completions wire format.

How AntSeed plugs in. Pass base_url="http://localhost:8377/v1" and any non-empty api_key to ChatOpenAI. Once you have an instance, every primitive that accepts a chat model — LCEL pipes (prompt | llm | parser), tool-calling agents, create_react_agent, LangGraph nodes, RAG chains, structured-output binding via with_structured_output — will route through AntSeed without any further changes.

One thing to know. LangChain's ChatOpenAI is OpenAI-strict by design: it will not preserve non-standard response fields like reasoning_content, reasoning, or reasoning_details that some third-party servers emit. For chat, tool-calling, and structured output this is fine. If you specifically need a model's reasoning traces, consider using the AntSeed buyer proxy with the OpenAI Responses endpoint (/v1/responses) via a different provider package, or use a model that returns reasoning inline.
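If you do go through the Responses endpoint, reasoning arrives as dedicated output items rather than non-standard fields on a chat completion. A minimal sketch of separating the two, assuming the standard Responses item shapes (`"reasoning"` items carrying a `summary` list, `"message"` items carrying `output_text` content); whether a given service emits reasoning items at all is model-dependent:

```python
# Hedged sketch: split a Responses-style output list into reasoning
# summaries and answer text. Item shapes follow the OpenAI Responses
# format; availability of reasoning items depends on the model.
def split_output(items: list) -> tuple:
    reasoning, text = [], []
    for item in items:
        if item.get("type") == "reasoning":
            reasoning.extend(s.get("text", "") for s in item.get("summary", []))
        elif item.get("type") == "message":
            text.extend(
                c.get("text", "")
                for c in item.get("content", [])
                if c.get("type") == "output_text"
            )
    return reasoning, text

sample = [
    {"type": "reasoning", "summary": [{"type": "summary_text", "text": "thinking"}]},
    {"type": "message", "content": [{"type": "output_text", "text": "answer"}]},
]
print(split_output(sample))  # → (['thinking'], ['answer'])
```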

Run AntSeed first

Every integration assumes a buyer proxy at http://localhost:8377. One-time setup, ~2 minutes.

Before you start

Prerequisites

  • Python 3.10 or newer

Step 1

Install LangChain (Python)

  • Install LangChain, the OpenAI integration, and LangGraph (used by the agent example in Step 2)
    pip install -U langchain langchain-openai langgraph

Step 2

Point LangChain (Python) at AntSeed

```python
# antseed_llm.py — import this once, reuse everywhere.
from langchain_openai import ChatOpenAI

antseed = ChatOpenAI(
    model="claude-sonnet-4-6",           # an AntSeed service id
    base_url="http://localhost:8377/v1",
    api_key="antseed",                   # any non-empty string
    temperature=0.7,
    # max_completion_tokens=2048,        # uncomment for hard caps
)

print(antseed.invoke("Hello").content)
```
```python
# pipeline.py — LCEL chain. Identical to OpenAI; the swap is invisible.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

from antseed_llm import antseed

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical writer."),
    ("human", "Explain {topic} in one paragraph."),
])

chain = prompt | antseed | StrOutputParser()
print(chain.invoke({"topic": "payment channels"}))
```
```python
# tools.py — tool-calling agent. Works because AntSeed forwards OpenAI tool calls verbatim.
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

from antseed_llm import antseed

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"It's 22°C and sunny in {city}."

agent = create_react_agent(antseed, [get_weather])
result = agent.invoke({
    "messages": [("user", "What's the weather in Lisbon?")]
})
print(result["messages"][-1].content)
```

Step 3

Pick a model

  • claude-sonnet-4-6
  • deepseek-v4-flash
  • gpt-oss-120b
  • qwen3-coder-480b

Pick services whose `protocols` array includes `openai-chat-completions` (most do natively; the rest are translated automatically by `@antseed/api-adapter`). Tool calling and structured output rely on the service supporting OpenAI-style function-call syntax — confirm with a quick smoke test before building large agents.

The exact list of models depends on which peer you pin. Run antseed network browse or open the live network page to see what's available right now.

Verify

Test it

  • Run the basic example
    python antseed_llm.py
    Example output
    Hello! How can I help you today?
  • Per-request peer override (no session pin needed)
    # extra_headers is forwarded as-is to the proxy.
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(
        model="claude-sonnet-4-6",
        base_url="http://localhost:8377/v1",
        api_key="antseed",
        extra_headers={
            "x-antseed-pin-peer": "cccccccccccccccccccccccccccccccccccccccc",
        },
    )
    print(llm.invoke("hi").content)

    Use this when a single Python process needs to fan out to different peers per call (multi-tenant, scheduled jobs, A/B tests across peers).

  • Verify it actually went through AntSeed
    antseed buyer metering

    `buyer metering` reads the local SQLite log and prints per-channel token + USDC totals. After your `python` call, the channel for the peer you pinned should show non-zero input/output tokens. (`buyer status` is a snapshot view — it shows the active-channel count but not per-call usage.)
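For the fan-out scenario, the per-request override can be factored into a small kwargs builder so each tenant or job gets its own pinned instance; a sketch mirroring the `extra_headers` usage shown earlier (the helper name is illustrative):

```python
# Hedged sketch: build per-peer ChatOpenAI kwargs for per-call fan-out
# (multi-tenant, scheduled jobs, A/B tests across peers).
PROXY = "http://localhost:8377/v1"

def peer_kwargs(model: str, peer_id: str) -> dict:
    """ChatOpenAI kwargs pinned to one peer via the override header."""
    return {
        "model": model,
        "base_url": PROXY,
        "api_key": "antseed",
        "extra_headers": {"x-antseed-pin-peer": peer_id},
    }

# Usage with a live proxy (peer id is a placeholder):
#   from langchain_openai import ChatOpenAI
#   llm = ChatOpenAI(**peer_kwargs("claude-sonnet-4-6", "<peer-id>"))
print(peer_kwargs("claude-sonnet-4-6", "abc")["extra_headers"])
# → {'x-antseed-pin-peer': 'abc'}
```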

How LangChain (Python) talks to AntSeed

  • Wire format sent by LangChain (Python): OpenAI Chat Completions (hits /v1/chat/completions on the buyer proxy)
  • Best-fit services: any service whose protocols array contains openai-chat-completions. That's what the peer advertises as natively-supported — zero translation overhead, no transform edge cases.
  • How to check a peer: run antseed network peer <peerId> --json and look at matchingServices[].protocols for each model. The browse command shows the same data per peer in providerServiceApiProtocols.
  • What happens when protocols don't match: AntSeed's @antseed/api-adapter translates between OpenAI Chat Completions and the service's native protocol on the fly. So a request from LangChain (Python) can still reach a service that only advertises anthropic-messages — just with a small transform step.
  • One known caveat: services whose only advertised protocol is openai-responses require streaming. If LangChain (Python) sends a non-streaming request and the proxy routes it to one of those services, the call fails with HTTP 400: Stream must be set to true. Pick a service whose protocols includes openai-chat-completions (or another non-responses protocol) to avoid this.
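The peer check above can be scripted; a minimal sketch that filters `antseed network peer <peerId> --json` output down to natively compatible services. The `matchingServices[].protocols` path follows the CLI output described above, but the `id` field name on each service entry is an assumption:

```python
# Hedged sketch: list services that advertise openai-chat-completions
# natively (zero translation overhead). The "id" field name is assumed.
import json

def native_chat_services(peer_json: str) -> list:
    peer = json.loads(peer_json)
    return [
        svc.get("id")
        for svc in peer.get("matchingServices", [])
        if "openai-chat-completions" in svc.get("protocols", [])
    ]

sample = json.dumps({"matchingServices": [
    {"id": "claude-sonnet-4-6", "protocols": ["openai-chat-completions"]},
    {"id": "responses-only-model", "protocols": ["openai-responses"]},
]})
print(native_chat_services(sample))  # → ['claude-sonnet-4-6']
```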

If it goes wrong

Troubleshooting

  • `openai.NotFoundError: 404 … model_not_found`
    The pinned peer does not advertise the id you passed. Confirm with `curl http://localhost:8377/v1/models | jq` and either pin a different peer or change the `model=` argument.
  • `openai.APIConnectionError: Connection refused`
    The buyer proxy is not running. Start it with `antseed buyer start` (or open AntStation desktop). Confirm `curl http://localhost:8377/v1/models` works before retrying from Python.
  • `with_structured_output` returns the right schema but empty fields
    Either the model behind the pinned peer does not support OpenAI tool-call syntax, or you used `method="json_mode"` against a service that does not honor it. Try `method="function_calling"` (the default), and prefer services tagged `coding` or `tools` in `antseed network peer <peerId> --json`.
  • Streaming with `stream=True` truncates mid-response
    A buffering proxy (nginx, Cloudflare) sits between your code and the buyer proxy. The AntSeed proxy itself does not buffer SSE. Either bypass the intermediate proxy or turn its buffering off (`proxy_buffering off;` in nginx).
  • Reasoning traces missing on a model you know emits them
    See the "One thing to know" note above: `langchain-openai` does not preserve non-standard response fields. For first-class reasoning support, route the request through the OpenAI Responses endpoint (`POST /v1/responses` on the proxy) using a Responses-aware client, or pick a model that puts reasoning inline in `content`.
