
Open Responses vs. Chat Completion: A new era for AI apps

Portable, provider-agnostic AI applications are the future of agentic development. For the past few years, OpenAI’s Chat Completions API has been the de facto standard for interacting with LLMs, supported by major model providers, open source serving platforms, and AI gateways. While it served well during the stateless chatbot era, it lacks many of the capabilities that agents now require.

OpenAI officially transitioned from the Chat Completions API to the Responses API in March 2025. Unlike its predecessor, the Responses API natively supports stateful conversations for multi-turn interactions, and its server-side caching and reasoning support markedly improve performance. Other capabilities include built-in tool integration, semantic event streaming, and multimodal inputs such as text and images.
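
As a minimal sketch with the official openai Python SDK (the model name and prompts are illustrative), statefulness reduces a multi-turn exchange to passing an ID:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# First turn: the server stores the conversation state.
first = client.responses.create(
    model="gpt-4o-mini",
    input="Summarize the trade-offs of stateful vs. stateless chat APIs.",
)
print(first.output_text)

# Second turn: send only the new input; previous_response_id restores
# context from the server side instead of re-uploading the whole history.
second = client.responses.create(
    model="gpt-4o-mini",
    previous_response_id=first.id,
    input="Now compress that into one sentence.",
)
print(second.output_text)
```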

Recently, OpenAI created a specification called Open Responses in collaboration with major ecosystem players, including Nvidia, Vercel, OpenRouter, Hugging Face, LM Studio, Ollama, and vLLM. Based on the Responses API, the specification is meant for building multi-provider, interoperable LLM interfaces. It defines a shared schema, client library, and tooling layer that enable a unified experience independent of the model type and model provider.

For data scientists and AI developers building intelligent applications, understanding Open Responses is essential. The patterns mirror familiar API concepts from OpenAI’s ecosystem, including chat completions for message exchanges, tool calls for function invocation, streaming outputs for real-time responses, and multimodal inputs for handling text and images.
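
Multimodal input, for example, follows the Responses API’s item structure. In a rough sketch (the model name and image URL are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# A single request mixing text and an image as typed content items.
resp = client.responses.create(
    model="gpt-4o-mini",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What is in this image?"},
            {"type": "input_image", "image_url": "https://example.com/cat.png"},
        ],
    }],
)
print(resp.output_text)
```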

This article breaks down Open Responses using these accessible parallels, delivering clarity for practitioners who operate in production environments.

The problems that Open Responses solves

Modern LLM applications have outgrown the chatbot paradigm. Developers building autonomous agents need models that reason over multiple steps, invoke tools autonomously, and maintain context across complex workflows. Yet the ecosystem remains fragmented around the Chat Completions format, a specification originally designed for turn-based conversations that falls short for agentic use cases.

The mismatch manifests in several concrete problems:

Manual state management: Chat Completions is stateless, requiring developers to shuttle entire conversation histories back and forth with each request.

Tool orchestration complexity: Multi-step tool calling requires manual “loop until done” logic in application code (see the sketch after this list).

Lost reasoning context: Reasoning tokens from models like o3 and o4-mini are discarded between turns, degrading performance on agentic tasks.

No built-in capabilities: Web search, file retrieval, and code execution require custom infrastructure.
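
To make the orchestration problem concrete, here is a hedged sketch of the manual tool loop that Chat Completions pushes onto the client; get_weather and its schema are hypothetical stand-ins for real tools:

```python
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    return json.dumps({"city": city, "temp_c": 21})  # stubbed tool result

messages = [{"role": "user", "content": "What's the weather in Berlin?"}]
while True:
    # Every iteration re-uploads the full, growing history.
    resp = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )
    msg = resp.choices[0].message
    if not msg.tool_calls:  # model is done calling tools
        print(msg.content)
        break
    messages.append(msg)  # keep the assistant's tool-call turn in history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": get_weather(**args),
        })
```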

Though OpenAI addressed these limitations with the Responses API (/v1/responses) in March 2025, it has remained an opaque, proprietary interface. Open Responses defines a consistent request/response shape that any provider can implement. In practical terms, it lets you keep one client integration while switching the backend model runtime. If you’ve ever maintained multiple SDKs for multiple model providers, you already understand the pain this removes.
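
As a sketch of what that portability looks like in practice, the same openai Python client can target either backend; the local base_url and model names below are assumptions, not part of the spec:

```python
from openai import OpenAI

# Hosted frontier model (api.openai.com).
hosted = OpenAI()

# Local runtime exposing the same /v1/responses contract, e.g. an Ollama
# or vLLM server; the URL shown is a common local default.
local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def ask(client: OpenAI, model: str, prompt: str) -> str:
    # Identical request shape regardless of who serves it.
    return client.responses.create(model=model, input=prompt).output_text

print(ask(hosted, "gpt-4o-mini", "ping"))
print(ask(local, "llama3.2", "ping"))
```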

For teams running both a hosted frontier model and a local open source model, the lack of a unified API forces branching logic throughout the application. With Open Responses, the integration stays stable and only the routing changes. This pattern of stable contracts with swappable implementations is what keeps real-world systems robust and maintainable.

Comparing the Chat Completions API with Open Responses

To understand the magnitude of the shift, we must compare the developer experience and architectural footprint of the two paradigms.

| Feature | Legacy Chat Completions (/v1/chat/completions) | Open Responses (/v1/responses) |
| --- | --- | --- |
| Control logic | Client-side: developer writes while loops, parses JSON, handles retries | Server-side: developer declares intent; server manages the loop/state machine |
| State | Stateless: history must be re-uploaded with every request | Stateful: previous_response_id loads context from the server cache |
| Streaming | Token deltas: raw text chunks that are hard to parse into structures | Semantic events: typed events (tool.start, tool.end, content.add) |
| Tool execution | Client-driven: client executes tools and re-prompts; high latency | Server-driven: server executes internal tools and manages flow for external ones |
| Reasoning | Implicit: mixed into content or hacked in via thinking tags | Explicit: dedicated fields (content, encrypted_content, summary) |
| Multimodality | Bolted on: images sent as URLs in text messages | Native: polymorphic Items treat images and video as first-class citizens |
| Network traffic | High: N round-trips for N steps, with the full history uploaded each time | Low: one request for N steps; only the delta input is uploaded |
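
The streaming row is worth making concrete. A minimal sketch of consuming semantic events with the openai Python SDK (the model name is illustrative, and real streams emit more event types than are handled here):

```python
from openai import OpenAI

client = OpenAI()

stream = client.responses.create(
    model="gpt-4o-mini",
    input="Stream a haiku about typed events.",
    stream=True,
)

for event in stream:
    # Each event carries a type, not just a raw text chunk.
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
    elif event.type == "response.completed":
        print()  # final newline once the response is done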

Ecosystem backing

The launch partners give the Open Responses specification broad ecosystem coverage:

| Provider | Implementation status |
| --- | --- |
| OpenAI | Full Responses API (original) |
| Hugging Face | Inference Providers integration; early access via Spaces |
| OpenRouter | Launch partner, enabling “almost every existing model” |
| NVIDIA NIM | Experimental /v1/responses endpoint support |
| Ollama | Added in v0.13.3, non-stateful flavor |
| vLLM | Full Responses API-compatible server |
| LM Studio | Open Responses-compliant endpoint |
| Azure OpenAI | Full Responses API via Microsoft |

The beginning of the agentic era

The introduction of Open Responses marks the end of the “Chatbot Era” and the beginning of the “Agentic Era.” For too long, developers have struggled with the “Square Peg, Round Hole” problem of forcing autonomous behaviors into conversational APIs. The resulting “Agentic Hell” of brittle, high-latency, client-side loops held back the true potential of AI.

Open Responses solves this by recognizing that Agency is an Infrastructure Problem, not just a model capability problem. By standardizing the Agentic Loop, defining polymorphic Items, and solving the state management crisis, it provides the robust foundation needed to build the next generation of software.

For the enterprise, the path is clear: Adopt the standard to future-proof applications against vendor lock-in and enable hybrid-cloud deployments via Nvidia NIM and local models. For the open source community, the standard provides a rallying cry — a shared language that allows a federated ecosystem of models, tools, and routers to compete with the monolithic silos of the proprietary giants.


We are no longer just chatting with text. We are orchestrating cognition, and Open Responses is the conductor’s baton.
