November 13, 2025
Today we're releasing GPT-5.1 in the API platform, the next model in the GPT-5 series, balancing intelligence and speed for a wide range of agentic and coding tasks. GPT-5.1 dynamically adapts how much time it spends thinking based on the complexity of the task, making the model significantly faster and more token-efficient on simpler everyday tasks. The model also features a "no reasoning" mode to respond faster on tasks that don't require deep thinking, while maintaining frontier intelligence.
To make GPT-5.1 even more efficient, we're releasing extended prompt caching with up to 24-hour cache retention, driving faster responses to follow-up questions at a lower cost. Our Priority Processing customers will also see noticeably faster performance with GPT-5.1 than with GPT-5.
On coding, we've worked closely with startups like Cursor, Cognition, Augment Code, Factory, and Warp to improve GPT-5.1's coding personality, steerability, and code quality. In general, GPT-5.1 feels more intuitive to use for coding and more communicative with user-facing updates as it completes tasks.
Finally, we're introducing two new tools with GPT-5.1: an apply_patch tool designed to edit code more reliably, and a shell tool that lets the model run shell commands.
GPT-5.1 is the next advancement in the GPT-5 series, and we plan to continue to invest in more intelligent and capable models to help developers build reliable agentic workflows.
To make GPT-5.1 faster, we overhauled the way we trained it to think. On straightforward tasks, GPT-5.1 spends fewer tokens thinking, enabling snappier product experiences and lower token bills. On difficult tasks that require extra thinking, GPT-5.1 remains persistent, exploring options and checking its work in order to maximize reliability.
Balyasny Asset Management said GPT-5.1 "outperformed both GPT-4.1 and GPT-5 in our full dynamic evaluation suite, while running 2-3x faster than GPT-5." They also said that across their tool-heavy reasoning tasks, GPT-5.1 "consistently used about half as many tokens as leading competitors at similar or better quality." Similarly, Pace, an AI insurance BPO, tested the model and said their agents run "50% faster on GPT-5.1 while exceeding accuracy of GPT-5 and other leading models across our evals."
GPT-5.1 varies its thinking time more dynamically than GPT-5. On a representative distribution of ChatGPT tasks, GPT-5.1 is much faster at the easier tasks, even at high reasoning effort.
As an example, when asked "show an npm command to list globally installed packages", GPT-5.1 answers in 2 seconds instead of 10 seconds.
Prompt: show an npm command to list globally installed packages

GPT-5.1 answers directly:

npm list -g --depth=0
Developers can now use GPT-5.1 without reasoning by setting reasoning_effort to 'none'. This makes the model behave like a non-reasoning model for latency-sensitive use cases, with the high intelligence of GPT-5.1 and the added bonus of performant tool calling. Relative to GPT-5 with 'minimal' reasoning, GPT-5.1 with no reasoning is better at parallel tool calling (which itself increases end-to-end task completion speed), coding tasks, following instructions, and using search tools, and it supports web search in our API platform. Sierra shared that GPT-5.1 in "no reasoning" mode showed a "20% improvement on low-latency tool calling performance compared to GPT-5 minimal reasoning" in their real-world evals.
With the introduction of 'none' as a value in reasoning_effort, developers now have even more flexibility and control over the balance between speed, cost, and intelligence for their use case. GPT-5.1 defaults to 'none', which is ideal for latency-sensitive workloads. We recommend developers choose 'low' or 'medium' for tasks of higher complexity and 'high' when intelligence and reliability matter more than speed.
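As a minimal sketch of what this looks like in practice, the helper below assembles a Chat Completions request body with an explicit reasoning_effort. The model name and field values come from this post; the helper itself and the validation set are illustrative, not part of any SDK, and it only builds the JSON payload rather than calling the API.

```python
import json

# Hypothetical helper: builds a request body with a chosen reasoning effort.
# 'none' skips reasoning for latency-sensitive calls; 'low'/'medium'/'high'
# trade speed for more deliberate thinking, per the guidance above.
VALID_EFFORTS = {"none", "low", "medium", "high"}

def build_request(prompt: str, effort: str = "none") -> dict:
    if effort not in VALID_EFFORTS:
        raise ValueError(f"unsupported reasoning_effort: {effort!r}")
    return {
        "model": "gpt-5.1",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }

payload = build_request("show an npm command to list globally installed packages")
print(json.dumps(payload, indent=2))
```

A workload could then switch efforts per request, using 'none' for quick lookups and 'high' for tasks where reliability matters more than speed.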
Extended caching improves reasoning efficiency by allowing prompts to remain active in the cache for up to 24 hours, rather than the few minutes supported today. With a longer retention window, more follow-up requests can leverage cached context, resulting in lower latency, reduced cost, and smoother performance for long-running interactions such as multi-turn chat, coding sessions, or knowledge retrieval workflows.
Prompt cache pricing remains unchanged: cached input tokens are 90% cheaper than uncached tokens, with no additional charge for cache writes or storage. To use extended caching with GPT-5.1, add the parameter prompt_cache_retention='24h' on the Responses or Chat Completions API. See the prompt caching docs for more detail.
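Concretely, opting a request into extended caching is a one-field change. In this sketch, the prompt_cache_retention value '24h' is taken from the text above; the surrounding request shape is illustrative, not the full API schema:

```python
# Hypothetical helper: returns a copy of a request payload opted into
# 24-hour prompt cache retention, leaving the caller's payload untouched.
def with_extended_cache(payload: dict) -> dict:
    out = dict(payload)
    out["prompt_cache_retention"] = "24h"
    return out

base = {"model": "gpt-5.1", "input": "summarize our last session"}
cached = with_extended_cache(base)
print(cached["prompt_cache_retention"])
```

Requests that share the same prompt prefix within the retention window can then hit the cache instead of paying full input-token cost.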
GPT-5.1 builds on GPT-5's coding capabilities with a more steerable coding personality, less overthinking, improved code quality, better user-targeted update messages (preambles) during sequences of tool calls, and more functional frontend designs, especially at low reasoning effort.
On simpler coding tasks like quick code edits, GPT-5.1's faster speeds make it easier to iterate back and forth, and those speedups on simple tasks don't degrade performance on difficult ones. On SWE-bench Verified, GPT-5.1 works even longer than GPT-5 and reaches 76.3%.
In SWE-bench Verified, a model is given a code repository and an issue description, and must generate a patch that solves the issue. Labels indicate reasoning effort. Accuracy is averaged across all 500 problems. All models used a harness with the JSON-based apply_patch tool.
We got early feedback on GPT-5.1 from a handful of coding companies. Here are their impressions:
We're introducing two new tools with GPT-5.1 to help developers get the most out of the model in the Responses API: a freeform apply_patch tool to make code edits even more reliable without the need for JSON escaping, and a shell tool that lets the model write commands to run on your local machine.
The freeform apply_patch tool lets GPT-5.1 create, update, and delete files in a codebase using structured diffs. Instead of just suggesting edits, the model emits patch operations that an application applies and reports back on, enabling iterative, multi-step code-editing workflows.
To use the apply_patch tool in the Responses API, include it in the tools array with "tools": [{"type": "apply_patch"}] and either include file content in your input or give the model tools for interacting with your file system. The model will generate apply_patch_call items containing diffs for creating, updating, or deleting files, which you apply on your file system. For more information on how to integrate with the apply_patch tool, check out our developer documentation.
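The integrator's side of that loop can be sketched as a small dispatcher over patch operations. The exact schema of apply_patch_call items is defined in the developer documentation; the "op"/"path"/"content" fields below are illustrative placeholders, and an in-memory dict stands in for the file system:

```python
# Hypothetical sketch of applying a model-emitted patch operation.
# Real integrations would parse apply_patch_call items and touch real files;
# here a dict maps paths to file contents for a self-contained illustration.
def apply_operation(files: dict, op: dict) -> dict:
    files = dict(files)  # copy so the caller's state is untouched
    if op["op"] in ("create", "update"):
        files[op["path"]] = op["content"]
    elif op["op"] == "delete":
        files.pop(op["path"], None)
    else:
        raise ValueError(f"unknown op: {op['op']!r}")
    return files

repo = {"app.py": "print('hi')\n"}
repo = apply_operation(repo, {"op": "update", "path": "app.py", "content": "print('hello')\n"})
repo = apply_operation(repo, {"op": "create", "path": "util.py", "content": ""})
print(sorted(repo))
```

After each applied operation, the integration reports the result back to the model, which can then continue with the next step of the edit.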
The shell tool allows the model to interact with a local computer through a controlled command-line interface. The model proposes shell commands; a developer's integration executes them and returns the outputs. This creates a simple plan-execute loop that lets models inspect the system, run utilities, and gather data until they can finish the task.
To use the shell tool in the Responses API, developers can include it in the tools array with "tools": [{"type": "shell"}]. The API will generate shell_call items that include the shell commands to execute. Developers execute the commands in the local environment and pass back the execution results in a shell_call_output item in the next API request. Learn more in our developer documentation.
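The developer side of this plan-execute loop boils down to running a proposed command and packaging its output. This is a sketch under stated assumptions: the result fields below ("stdout", "stderr", "exit_code") are illustrative names, not the exact shell_call_output schema, which is specified in the developer documentation:

```python
import subprocess

# Hypothetical executor for a model-proposed shell command: run it locally
# with a timeout and return the pieces the model needs to continue the task.
def execute_shell_call(command: list[str]) -> dict:
    result = subprocess.run(command, capture_output=True, text=True, timeout=30)
    return {
        "stdout": result.stdout,
        "stderr": result.stderr,
        "exit_code": result.returncode,
    }

out = execute_shell_call(["echo", "hello"])
print(out["exit_code"], out["stdout"].strip())
```

A real integration should also sandbox or allow-list commands before executing them, since the model is proposing what to run on your machine.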
GPT-5.1 and gpt-5.1-chat-latest are available to developers on all paid tiers in the API. Pricing and rate limits are the same as GPT-5. We're also releasing gpt-5.1-codex and gpt-5.1-codex-mini in the API. While GPT-5.1 excels at most coding tasks, the gpt-5.1-codex models are optimized for long-running, agentic coding tasks in Codex or Codex-like harnesses.
Developers can start building using our GPT-5.1 developer documentation and model prompting guide. We don't currently plan to deprecate GPT-5 in the API and will give developers advance notice if and when we decide to do so.
We're committed to iteratively deploying the most capable, reliable models for real agentic and coding work: models that think efficiently, iterate quickly, and handle complex tasks while keeping developers in flow. With adaptive reasoning, stronger coding performance, clearer user-facing updates, and new tools like apply_patch and shell, GPT-5.1 is designed to help you build with less friction. And we're continuing to invest heavily here: you can expect more capable agentic and coding models in the weeks and months ahead.
| Evaluation | GPT-5.1 (high) | GPT-5 (high) |
| --- | --- | --- |
| SWE-bench Verified | 76.3% | 72.8% |
| GPQA Diamond | 88.1% | 85.7% |
| AIME 2025 | 94.0% | 94.6% |
| FrontierMath | 26.7% | 26.3% |
| MMMU | 85.4% | 84.2% |
| Tau2-bench Airline | 67.0% | 62.6% |
| Tau2-bench Telecom* | 95.6% | 96.7% |
| Tau2-bench Retail | 77.9% | 81.1% |
| BrowseComp Long Context 128k | 90.0% | 90.0% |
* For Tau2-bench Telecom, we gave GPT-5.1 a short, generically helpful prompt to improve its performance.