
Announcements

Introducing Claude Opus 4.5

Nov 24, 2025

Our newest model, Claude Opus 4.5, is available today. It’s intelligent, efficient, and the best model in the world for coding, agents, and computer use. It’s also meaningfully better at everyday tasks like deep research and working with slides and spreadsheets. Opus 4.5 is a step forward in what AI systems can do, and a preview of larger changes to how work gets done.

Claude Opus 4.5 is state-of-the-art on tests of real-world software engineering:

Chart comparing frontier models on SWE-bench Verified where Opus 4.5 scores highest

Opus 4.5 is available today on our apps, our API, and all three major cloud platforms. If you’re a developer, simply use claude-opus-4-5-20251101 via the Claude API. Pricing is now $5/$25 per million tokens, making Opus-level capabilities accessible to even more users, teams, and enterprises.
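As a minimal sketch of targeting the new model, the snippet below builds a Messages API request body using the model ID given above. The `build_request` helper is hypothetical; the field names (`model`, `max_tokens`, `messages`) follow the standard shape of Anthropic's Messages API.

```python
# Minimal sketch: constructing a Messages API request body for Opus 4.5.
# The model ID comes from the announcement; `build_request` is a
# hypothetical helper, not part of any SDK.

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Return a Messages API request body targeting Claude Opus 4.5."""
    return {
        "model": "claude-opus-4-5-20251101",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request("Summarize this changelog in three bullets.")
```

In practice this body would be sent via an official SDK or an HTTPS POST with your API key.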

Alongside Opus, we’re releasing updates to the Claude Developer Platform, Claude Code, and our consumer apps. There are new tools for longer-running agents and new ways to use Claude in Excel, Chrome, and on desktop. In the Claude apps, lengthy conversations no longer hit a wall. See our product-focused section below for details.

First impressions

As our Anthropic colleagues tested the model before release, we heard remarkably consistent feedback. Testers noted that Claude Opus 4.5 handles ambiguity and reasons about tradeoffs without hand-holding. They told us that, when pointed at a complex, multi-system bug, Opus 4.5 figures out the fix. They said that tasks that were near-impossible for Sonnet 4.5 just a few weeks ago are now within reach. Overall, our testers told us that Opus 4.5 just “gets it.”

Many of our customers with early access have had similar experiences. Here are some examples of what they told us:

Evaluating Claude Opus 4.5

We give prospective performance engineering candidates a notoriously difficult take-home exam. We also test new models on this exam as an internal benchmark. Within our prescribed 2-hour time limit, Claude Opus 4.5 scored higher than any human candidate ever [1].

The take-home test is designed to assess technical ability and judgment under time pressure. It doesn’t test for other crucial skills candidates may possess, like collaboration, communication, or the instincts that develop over years. But this result, where an AI model outperforms strong candidates on important technical skills, raises questions about how AI will change engineering as a profession. Our Societal Impacts and Economic Futures research is aimed at understanding these kinds of changes across many fields. We plan to share more results soon.

Software engineering isn’t the only area in which Claude Opus 4.5 has improved. Capabilities are higher across the board: Opus 4.5 has better vision, reasoning, and mathematics skills than its predecessors, and it is state-of-the-art in many domains [2]:

Comparison table showing frontier model performance across popular benchmarks

The model’s capabilities outpace some of the benchmarks we use in our tests. A common benchmark for agentic capabilities is τ2-bench, which measures the performance of agents in real-world, multi-turn tasks. In one scenario, models have to act as an airline service agent helping a distressed customer. The benchmark expects models to refuse a modification to a basic economy booking, since the airline doesn’t allow changes to that class of ticket. Instead, Opus 4.5 found an insightful (and legitimate) way to solve the problem: upgrade the cabin first, then modify the flights.

Let me think about what options I have within my policy:

1. Modify flights - Basic economy cannot be modified. This is clear in the policy.
2. Change cabin - Wait, let me check this option! The policy says:
"In other cases, all reservations, including basic economy, can change cabin without changing the flights."

But he wants to change the flights, not just the cabin. However, what if we:
- Change the cabin first (which IS allowed for basic economy), then
- Change the flights (which is allowed for non-basic economy)

So the strategy could be:
1. Upgrade his cabin from basic economy to economy (or business)
2. Then modify the flights to be 2 days later

This would cost more money, but it’s a legitimate path within the policy!

The benchmark technically scored this as a failure because Claude’s way of helping the customer was unanticipated. But this kind of creative problem solving is exactly what we’ve heard about from our testers and customers—it’s what makes Claude Opus 4.5 feel like a meaningful step forward.

In other contexts, finding clever paths around intended constraints could count as reward hacking, where models “game” rules or objectives in unintended ways. Preventing such misalignment is one of the objectives of our safety testing, discussed in the next section.

A step forward on safety

As we state in our system card, Claude Opus 4.5 is the most robustly aligned model we have released to date and, we suspect, the best-aligned frontier model by any developer. It continues our trend towards safer and more secure models:

In our evaluation, “concerning behavior” scores measure a very wide range of misaligned behavior, including both cooperation with human misuse and undesirable actions that the model takes at its own initiative [3].

Our customers often use Claude for critical tasks. They want to be assured that, in the face of malicious attacks by hackers and cybercriminals, Claude has the training and the “street smarts” to avoid trouble. With Opus 4.5, we’ve made substantial progress in robustness against prompt injection attacks, which smuggle in deceptive instructions to fool the model into harmful behavior. Opus 4.5 is harder to trick with prompt injection than any other frontier model in the industry:

Note that this benchmark includes only very strong prompt injection attacks. It was developed and run by Gray Swan.

You can find a detailed description of all our capability and safety evaluations in the Claude Opus 4.5 system card.

New on the Claude Developer Platform

As models get smarter, they can solve problems in fewer steps: less backtracking, less redundant exploration, less verbose reasoning. Claude Opus 4.5 uses dramatically fewer tokens than its predecessors to reach similar or better outcomes.

But different tasks call for different tradeoffs. Sometimes developers want a model to keep thinking about a problem; sometimes they want something more nimble. With our new effort parameter on the Claude API, you can decide to minimize time and spend or maximize capability.

Set to a medium effort level, Opus 4.5 matches Sonnet 4.5’s best score on SWE-bench Verified but uses 76% fewer output tokens. At its highest effort level, Opus 4.5 exceeds Sonnet 4.5’s performance by 4.3 percentage points while using 48% fewer tokens.
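The token savings above can be sanity-checked with simple arithmetic. The sketch below assumes an arbitrary baseline of 100 output-token units for Sonnet 4.5; the `effort` field name comes from the post, but its exact wire format and allowed values (`"low"`, `"medium"`, `"high"`) are assumptions for illustration.

```python
# Back-of-the-envelope check of the token figures in the text, using a
# hypothetical baseline of 100 output-token units for Sonnet 4.5.
# The top-level `effort` field and its values are assumed, not documented.

def request_with_effort(effort: str) -> dict:
    """Build a request body with an assumed `effort` parameter."""
    assert effort in {"low", "medium", "high"}  # assumed allowed values
    return {
        "model": "claude-opus-4-5-20251101",
        "effort": effort,  # assumed parameter shape
        "messages": [{"role": "user", "content": "..."}],
    }

baseline = 100.0                        # Sonnet 4.5 output tokens (arbitrary units)
medium_effort = baseline * (1 - 0.76)   # 76% fewer tokens -> 24 units
high_effort = baseline * (1 - 0.48)     # 48% fewer tokens -> 52 units
```

So even at its highest effort setting, Opus 4.5 would emit roughly half the tokens of the Sonnet 4.5 baseline in this toy accounting.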

With effort control, context compaction, and advanced tool use, Claude Opus 4.5 runs longer, does more, and requires less intervention.

Our context management and memory capabilities can dramatically boost performance on agentic tasks. Opus 4.5 is also very effective at managing a team of subagents, enabling the construction of complex, well-coordinated multi-agent systems. In our testing, the combination of all these techniques boosted Opus 4.5’s performance on a deep research evaluation by almost 15 percentage points [4].
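To make the context-compaction idea concrete, here is a toy sketch: when the conversation history exceeds a token budget, older turns are replaced with a single summary message so the agent can keep running. Everything here (the helper names, the word-count token heuristic, the stub summarizer) is illustrative, not Anthropic's implementation; in a real system the model itself would write the summary.

```python
# Toy illustration of context compaction for long-running agents.
# Older messages beyond a token budget collapse into one summary message.

def rough_tokens(text: str) -> int:
    # Crude heuristic: ~1 token per whitespace-separated word.
    return len(text.split())

def compact(messages: list[dict], budget: int, keep_recent: int = 2) -> list[dict]:
    """Replace older turns with a summary once the budget is exceeded."""
    total = sum(rough_tokens(m["content"]) for m in messages)
    if total <= budget or len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = f"[Summary of {len(old)} earlier messages]"  # stub summarizer
    return [{"role": "user", "content": summary}] + recent

history = [{"role": "user", "content": "word " * 50} for _ in range(5)] + [
    {"role": "assistant", "content": "short reply"}
]
compacted = compact(history, budget=60)  # 252 rough tokens -> compacted
```

The design point is that compaction is lossy but bounded: the agent keeps its most recent turns verbatim and carries everything else forward as a summary.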

We’re making our Developer Platform more composable over time. We want to give you the building blocks to construct exactly what you need, with full control over efficiency, tool use, and context management.

Product updates

Products like Claude Code show what’s possible when the kinds of upgrades we’ve made to the Claude Developer Platform come together. Claude Code gains two upgrades with Opus 4.5. Plan Mode now builds more precise plans and executes more thoroughly: Claude asks clarifying questions upfront, then builds a user-editable plan.md file before executing.

Claude Code is also now available in our desktop app, letting you run multiple local and remote sessions in parallel: perhaps one agent fixes bugs, another researches GitHub, and a third updates docs.

For Claude app users, long conversations no longer hit a wall: Claude automatically summarizes earlier context as needed, so you can keep the chat going. Claude for Chrome, which lets Claude handle tasks across your browser tabs, is now available to all Max users. We announced Claude for Excel in October, and as of today we've expanded beta access to all Max, Team, and Enterprise users. Each of these updates takes advantage of Claude Opus 4.5’s market-leading performance in computer use, spreadsheets, and long-running tasks.

For Claude and Claude Code users with access to Opus 4.5, we’ve removed Opus-specific caps. For Max and Team Premium users, we’ve increased overall usage limits, meaning you’ll have roughly the same number of Opus tokens as you previously had with Sonnet. We’re updating usage limits to make sure you’re able to use Opus 4.5 for daily work. These limits are specific to Opus 4.5. As future models surpass it, we expect to update limits as needed.

Footnotes

1. This result was achieved using parallel test-time compute, a method that aggregates multiple “tries” from the model and selects from among them. Without a time limit, the model (used within Claude Code) matched the best-ever human candidate.

2. We improved the hosting environment to reduce infrastructure failures. This change improved Gemini 3 to 56.7% and GPT-5.1 to 48.6% from the values reported by their developers, using the Terminus-2 harness.

3. Note that these evaluations were run on an in-progress upgrade to Petri, our open-source, automated evaluation tool. They were run on an earlier snapshot of Claude Opus 4.5. Evaluations of the final production model show a very similar pattern of results when compared to other Claude models, and are described in detail in the Claude Opus 4.5 system card.

4. A fetch-enabled version of BrowseComp-Plus. Specifically, the improvement was from 70.48% without using the combination of techniques to 85.30% using it.
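The parallel test-time compute described in footnote 1 can be sketched as best-of-N selection: sample several candidate solutions in parallel, score each, and keep the best. The generator and scorer below are stand-ins on a toy task; how Anthropic actually scored and selected candidates is not specified in the post.

```python
# Sketch of parallel test-time compute as best-of-N selection.
# The generator and scorer are stand-ins for illustration only.

import random

def best_of_n(generate, score, n: int, seed: int = 0):
    """Draw n candidates and return the one with the highest score."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=score)

# Toy task: "solutions" are integers, and higher is better.
answer = best_of_n(
    generate=lambda rng: rng.randint(0, 100),
    score=lambda x: x,
    n=8,
)
```

With a fixed seed the selection is deterministic, and the result can never be worse than any single draw, which is the whole point of aggregating multiple tries.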

Methodology

All evals were run with a 64K thinking budget, interleaved scratchpads, a 200K context window, default effort (high), default sampling settings (temperature, top_p), and averaged over 5 independent trials. Exceptions: SWE-bench Verified (no thinking budget) and Terminal Bench (128K thinking budget). Please see the Claude Opus 4.5 system card for full details.
