
Claude Platform on AWS

Amazon Web Services now offers the Claude Platform, Anthropic's developer platform for building with Claude models. The integration lets developers run the full native Claude API on AWS's scalable infrastructure, streamlining the development of voice assistants, chatbots, and other conversational applications. The move marks a significant expansion of Claude's reach, making its AI capabilities easier to integrate into a wider range of enterprise and consumer products.

Amazon Web Services customers can now access the complete Claude Platform through AWS, with authentication via IAM, billing through a single AWS invoice, and audit logging via CloudTrail. The offering, announced on May 11, 2026, brings the full set of Claude API features — including Managed Agents, code execution, and the advisor strategy — to AWS for the first time, with all new features and betas shipping the same day they go live on the native Claude API.

What it includes

The Claude Platform on AWS includes several native platform features:

  • Claude Managed Agents (beta) — build and deploy agents at scale
  • Advisor strategy (beta) — gives agents an intelligence boost by consulting an advisor model
  • Web search and web fetch — augment Claude’s knowledge with current, real-world data
  • Code execution — run Python code, create visualizations, and analyze data directly within API calls
  • Files API (beta) — upload and reference documents across conversations
  • Skills (beta) — teach Claude best practices for consistent results
  • MCP connector (beta) — connect Claude to any remote MCP server without writing client code
  • Prompt caching — reduce costs and latency on repeated context
  • Citations — ground responses in source documents
  • Batch processing — high-volume, asynchronous workloads
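As a concrete example of one item above, prompt caching works by marking a large, reusable content block with a `cache_control` field in a Messages API request, so repeated requests reuse the cached prefix. A minimal sketch of building such a request body (the surrounding SDK call is omitted; the field shape follows Anthropic's documented prompt-caching format):

```python
# Sketch: build a Messages API "system" array in which a large, reusable
# context block is marked cacheable via "cache_control". On subsequent
# requests with the same prefix, the cached portion is reused, reducing
# cost and latency.
def build_cached_system(instructions: str, big_context: str) -> list[dict]:
    return [
        {"type": "text", "text": instructions},
        {
            "type": "text",
            "text": big_context,
            # Documented marker for prompt caching on the Claude API.
            "cache_control": {"type": "ephemeral"},
        },
    ]

system = build_cached_system(
    "You are a support assistant.",
    "…many thousands of tokens of product documentation…",
)
print(system[1]["cache_control"])  # {'type': 'ephemeral'}
```

The resulting list can be passed as the `system` parameter of a Messages API call; only the block carrying `cache_control` is eligible for caching.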

Customers also get access to the Claude Console, Anthropic's development environment for building and testing with Claude, which includes a prompt improver, prompt generator, and evaluation tools.

Available models include Claude Opus 4.7, Sonnet 4.6, and Haiku 4.5, with new models shipping on the platform as they launch.

How it differs from Bedrock

Both the Claude Platform on AWS and Claude on Amazon Bedrock enable AWS customers to build on Claude models. The difference is in who operates the service and which features are available.

The Claude Platform on AWS is a first-of-its-kind offering for Anthropic. Anthropic operates the service, and data is processed outside the AWS boundary. This is a good option for companies that want the full Claude Platform experience with day-one access to new features.

Claude on Amazon Bedrock keeps AWS as the data processor and operates within the AWS boundary. This is a good fit for companies that have strict regional data residency requirements or need their data processed exclusively within AWS's infrastructure.

Getting started

The Claude Platform on AWS is available today in most AWS commercial regions and supports global and U.S. inference geographies. To get started, visit the Claude Platform on AWS page or explore the documentation.
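To illustrate what a first request looks like, here is a sketch of a minimal Messages API request body. The model ID is an illustrative placeholder, not taken from the announcement; only the general shape (`model`, `max_tokens`, `messages`) follows the documented Claude API.

```python
import json

# Sketch of a minimal Claude Messages API request body. The model ID is a
# placeholder; consult the offering's documentation for exact identifiers
# and the AWS-side authentication flow.
request_body = {
    "model": "claude-sonnet-4-6",  # placeholder model ID
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize our Q3 incident report."}
    ],
}

# Serialized form, as it would be sent over HTTPS to the API endpoint.
payload = json.dumps(request_body)
print(json.loads(payload)["model"])  # claude-sonnet-4-6
```

With the standard `anthropic` Python SDK, this body maps directly onto `client.messages.create(**request_body)`.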

If you have an existing Bedrock private offer, contact your Anthropic or AWS account executive before getting started to ensure your discounts are applied correctly. Discounts cannot be applied retroactively to usage incurred before a Claude Platform private offer is accepted.

Bottom line

The Claude Platform on AWS gives enterprise teams a way to access the full native Claude API through their existing AWS credentials and IAM policies, with unified billing and usage that draws down existing AWS spend commitments. For teams already on AWS, this removes the need to manage separate API keys or billing relationships while gaining access to features like Managed Agents and code execution that are not available through the Bedrock path.
