News

OpenAI Models Now Available on AWS Bedrock, Offering Enterprises New Avenues for AI Agents and Data Privacy

OpenAI's leading models are now officially accessible on Amazon Web Services' (AWS) Bedrock managed inference and agent platform. The collaboration, unveiled at an AWS event in San Francisco on Tuesday, offers an alternative pathway for enterprises to leverage OpenAI's growing suite of GPT models without directly exposing their data to OpenAI's APIs.
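For readers evaluating the integration, a Bedrock-hosted model is typically called through the `bedrock-runtime` Converse API rather than OpenAI's own endpoints. The sketch below only builds the request payload; the model identifier is a placeholder assumption, since the article does not state the actual ID AWS will assign to OpenAI's models in the preview.

```python
# Sketch of preparing a request for a Bedrock-hosted model via the
# Converse API. MODEL_ID is a hypothetical placeholder; check the
# Bedrock console for the real identifier once the preview opens.
MODEL_ID = "openai.gpt-5-4"  # assumption, not a confirmed ID


def build_converse_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }


request = build_converse_request("Summarize our Q3 incident reports.")

# With AWS credentials configured, the request would be sent like so:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
```

Because the call goes to an AWS endpoint, the enterprise's data stays inside its AWS account boundary rather than traveling to OpenAI's APIs, which is the privacy argument the announcement leans on.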

Amazon notes that enterprises seeking to build agents and AI-augmented tools with OpenAI's models have often been held back by security policies and by data-privacy and sovereignty concerns. Making the models available through a trusted third party like AWS mitigates many of these issues, and it simplifies adoption for customers whose services are already integrated with Bedrock.

Beyond the managed inference service, OpenAI's models will also be available on Amazon's Bedrock Managed Agents and AgentCore platforms, which provide tools and blueprints for building enterprise-grade agents and connecting them to organizational data and services. Concurrently, AWS introduced a range of new agentic AI tools for its end customers, including Quick, a personalized assistant similar to Microsoft Copilot but compatible with applications from multiple vendors, and expanded versions of Connect, originally a hosted contact-center product, which now extends automation into HR, healthcare, and supply-chain management.

Furthermore, enterprises will gain the ability to link OpenAI's Codex code agent to models running within AWS data centers, offering a degree of assurance that their codebases will not be incorporated into OpenAI's future models.

Currently, access to OpenAI's models on AWS remains in limited preview. The LLM maker's second-newest model, GPT-5.4, is available immediately, while the more recent GPT-5.5 is slated for release within the next few weeks, according to AWS CEO Matt Garman's remarks at the San Francisco event on Tuesday.

This announcement fulfills OpenAI's February pledge to make its models available on AWS in exchange for up to $35 billion in new financing. However, to secure the full amount, OpenAI will need to utilize two gigawatts of Amazon's Trainium accelerators.

Significantly, this expansion was largely enabled by Microsoft's willingness to loosen its exclusive relationship with OpenAI, releasing the company from certain revenue-sharing commitments. Under the revised terms, Microsoft remains OpenAI's primary cloud provider and retains access to its model-development technology, but OpenAI is now free to partner with other providers, including Amazon. The new terms suggest the Amazon deal may be less an isolated event than a strategic blueprint for future infrastructure partnerships.