The two companies said OpenAI’s newest models will be available in a limited preview via Amazon Bedrock, alongside offerings from Anthropic, Meta, Mistral, Cohere, and Amazon’s own model family, giving customers access to a range of AI systems on a single platform.
AWS also introduced OpenAI’s AI-powered coding assistant, Codex, on Amazon Bedrock, letting enterprise developers use AI-assisted software development tools directly within AWS environments.
Additionally, a new feature, Amazon Bedrock Managed Agents powered by OpenAI, will enable customers to create production-ready AI agents using OpenAI models within AWS infrastructure.
“These launches provide customers with the choice and flexibility to select the best models for their specific needs, all on the most widely adopted cloud,” AWS stated.
The integration targets enterprises looking for frontier AI capabilities combined with robust security, governance, and operational controls.
According to AWS, OpenAI models on Bedrock will integrate with existing AWS services, including IAM access management, PrivateLink connectivity, encryption, CloudTrail logging, and compliance frameworks.
AWS said customers can access OpenAI models through existing Bedrock APIs without infrastructure changes, consolidating AI workloads and spending within AWS environments.
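As a rough sketch of what "existing Bedrock APIs" means in practice: Bedrock's Converse API uses one request shape for every model on the platform, so switching providers is largely a matter of changing the model identifier. The snippet below builds such a request; the model ID shown is an assumption for illustration, since AWS has not published final identifiers for this preview.

```python
import json

# Assumed model ID for illustration only -- the preview's real
# identifiers may differ.
MODEL_ID = "openai.gpt-5"

def build_converse_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build keyword arguments for Bedrock's Converse API.

    The same request structure works across Bedrock-hosted models,
    which is what lets customers swap providers without changing
    their surrounding infrastructure.
    """
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# With AWS credentials configured, the request would be sent via boto3:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**build_converse_request("Hello"))
#   print(response["output"]["message"]["content"][0]["text"])

print(json.dumps(build_converse_request("Summarize this release note.")))
```

The boto3 call is shown in comments because it requires live AWS credentials and preview access; only the request construction runs as-is.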
OpenAI CEO Sam Altman, in a recorded message, said enterprises require infrastructure that is secure, scalable, and compatible with their existing systems.
“These systems must operate reliably and robustly. They need to be secure, scalable, and compatible with the environments where companies already conduct their business,” Altman remarked.
AWS said Codex is already widely used for software development tasks such as code generation, debugging, and testing, and will now be accessible through AWS tools, including APIs, the CLI, and Visual Studio Code integrations.
AWS added that the Bedrock Managed Agents are designed to support multi-step AI workflows with built-in identity, logging, and governance controls for enterprise-scale deployment.
Enterprise software firm Box said the integration would help organizations deploy AI agents while maintaining governance, auditability, and operational control.
This partnership comes as OpenAI expands its multi-cloud infrastructure efforts beyond Microsoft, which remains a critical partner.
Prior to this announcement, Microsoft and OpenAI had renegotiated their long-standing partnership on April 28, reducing exclusivity restrictions and allowing OpenAI to engage more freely with competing cloud providers, including Amazon, while maintaining Microsoft as its principal cloud provider until 2032.
The revised framework gives OpenAI more flexibility to broaden its enterprise presence while enabling cloud providers to compete more directly for access to its models.
AWS noted that the new offerings are currently in limited preview, with a broader rollout anticipated following enterprise testing and feedback.