Rackspace Technology and AMD have signed a Memorandum of Understanding (MOU) to develop a managed enterprise AI cloud specifically for regulated industries and sovereign workloads. The partnership aims to create a new category of managed AI infrastructure that integrates AMD's Instinct GPUs and EPYC CPUs into a fully managed, compliant operational model controlled by Rackspace.
Overview
The dominant model today requires enterprises to rent GPU capacity by the hour and to take on integration, security, and accountability themselves. Rackspace and AMD propose to invert this model. Under the planned framework, dedicated AMD compute power would be embedded into a regulated, managed operating model in which Rackspace controls the entire stack — from chip to outcome.
The four planned service tiers
The collaboration outlines four integrated functions that together form a comprehensive stack:
Enterprise AI Cloud: A fully managed, private and hybrid AI environment built on AMD Instinct GPUs, AMD EPYC CPUs, and Rackspace's managed operations model. Rackspace would assemble, integrate, and operate the entire stack for enterprises requiring sovereignty, compliance, and operational accountability.
Enterprise Inference Engine: A context-aware inference runtime that maintains domain knowledge, session history, and enterprise-specific data contexts across queries. This allows AI agents and large language models to operate with the consistency and institutional memory that production environments require. Rackspace would take responsibility for SLAs covering availability, scalability, and performance.
Inference as a Service: Dedicated, managed AMD Instinct GPUs with developer-friendly toolkits for inference and fine-tuning, offered as a managed, compliance-focused alternative to hourly rental of commodity GPU capacity. The customer brings their own model and development team; Rackspace provides reliable bare-metal capacity with operational discipline, hardware-level support, and performance SLOs.
Bare Metal AMD Instinct: A proposed dedicated, high-performance AMD Instinct compute platform on bare metal for customers needing physical isolation, deterministic performance, and direct hardware access for demanding training and inference workloads.
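The "context-aware inference runtime" in the second tier is described only at a high level. As a rough illustration of the idea — retaining domain knowledge and session history so each model call sees the same institutional memory — here is a minimal sketch. All names in it (InferenceSession, run_model) are hypothetical and are not part of any announced Rackspace or AMD API:

```python
# Hypothetical sketch: a session object that keeps enterprise/domain
# context plus per-session history, and assembles both into the prompt
# for every inference call, so answers stay consistent across queries.
from dataclasses import dataclass, field


@dataclass
class InferenceSession:
    domain_context: str  # static enterprise-specific knowledge
    history: list[tuple[str, str]] = field(default_factory=list)  # (query, answer) turns

    def build_prompt(self, query: str) -> str:
        # Prepend domain context and all prior turns to the new query.
        turns = "\n".join(f"Q: {q}\nA: {a}" for q, a in self.history)
        return f"{self.domain_context}\n{turns}\nQ: {query}\nA:"

    def ask(self, query: str, run_model) -> str:
        answer = run_model(self.build_prompt(query))
        self.history.append((query, answer))  # persist the turn for later calls
        return answer


# Stand-in for a managed inference endpoint; a real deployment would
# call a model served on the GPU fleet instead.
def run_model(prompt: str) -> str:
    return f"[answer grounded in {prompt.count('Q:')} question(s) of context]"


session = InferenceSession(domain_context="Policy: EU data residency applies.")
session.ask("Where is customer data stored?", run_model)
print(session.ask("Does that apply to backups?", run_model))
```

The design point the sketch makes is simply that state lives in the runtime, not in the caller: the second query is answered with the first exchange already in context.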
Target audience and use case
The partnership targets regulated enterprises and sovereign workloads where security, governance, and accountability are non-negotiable. Gajen Kandiah, CEO of Rackspace Technology, stated that controlling AI infrastructure in regulated environments with clearly defined responsibilities must be integrated from the start, not added later. Dan McNamara from AMD noted that enterprise AI is moving from experimentation to production readiness, requiring compute infrastructure designed for performance and efficiency at scale.
Status and caveats
The MOU establishes a framework for a potential multi-year strategic partnership. No definitive agreements have been signed, and discussions remain at an early stage. The companies caution that there is no guarantee that final agreements will be reached, that the parties will agree on terms, or that the expected benefits will be realized. Any transactions would also depend on debt financing being available on acceptable terms.
Bottom line
Rackspace and AMD are proposing a managed, regulated alternative to the current hourly GPU rental model for enterprise AI. The four-tier stack — from bare metal to fully managed cloud — is designed for organizations that cannot compromise on compliance or operational accountability. Whether this framework becomes a shipping product depends on the outcome of ongoing negotiations.