
Rackspace Technology and AMD Sign Memorandum of Understanding to Establish New Category of Governed Enterprise AI Infrastructure

Rackspace Technology and AMD are collaborating on a new category of governed enterprise AI infrastructure, pairing AMD Instinct GPUs and EPYC CPUs with Rackspace's managed cloud services to provide a secure, scalable, and compliant foundation for large-scale AI workloads. The partnership aims to close the gap between AI innovation and enterprise-grade governance; for now, though, the agreement is a non-binding framework rather than a definitive commercial commitment.

Rackspace Technology and AMD have signed a Memorandum of Understanding (MOU) to create a new category of managed enterprise AI infrastructure. The core idea: instead of renting generic GPU capacity by the hour and handling integration, security, and accountability yourself, you get a fully managed, governed stack built on dedicated AMD Instinct GPUs and EPYC CPUs. Rackspace would own the entire stack, from silicon to outcomes, with defined SLAs and accountability.

What it is

The proposed offering is called the Enterprise AI Cloud. It is a fully managed, private and hybrid AI environment where Rackspace assembles, integrates, and operates the full stack — accelerated compute, AI inference, and agents in production — for enterprises that require sovereignty, compliance, and operational accountability. The MOU framework includes four integrated capabilities:

  • Enterprise AI Cloud: A private/hybrid environment built on AMD Instinct GPUs and EPYC CPUs, operated under Rackspace's governed model.
  • Enterprise Inference Engine: A context-aware inference runtime that retains domain knowledge, session history, and enterprise-specific data context across queries. Rackspace would own the SLA for availability, scaling, and performance.
  • Inference as a Service: Dedicated, managed AMD Instinct GPUs with developer-ready inferencing and fine-tuning toolkits. The customer brings their own model and engineering team; Rackspace provides reliable bare metal capacity with operational discipline and performance SLOs.
  • Bare Metal AMD Instinct: Dedicated, high-performance bare metal AMD Instinct compute for customers requiring physical isolation, deterministic performance, and direct hardware access for demanding training and inference workloads.

Why it matters

Today's dominant model requires enterprises to rent GPU capacity by the hour and carry the operational burden themselves — integration, security, and accountability. This collaboration proposes to invert that model. The aim is to give enterprises a single operator accountable for every layer, calibrated to the sovereignty, performance, and compliance requirements of each workload.

Caveats

The MOU is a framework for potential collaboration, not a binding commitment. No definitive agreements have been reached, and discussions remain preliminary. There is no assurance that any such arrangements will be entered into or that the anticipated benefits will be realized. Any third-party financing required is subject to availability on acceptable terms.

Bottom line

Rackspace and AMD are proposing a governed alternative to commodity GPU rental for regulated enterprises and sovereign workloads. If executed, it would give enterprises a single operator accountable for the full AI stack, from silicon to outcomes, with defined SLAs and compliance built in from the start. The key question is whether the MOU will translate into a binding commercial agreement.

Similar Articles



5G NTN Market worth $45.55 billion by 2031 | MarketsandMarkets™

The 5G non-terrestrial network (NTN) market is forecast to reach $45.55 billion by 2031, growing at a 30.8% compound annual growth rate, as NTNs integrate satellite connectivity into the 5G ecosystem, using low-Earth orbit (LEO) satellites alongside terrestrial networks to expand global coverage and enable new use cases. This convergence is expected to accelerate the adoption of satellite-based services, including IoT, remote healthcare, and emergency response.


LALAL.AI Wins People's Voice Webby Award for Best Use of AI & Machine Learning

LALAL.AI's AI-powered audio separation platform has taken the top public honor at the Webby Awards, outpacing established competitors such as Adobe and Captions AI with its use of deep learning models to split complex audio signals into distinct tracks. The victory marks a significant milestone for the Zug, Switzerland-based startup, which has been gaining traction in the music and audio post-production industries.


Choice Hotels International Unveils New Technologies and AI-Powered Solutions to Help Owners Capture More Demand and Operational Excellence

Choice Hotels is embedding AI agents directly into its franchise technology backbone, rolling out six new tools, including the AgentCore inference engine and the AgentForce task-automation layer, that let owners automate group-rate negotiation, housekeeping dispatch, and front-desk workflows in real time. By building generative decision logic into its existing property-management stack, the chain aims to cut overhead and strengthen franchise loyalty ahead of slower, bolt-on AI efforts from rivals such as Marriott and Hilton.


Integris Report Finds Law Firms Falling Behind Client Expectations on Technology, Security and AI Transparency

Corporate legal clients now expect AI-driven contract analysis, zero-trust security audits, and real-time data provenance, yet 68% of Am Law 200 firms still rely on on-premises document management systems and manual privilege logs, according to new Integris benchmarking. The gap puts $4.2B in annual client tech stipends at risk as in-house teams bypass outside counsel in favor of cloud-native alternative legal service providers.


ASUS to Showcase Next-Generation NVIDIA-Powered AI Infrastructure at AI+ Expo 2026

At AI+ Expo 2026, ASUS will unveil a high-performance AI infrastructure stack built on NVIDIA H100 Tensor Core GPUs, leveraging their 80GB of HBM memory to accelerate large-scale AI workloads. The showcase will highlight the role of PCIe 5.0 and NVLink interconnects in driving AI compute efficiency, with demonstrations centered at Booth #849.


Nexar's Real-World AI Platform Wins 2026 AI TechAward for Machine Learning Innovation

Nexar's Vera platform claimed the 2026 AI TechAward by turning 10 billion miles of dash-cam telemetry into BADAS 2.0, a vision transformer that the company says outscores Waymo's perception stack on edge latency and false-positive rates, without lidar. The win signals that real-world, camera-first AI is becoming a benchmark for mass-market autonomy.