
Rackspace Technology and AMD Sign Memorandum of Understanding to Create a New Category of Managed Enterprise AI Infrastructure

A partnership between Rackspace Technology and AMD aims to create a new category of managed enterprise AI infrastructure: a cloud platform for business-critical workloads built on AMD's EPYC processors and operated through Rackspace's managed services. The planned collaboration would combine Rackspace's managed cloud offerings with AMD's high-performance computing capabilities to deliver scalable, secure, fully managed AI infrastructure for enterprise deployment.

Rackspace Technology and AMD have signed a Memorandum of Understanding (MOU) to develop a managed enterprise AI cloud specifically for regulated industries and sovereign workloads. The partnership aims to create a new category of managed AI infrastructure that integrates AMD's Instinct GPUs and EPYC CPUs into a fully managed, compliant operational model controlled by Rackspace.

Overview

The current dominant model requires enterprises to rent GPU capacity by the hour and handle operational tasks such as integration, security, and accountability themselves. Rackspace and AMD propose to invert this model. Under the planned framework, dedicated AMD compute power would be embedded into a regulated, managed operating model where Rackspace controls the entire stack — from chip to outcome.

The four planned service tiers

The collaboration outlines four integrated functions that together form a comprehensive stack:

  1. Enterprise AI Cloud: A fully managed, private and hybrid AI environment built on AMD Instinct GPUs, AMD EPYC CPUs, and Rackspace's managed operations model. Rackspace would assemble, integrate, and operate the entire stack for enterprises requiring sovereignty, compliance, and operational accountability.

  2. Enterprise Inference Engine: A context-aware inference runtime that maintains domain knowledge, session history, and enterprise-specific data contexts across queries. This allows AI agents and large language models to operate with the consistency and institutional memory that production environments require. Rackspace would take responsibility for SLAs covering availability, scalability, and performance.

  3. Inference as a Service: Dedicated, managed AMD Instinct GPUs with developer-friendly toolkits for inference and fine-tuning, offered as a regulated alternative to renting standard GPUs. The customer brings their own model and development team; Rackspace provides reliable bare-metal capacity with operational discipline, hardware-level support, and performance SLOs.

  4. Bare Metal AMD Instinct: A proposed dedicated, high-performance AMD Instinct compute platform on bare metal for customers needing physical isolation, deterministic performance, and direct hardware access for demanding training and inference workloads.

Target audience and use case

The partnership targets regulated enterprises and sovereign workloads where security, governance, and accountability are non-negotiable. Gajen Kandiah, CEO of Rackspace Technology, stated that controlling AI infrastructure in regulated environments with clearly defined responsibilities must be integrated from the start, not added later. Dan McNamara from AMD noted that enterprise AI is moving from experimentation to production readiness, requiring compute infrastructure designed for performance and efficiency at scale.

Status and caveats

The MOU establishes a framework for a potential multi-year strategic partnership. No definitive agreements have been signed, and the discussions remain in early stages. The companies caution that there is no guarantee that final agreements will be reached, that the parties will agree on terms, or that the expected benefits will be realized. Any debt financing needed to implement the transactions depends on availability of financing on acceptable terms.

Bottom line

Rackspace and AMD are proposing a managed, regulated alternative to the current hourly GPU rental model for enterprise AI. The four-tier stack — from bare metal to fully managed cloud — is designed for organizations that cannot compromise on compliance or operational accountability. Whether this framework becomes a shipping product depends on the outcome of ongoing negotiations.
