Zyphra Cloud, a full-stack neocloud powered by AMD, has announced the availability of 15 megawatts of AMD Instinct MI355X GPU capacity. This expansion enables customers to run advanced AI workloads across pretraining, large-scale reinforcement learning, post-training, and agentic inference.
Overview
Zyphra Cloud is designed for AI-native startups, enterprises, and frontier AI hyperscalers. The platform offers two primary deployment models: on-demand bare-metal GPU clusters for flexible workloads and custom hyperscale AMD infrastructure for large-scale training and inference deployments.
What it does
The 15 megawatts of MI355X GPU capacity is aimed at compute-intensive AI workloads and applications. Zyphra Cloud pairs its AI software stack with custom AMD infrastructure to deliver production AI systems, letting customers deploy and scale AI workloads on AMD with speed and reliability.
Zyphra Research has developed next-generation inference algorithms for production environments and has released the ZAYA1-8B and ZAYA1-74B-Preview models, pretrained end-to-end on AMD Instinct MI300X infrastructure. Zyphra Cloud also powers agentic inference for large-scale open models such as DeepSeek and Kimi on AMD Instinct MI355X GPUs through serverless endpoints.
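Serverless endpoints for large open models are commonly exposed as OpenAI-compatible chat completion APIs. The sketch below shows what calling such an endpoint might look like; the URL, model identifier, and API key are illustrative placeholders, not documented Zyphra Cloud values.

```python
import json
import urllib.request

# Hypothetical endpoint and model id; substitute the values provided by
# your cloud account. These are illustrative placeholders only.
ENDPOINT = "https://api.example.com/v1/chat/completions"
MODEL = "deepseek-v3"


def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def send(payload: dict, api_key: str) -> dict:
    """POST the payload to the serverless endpoint and return the JSON reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = build_chat_request("Summarize the MI355X announcement.")
    print(payload["model"])
```

With an OpenAI-compatible endpoint, only the base URL and model name need to change to switch between hosted models.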
Tradeoffs
The expansion of Zyphra Cloud with AMD Instinct MI355X GPU capacity underscores the growing demand for specialized cloud services. Pricing and availability for reserved capacity and enterprise infrastructure have not been publicly specified; customers can contact sales at zyphra.com/contact-sales for details.
Zyphra plans to expand support to next-generation AMD platforms, including the MI450 series and beyond, as part of its roadmap to scale software-driven infrastructure across future AMD architectures.
In practical terms, the 15 megawatts of MI355X capacity lets customers deploy and scale AI workloads on AMD more quickly and reliably. This is particularly relevant for AI-native startups, enterprises, and frontier AI hyperscalers that run advanced AI workloads and require customized infrastructure.