
Arm CPUs Get Major AI Boost Through Nvidia NVLink Partnership

By Jolyen

Nov 18, 2025

Arm said Monday that future CPUs built on its Neoverse platform will be able to connect directly with Nvidia GPUs through NVLink Fusion, a move designed to make custom data-center chips easier to pair with Nvidia’s dominant AI accelerators.

The announcement gives hyperscalers — companies like Amazon, Microsoft, and Google that increasingly design their own silicon — a more flexible path to integrate Arm-based CPUs with Nvidia hardware without relying on Nvidia’s own processors.

The partnership also shows Nvidia continuing its strategy of linking itself to nearly every major player in the AI supply chain. Rather than locking customers into its Grace CPU, Nvidia is opening up NVLink so custom Arm designs can slot into GPU-heavy AI servers.

Why this matters

The CPU has historically anchored server architecture. In AI systems, the GPU is now the center of gravity — often eight or more GPUs per machine — with the CPU acting as the high-bandwidth coordinator. Faster and more direct chip-to-chip communication is now critical for model training and inference at scale.

NVLink Fusion gives Arm Neoverse CPUs a protocol for moving data efficiently to and from Nvidia GPUs, a pairing that previously required Nvidia’s own Grace processors or x86 chips from Intel and AMD.
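
Programming details for NVLink Fusion have not been published, so as a rough illustration of why this link matters, the short CUDA sketch below simply times a one-gigabyte copy from CPU memory to a GPU. It is not taken from the announcement; it only shows the kind of CPU-to-GPU transfer that a wider, NVLink-class connection is designed to speed up.

// Illustrative only: times a host-to-device copy to show why CPU-GPU
// interconnect bandwidth matters for AI workloads. Buffer size and the use of
// pinned memory are arbitrary choices for this sketch; NVLink Fusion
// specifics are not public beyond the announcement.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1ULL << 30;  // 1 GiB payload, e.g. a slice of model weights

    // Pinned (page-locked) host memory gives the fastest copy path to the GPU.
    float *host_buf = nullptr, *dev_buf = nullptr;
    cudaMallocHost((void**)&host_buf, bytes);
    cudaMalloc((void**)&dev_buf, bytes);

    // Time the CPU -> GPU transfer with CUDA events.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dev_buf, host_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Copied 1 GiB in %.2f ms (%.1f GB/s)\n", ms, (bytes / 1e9) / (ms / 1e3));

    // A faster CPU-GPU link shows up directly in this number.
    cudaFree(dev_buf);
    cudaFreeHost(host_buf);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}

On a PCIe-attached GPU, the reported bandwidth is bounded by the PCIe link; NVLink-class CPU-to-GPU connections advertise substantially higher bandwidth, which is the gap the Neoverse integration targets.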

What hyperscalers gain

Major cloud providers have been building or deploying Arm chips for years:

  • Amazon with Graviton
  • Google with Axion
  • Microsoft with Cobalt

Each wants to reduce costs and tailor performance for AI workloads. Being able to pair their in-house Arm CPUs directly with Nvidia GPUs removes a friction point and strengthens their ability to run custom infrastructure in their own data centers.

Context behind the partnership

Nvidia attempted to acquire Arm for $40 billion in 2020, but abandoned the deal in 2022 after regulators in the U.S. and U.K. moved to block it. The two companies have remained commercially aligned, with Nvidia investing $5 billion in Intel this year in part to ensure Intel CPUs can also integrate via NVLink.

Arm, majority-owned by SoftBank, licenses both its instruction set and its CPU blueprints. The new NVLink-ready protocol will be built into upcoming Neoverse designs so partners can adopt it without having to engineer the connection layer themselves.

Meanwhile, SoftBank recently sold its Nvidia stake and is now backing OpenAI’s Stargate compute project — which plans to use a mix of Arm, Nvidia, and AMD chips.

Big picture

Nvidia wants NVLink to be the connective tissue of next-generation AI servers. Arm wants hyperscalers to keep adopting its architecture across the data center. Monday’s deal gives both companies what they need as AI infrastructure becomes increasingly specialized and the CPU–GPU link becomes one of the most important bottlenecks to solve.


Featured image credits: Freepik

Jolyen

As a news editor, I bring stories to life through clear, impactful, and authentic writing. I believe every brand has something worth sharing. My job is to make sure it’s heard. With an eye for detail and a heart for storytelling, I shape messages that truly connect.
