- Meta partnered with Nvidia on AI data centers with GPUs and CPUs.
- The deal may reduce Meta’s reliance on other suppliers, such as Google and AMD.
- Nvidia’s role expands as demand for AI infrastructure remains robust across industries.
Meta is doubling down on its relationship with Nvidia in what the AI chip giant called a “multigenerational” deal.
The agreement, announced Tuesday, calls for Meta to build data centers powered by millions of Nvidia’s current and next-generation chips for AI training and inference.
The move underscores how Meta is deepening its reliance on Nvidia, even as the social networking giant develops its own in-house chips and works with competing suppliers like AMD. Reports also suggested Meta has explored using TPUs — chips designed by its rival, Google.
The Nvidia deal could cool speculation around Meta’s purported TPU talks, said Patrick Moorhead, chief analyst at Moor Insights & Strategy — though Big Tech companies often test several suppliers at the same time.
The deal arrives amid increased competition in AI infrastructure. While Nvidia leads the market, rivals including Google, AMD, and Broadcom are working to chip away at its dominance.
Crucially, the partnership will see Meta deploy not only Nvidia’s GPUs, but also CPUs.
CPUs, a market long dominated by Intel and AMD, are the central processors that work alongside GPUs inside data centers. They handle general computing tasks and are core to essentially all modern computing systems, whereas GPUs are built for massively parallel workloads, such as AI training and graphics rendering in gaming. By supplying both, Nvidia stands to capture even more of Meta’s spending and deepen its role within Meta’s AI stack.
While that increases competitive pressure, Moorhead said demand for AI infrastructure has grown so high that Nvidia’s rivals are unlikely to see outright declines in the near term.
Nvidia has been making its CPU ambitions more explicit, Moorhead said, including marketing its forthcoming Vera CPU as a stand-alone product. That emphasis reflects the larger role CPUs are playing as AI workloads shift from model training toward inference.
“CPUs tend to be cheaper and a bit more power-efficient for inference,” said Rob Enderle, principal analyst at Enderle Group.
Both Moorhead and Enderle said that Meta’s decision to source both GPUs and CPUs from a single vendor can also reduce complexity, with chief information officers often favoring a “one-throat-to-choke” approach to problem resolution.
In addition to GPUs and CPUs, Meta will use Nvidia’s networking equipment inside data centers as part of the deal, as well as its confidential computing technology to run AI features within WhatsApp.
The companies will also work together to deploy Nvidia’s next-generation Vera CPUs beyond the current Grace CPU model, Nvidia said.
Have a tip? Contact this reporter via email at gweiss@businessinsider.com or Signal at @geoffweiss.25. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.