Financial pressures of partnering with Nvidia for semiconductors capable of running hyperscale workloads have forced Google, Amazon, Meta and Microsoft into the processor market.
Shortages in the availability of semiconductors capable of running generative AI models in hyperscale computing scenarios are forcing cloud service providers like Amazon and Google to build their own in-house technology to meet growing processor demands, new research has found.
That, combined with the cost pressures of working with Nvidia, which has emerged as the market leader in GPUs well-suited to AI-specific workloads, has pushed the top tech giants to find new solutions for their processing needs, according to analysis released Monday by analyst firm Global Data.
“There is a significant imbalance between supply and demand when it comes to GPU processors,” noted Beatriz Valle, senior enterprise technology and services analyst at Global Data, in a press release.
That’s because genAI models in general, and particularly multimodal systems that produce images and videos, greatly benefit from the parallel processing capabilities of GPUs, she noted.
Proprietary AI chips
To counteract this imbalance, genAI companies – including not only Google and Amazon but also Meta and Microsoft – are charging to market with significant investments in their own proprietary chip technologies to run genAI-driven workloads, Valle said.
Google, first to market with its tensor processing unit (TPU) in 2015, continues to steam ahead with plans to develop specialized chips meant to accelerate machine learning applications, Valle told Network World. Meanwhile, Amazon – which made its own move into custom silicon in 2018 in response to Google's plans – is developing its Inferentia and Trainium architectures to carry genAI workloads.
Meta, parent company of Facebook and Instagram, is also now in the AI-focused processor game. The company recently unveiled the next generation of its custom-made chips, built to help power AI-driven ranking and recommendation of ads on its social media platforms.
Microsoft lags behind but makes significant investment
Microsoft’s move into the microprocessor space is perhaps the most telling sign of the shift, Valle told Network World. The company lagged behind its competitors in building custom silicon but has since unveiled a processor strategy to match its genAI investments, Valle said.
Indeed, Microsoft is heavily invested in genAI, having made its Copilot chatbot a cornerstone of its strategy and then integrated it into all of its main applications, she noted.
The Azure Maia 100 and Cobalt 100 chips – the first two custom silicon chips Microsoft has designed for its cloud infrastructure – are designed specifically with AI workloads in mind, she said. Microsoft plans to use the Cobalt 100 Arm-based server CPU for general-purpose tasks and the Maia 100 AI accelerator in Azure data centers to support services including OpenAI and Copilot.
“For Microsoft to take this step, it means that the company is investing a lot of money in innovation to drive its ambitious AI plans,” Valle noted.
Changes ahead in the chip landscape
Overall, the investments by the four companies “are evidence that the market is extremely competitive and that hyperscalers are actively investing in new architectures to drive their ambitious AI plans,” Valle said. As it’s early in the game, there is still time for each of them to “establish an early competitive advantage,” she added.
These moves also are “bound to challenge Nvidia’s dominant position” in the genAI-focused chip market – but not right away, she added. The forthcoming “AI chip race” will follow a path similar to the race to deliver large language models, and some of those companies – including OpenAI – also plan to join the fray.
A myriad of startups – including Cerebras, Groq, Mythic, Graphcore, Cambricon, and Horizon Robotics – are also focused on “creating custom AI chips that operate faster, consume less power, and can be optimized for training neural nets and making inferences more effectively,” Valle noted.
These players will all put pressure on Nvidia and more traditional chipmakers, and will drive “unprecedented growth” in the semiconductor space going forward, though each player – traditional and newcomer alike – will face its own set of challenges, she said.
While genAI-specific chip technology will “offer competitive alternatives to the GPUs offered by Nvidia and AMD,” startups face several challenges in disrupting the strong market position and brand recognition of major players like Nvidia, AMD, and Intel, Valle said.
“Additionally, major players often have extensive resources and established relationships with customers,” and startups may face other competitive barriers in terms of manufacturing capabilities and access to capital, she added.