Ampere's next-generation processors will have up to 512 cores and are designed to fit into existing air-cooled data centers.

Arm server chip upstart Ampere Computing took the wraps off its next generation of products, which will reach 512 cores in the next few years and offer an integrated AI processing unit. But don't plan on placing an order just yet: the next-generation processor, dubbed Aurora, is not due until 2026. For now, the AmpereOne currently shipping has 192 cores, with the 256-core AmpereOne MX due next year.

Ampere's AI processor will also be part of Aurora. "For the first time, using our own Ampere AI IP that we integrate directly into the SOC via our interconnect [and] also high bandwidth memory attached to this platform, we are really addressing those key critical AI use cases, starting with inference, but also scaling into training as well," said Jeff Wittich, chief product officer at Ampere Computing, on a conference call with the press.

Aurora also comes with a scalable AmpereOne Mesh, which the company claims allows the seamless connection of all types of compute, along with a distributed coherence engine that supports coherency across all nodes. Ampere claims Aurora will deliver three times the performance per rack of its current flagship AmpereOne processors.

Aurora offers powerful AI compute capabilities for workloads like retrieval-augmented generation (RAG) and vector databases, but Wittich said it will support all types of enterprise applications, not just cloud workloads. "So it is easy to deploy everywhere, not just hyperscalers," he said.

Wittich also noted that Aurora can be air-cooled, making it deployable in any existing data center without a retrofit for liquid cooling. He emphasized the AmpereOne product line's power efficiency, saying it is a better fit for the power budgets of existing data centers.
"The fact is that 77% of all data centers in the world had a maximum per rack power draw of less than 20 kilowatts, and more than half the racks out there are less than 10 kilowatts. So that means that those really big solutions, like the Nvidia DGX box, can't even go into over half of today's data centers," he said. "So, the vast majority of data centers really need efficient solutions that fit into their existing air-cooled environments. Otherwise, AI is only going to work in a few geographies with a few companies," Wittich said.