New Epyc chip to offer scale-up performance or scale-out capacity for a wide range of data center workloads.

AMD announced its latest AI and high-performance computing processors at its Advancing AI event in San Francisco, including the fifth generation of its Epyc server processors and the AMD Instinct MI325X accelerators. Commitments from leading customers and partners, including Microsoft, OpenAI, Meta, Oracle, and Google Cloud, rounded out the event.

The fifth-generation Epyc CPUs come in two distinct configurations, both of which are part of the same 9005 family, also known as Turin. The scale-up version uses the new “Zen 5” core architecture, which is optimized for maximum performance, according to AMD. The scale-out models come with dense “Zen 5c” compact cores, a concept AMD first introduced last year with the Zen 4c-based Bergamo line. Intel has a similar strategy with its Performance cores and Efficiency cores, but it creates its E-cores by removing instructions, which can risk breaking applications. AMD created its compact cores by limiting cache size and clock speed while keeping the full instruction set.

The scale-up CPUs come with up to 128 cores and 256 threads and are made on TSMC’s 4nm process. The scale-out Zen 5c version will offer up to 192 cores and 384 threads and is made on TSMC’s 3nm process. The Zen 5-based processors have a maximum power draw of 500 watts, while the Zen 5c parts draw 390 watts.

The new chips come with big claims. AMD says its flagship 192-core Epyc 9965 is 2.7 times faster than Intel’s competing Xeon, the Platinum 8592+. AMD also claims four times faster video transcoding, 3.9 times faster performance in HPC applications, and up to 1.6 times the performance per core in virtualized environments.

Instinct takes on Nvidia

AMD is fighting a war on two fronts, with Intel and with Nvidia. It is looking to take on Nvidia in the AI accelerator space with Instinct. While it has a fraction of Nvidia’s business, AMD does have some wins, such as the world’s fastest supercomputer, Frontier.

To that end, AMD has launched the MI325X with greater memory capacity and bandwidth than the Instinct MI300X, which launched last December. The MI325X is based on the same CDNA 3 GPU architecture as the MI300X, but it raises both memory capacity and bandwidth over the MI300X’s 192GB of HBM3 high-bandwidth memory and 5.3 TB/s of memory bandwidth.

On AI inference, AMD said the MI325X provides 40% faster throughput than Nvidia’s top-of-the-line Hopper H200 with the Mixtral 8x7B mixture-of-experts model (eight experts of 7 billion parameters each), 30% lower latency with a 7-billion-parameter Mixtral model, and 20% lower latency with a 70-billion-parameter Llama 3.1 model.

AMD is planning an eight-GPU platform for next year, similar to Nvidia’s DGX Pods. With eight MI325X GPUs connected over AMD’s Infinity Fabric, the platform will offer 2TB of HBM3e memory, 48 TB/s of total memory bandwidth, 20.8 petaflops of FP8 performance, and 10.4 petaflops of FP16 performance, AMD said.

The MI325X will begin shipping in systems from Dell Technologies, Lenovo, Supermicro, Hewlett Packard Enterprise, Gigabyte, and several other server vendors starting in the first quarter of next year, the company said.
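As a rough sanity check of those eight-GPU platform totals, the sketch below simply divides AMD’s quoted figures by eight to show the per-MI325X numbers they imply. The per-GPU values are derived here for illustration, not quoted from an AMD spec sheet.

```python
# Back-of-the-envelope check: divide AMD's quoted eight-GPU platform totals
# by eight to see the per-MI325X figures they imply. These are derived
# numbers for illustration, not official per-GPU specifications.

PLATFORM_GPUS = 8

platform_totals = {
    "HBM3e memory (GB)": 2 * 1024,       # 2TB total
    "memory bandwidth (TB/s)": 48.0,
    "FP8 compute (petaflops)": 20.8,
    "FP16 compute (petaflops)": 10.4,
}

for metric, total in platform_totals.items():
    per_gpu = total / PLATFORM_GPUS
    print(f"{metric}: {total} total -> {per_gpu:g} per GPU")
```

Run as-is, the division works out to 256GB of memory, 6 TB/s of bandwidth, 2.6 petaflops of FP8, and 1.3 petaflops of FP16 per GPU, consistent with the MI325X offering more capacity and bandwidth than the MI300X’s 192GB and 5.3 TB/s.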
Read more processor news:

Enfabrica looks to accelerate GPU communication: Enfabrica’s Accelerated Compute Fabric SuperNIC (ACF-S) silicon is designed to deliver higher bandwidth, greater resiliency, lower latency and greater programmatic control to data center operators running data-intensive AI and HPC.

Nvidia claims efficiency gains of up to 100,000X: However, the chipmaker’s dramatic claim for the performance gains of its GPUs is over a 10-year span, and only applies to one type of calculation.

Intel launches Xeon 6 processors and Gaudi 3 AI accelerators: Intel has formally launched its next Xeon 6 server processors as well as the Gaudi 3 AI accelerators, making some pretty big boasts in the process.

Inflection AI shifts to Intel Gaudi 3, challenging Nvidia’s AI chip lead: The announcement follows IBM’s recent partnership with Intel, signaling a rising interest in Intel’s AI hardware.

Intel’s Altera spinout launches FPGA products, software: Altera CEO Sandra Rivera shares ‘big, audacious, ambitious goal’ to dominate FPGA market.