Nvidia is under the covers of a slew of the world's fastest supercomputers, while Intel and AMD talk future products at the supercomputing conference.

Nearly 70% of the 500 fastest supercomputers in the world, as announced at the SC20 supercomputing conference this week, are powered by Nvidia, including eight of the top 10. Among them is one named Selene that Nvidia built itself and that debuted at No. 5 on the semi-annual TOP500 list of the fastest machines.

Top-end systems require 10,000 or more CPUs and GPUs and are enormously expensive, so government and research institutions own the majority of them. That makes Selene all the more rare: it was built by Nvidia and is housed at the company's Santa Clara, California, headquarters. (It's widely believed there are many supercomputers in private industry that go unreported for competitive reasons.)

Nvidia's Big Showing

Also significant: another Nvidia machine, a DGX SuperPOD, took the top spot on the Green500 list, which ranks the TOP500 systems by energy efficiency. Four of the top five systems on that list use Nvidia's A100 Ampere GPU, while Fujitsu's Fugaku prototype, which uses just Arm processors and no DRAM, fell from first place to sixth. That matters because GPUs have never been known for energy efficiency, but now Nvidia has a new story to tell: performance and energy efficiency in one product.

Nvidia also introduced its Mellanox NDR 400Gbps InfiniBand family of interconnect products, which will be available in Q2 2021. The new lineup includes adapters, data processing units (DPUs, which Nvidia calls smart NICs), switches, and cables. This is not just a doubling of bandwidth per port: Mellanox is also tripling the number of ports in a single device, which in theory allows one switch platform to connect an entire data center. Mellanox says adopters of NDR 400Gbps InfiniBand can see network cost savings of 1.4x and power savings of up to 1.6x in their data centers.

AMD Claws Back

There is good news and bad news for AMD. The number of TOP500 systems using its CPUs nearly doubled, from 11 on the June list to 21 on the current one, with the growth coming from new systems built on second-generation EPYC processors, which pack an insane 64 cores. On the downside, AMD can't get any traction against Nvidia on the GPU side. Just one system in the top 500 used AMD Radeon GPUs; even Intel's Xeon Phi, which has been discontinued, had a better showing with three systems on the list.

But AMD is not giving up. On Monday it revealed its new Instinct MI100 server GPU, calling it the "world's fastest HPC accelerator for scientific research," with more than 10 teraflops (TFLOPS) of double-precision floating-point performance. AMD says the MI100 improves half-precision floating-point performance for AI training workloads by nearly seven times over the company's previous generation of accelerators. The MI100 includes a technology called Matrix Core, part of AMD's new CDNA architecture designed for HPC and machine-learning workloads; future iterations of the architecture will be used in its next-generation Instinct GPUs.
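To put a figure like 10 TFLOPS in rough perspective, here is a back-of-the-envelope sketch (my illustration, not AMD's math): a dense multiply of two N x N matrices costs about 2N^3 floating-point operations. The 40,000 x 40,000 problem size and the assumption of fully sustained peak throughput below are mine.

```cpp
// Back-of-the-envelope only: assumes the quoted ~10 TFLOPS FP64 rate is fully
// sustained, which real workloads never achieve, and a matrix size chosen
// purely for illustration.
#include <cstdio>

int main() {
    const double peak_fp64_flops = 10e12;         // ~10 TFLOPS FP64, per AMD's MI100 claim
    const double n = 40000;                       // hypothetical 40,000 x 40,000 dense matrices
    const double matmul_flops = 2.0 * n * n * n;  // classic 2*N^3 operation count for dense matmul
    const double seconds = matmul_flops / peak_fp64_flops;
    std::printf("~%.0f GFLOP of work, ~%.1f s at theoretical peak\n",
                matmul_flops / 1e9, seconds);
    return 0;
}
```

In practice sustained performance is a fraction of peak, which is why vendors also point to benchmark results such as Linpack rather than raw TFLOPS alone.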
Intel's Latest Try at GPUs

Intel is hoping the third time will be the charm for GPUs. This time around it hired Raja Koduri, who previously ran AMD's Radeon group, as its chief architect, so it certainly has no excuse for technical failure. Its new GPU is called Xe, proving once again that Intel has the worst product-branding department in Silicon Valley.

The biggest news regarding Xe was the introduction of oneAPI Gold, the first productized version of Intel's programming platform for the Xe GPU line. oneAPI Gold plays into Intel's XPU strategy of heterogeneous processing: servers are much more than x86 chips. They also contain GPUs, FPGAs, AI accelerators, and network processors, and Intel has products in every category. The idea is that oneAPI can rule them all, letting developers write one set of highly optimized code and have it run well on any of those processors.

Intel is promoting oneAPI as an open standard, but it is built around Intel's architectures, so I won't hold my breath waiting for AMD or Nvidia to adopt it any time soon. For anyone all-in with Intel, though, it could do what CUDA did for Nvidia.

Xe processors are still in the works, with the high-end version, codenamed Ponte Vecchio, due next year. oneAPI Gold is slated to ship next month.
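oneAPI's core language is Data Parallel C++ (DPC++), built on the Khronos SYCL standard, so a rough sense of what "one set of code on any processor" means in practice looks like the sketch below. This is my minimal illustration, not code from Intel's announcement; the vector-add kernel, sizes, and names are assumptions, and header and API details vary slightly between toolchain versions.

```cpp
// Minimal, illustrative DPC++/SYCL vector add. The same source can be compiled
// once and dispatched to whatever device the runtime selects (CPU, GPU, etc.).
#include <CL/sycl.hpp>   // newer toolchains also accept <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    const size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // default selector: picks an accelerator if present, else the CPU
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {   // Buffers hand the data to the SYCL runtime for the duration of this scope.
        sycl::buffer<float, 1> buf_a(a.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> buf_b(b.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> buf_c(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            auto A = buf_a.get_access<sycl::access::mode::read>(h);
            auto B = buf_b.get_access<sycl::access::mode::read>(h);
            auto C = buf_c.get_access<sycl::access::mode::write>(h);
            // The kernel source is identical no matter which device executes it.
            h.parallel_for<class vector_add>(sycl::range<1>(n),
                [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }   // leaving the scope copies results back into the host vector c

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    return 0;
}
```

That single-source model is the pitch: the optimization effort lives in one codebase while the compiler and runtime handle targeting CPUs, GPUs, FPGAs, and other accelerators.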