Sierra -- the newest supercomputer and the second-fastest in the world -- runs Red Hat Enterprise Linux (RHEL), which isn't surprising given the role Linux plays in supercomputing in general. Credit: Randy Wong/LLNL

On Oct. 26, the National Nuclear Security Administration (NNSA), part of the Department of Energy, unveiled its latest supercomputer. Named Sierra, it is now the second-fastest supercomputer in the world. Sierra runs at 125 petaflops (peak performance) and will primarily be used by the NNSA for modeling and simulations as part of its core mission of ensuring the safety, security, and effectiveness of the U.S. nuclear stockpile. It will serve three separate nuclear security labs: Lawrence Livermore National Laboratory, Sandia National Laboratories, and Los Alamos National Laboratory. And it's running none other than Red Hat Enterprise Linux (RHEL).

Sierra is also NNSA's first large-scale heterogeneous system, meaning it uses both CPUs and GPUs to accomplish its processing tasks. Sierra debuted at No. 3 on the TOP500 list and is expected to be six to 10 times more capable than LLNL's 20-petaflop Sequoia. Each node in the system incorporates both IBM CPUs and Nvidia graphics processing units (GPUs). Designed for modeling and simulations, it is expected to enter production use early in 2019.

"IBM's decades-long partnership with LLNL has allowed us to build Sierra from the ground up with the unique design and architecture needed for applying AI to massive data sets. The tremendous insights researchers are seeing will only accelerate high performance computing for research and business," said John Kelly, senior vice president of Cognitive Solutions and IBM Research.

Why Linux?
While Linux enthusiasts might find it encouraging that Sierra runs RHEL, they may be more excited to learn that Linux now runs on every supercomputer on the TOP500 list mentioned above. If you find this surprising, consider the number of CPUs in use, the fact that Linux is (mostly) free, and the tremendous flexibility and security that derive from being able to access and, as needed, modify the source code. Other operating systems cannot begin to compete.

How big is big?

Sierra occupies 7,000 square feet of floor space with its 240 computing racks and 4,320 nodes. Each of those nodes consists of two IBM Power9 CPUs and four Nvidia V100 GPUs, linked by a Mellanox EDR InfiniBand interconnect.

What's next for supercomputers?

The next goal is to build computers in the "exascale" class, according to Department of Energy Secretary Rick Perry. "In just a few short years, we expect to see exascale systems deployed at Lawrence Livermore, Argonne and Oak Ridge (national laboratories), ensuring our global superiority in this arena for years and decades to come," Perry said. "Starting with Sierra, this new generation of supercomputers will be an absolute game-changer for the world."

Exascale refers to computing systems capable of performing at or in excess of one exaFLOP (a billion billion, or 10^18, calculations per second) and would represent a thousand-fold increase over the petascale systems that went into operation only 10 years ago.

    gigaFLOPS (GFLOPS)   10^9
    teraFLOPS (TFLOPS)   10^12
    petaFLOPS (PFLOPS)   10^15
    exaFLOPS  (EFLOPS)   10^18

Note that each unit represents a thousand-fold increase over the previous one. NNSA hopes to step up to exascale with a system called El Capitan in 2023. In terms of FLOPS, that system should be about 10 times as powerful as Sierra. It is also expected to be another heterogeneous system aimed at machine learning and artificial intelligence.
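The unit table above can be put to work with a little arithmetic. The following is a minimal sketch (not from the article) that encodes the FLOPS prefixes and uses the article's quoted figure of 125 petaflops peak for Sierra to show how far that is from the exascale threshold:

```python
# Sketch: FLOPS unit prefixes from the table above, each step a
# thousand-fold increase over the previous one.
PREFIXES = {
    "gigaFLOPS": 10**9,
    "teraFLOPS": 10**12,
    "petaFLOPS": 10**15,
    "exaFLOPS": 10**18,
}

# Sierra's peak performance as quoted in the article: 125 petaflops.
SIERRA_PEAK_FLOPS = 125 * PREFIXES["petaFLOPS"]

# The exascale threshold: one exaFLOP.
EXASCALE_FLOPS = PREFIXES["exaFLOPS"]

# Each prefix is 1,000x the one before it.
assert PREFIXES["exaFLOPS"] // PREFIXES["petaFLOPS"] == 1000

# How many Sierra-class machines (at peak) would equal one exaFLOP?
print(EXASCALE_FLOPS / SIERRA_PEAK_FLOPS)  # 8.0
```

In other words, it would take the peak output of eight Sierras to reach one exaFLOP, which is consistent with El Capitan, at roughly 10 times Sierra's performance, being an exascale machine.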