Intel has announced infrastructure processing units aimed at cloud providers that give customers full control of the CPUs they lease.

There was a time when Intel was all-x86, all the time, everywhere. Not anymore. Last week Intel held its annual Architecture Day with previews of multiple major upcoming architectures beyond x86. For once, it's not hyperbole when the company says these are some of the "biggest shifts in a generation." And it's not just new architectures or more and faster cores; it's new designs, whole new ways of doing things.

Instead of simply packing more cores onto a smaller die, Intel is switching to a hybrid architecture that adds low-power cores, similar to what some ARM chip makers have been doing for years on mobile devices. Intel's announcements covered both client and server, but we'll stick with the server news here.

Sapphire Rapids is the codename for Intel's next generation of Xeon Scalable processors and the first to feature the company's Performance core microarchitecture, which emphasizes low latency and single-threaded performance. A smarter branch predictor improves the flow of code through the instruction pipeline, and eight decoders enable more parallel instruction processing. A wider back end adds ports for more, and faster, parallel execution. Sapphire Rapids will also offer larger private and shared caches, higher core counts, and support for DDR5 memory, PCI Express Gen 5, the next generation of Optane memory, Compute Express Link (CXL) 1.1, and on-package high-bandwidth memory (HBM).
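The branch-predictor improvement is about keeping that pipeline fed: when the predictor guesses a branch's direction correctly, fetched instructions aren't thrown away. A minimal sketch of a textbook 2-bit saturating-counter predictor (a classic scheme for illustration, not Intel's actual design) shows why predictable branch patterns are cheap and random ones are not:

```python
import random

def predict_accuracy(outcomes):
    """Run a 2-bit saturating-counter predictor over a branch's
    taken/not-taken history and return its hit rate."""
    counter = 2          # states 0-3; >= 2 means "predict taken"
    hits = 0
    for taken in outcomes:
        predicted_taken = counter >= 2
        if predicted_taken == taken:
            hits += 1
        # Update the counter, saturating at 0 and 3.
        counter = min(3, counter + 1) if taken else max(0, counter - 1)
    return hits / len(outcomes)

# A loop-closing branch is almost always taken -> near-perfect prediction.
loop_branch = [True] * 999 + [False]
# A branch on random data defeats the predictor -> coin-flip accuracy.
random.seed(42)
random_branch = [random.random() < 0.5 for _ in range(1000)]

print(predict_accuracy(loop_branch))    # 0.999
print(predict_accuracy(random_branch))  # roughly 0.5
```

Real predictors are far more sophisticated, but the asymmetry is the point: every misprediction stalls the pipeline while the correct path is refetched.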
Sapphire Rapids will add several new technologies not found in previous generations of Xeon Scalable processors: Intel Accelerator Interfacing Architecture (AIA), which improves signaling to accelerators and devices; Intel Advanced Matrix Extensions (AMX), a workload-acceleration engine for the tensor processing used in deep-learning algorithms; and Intel Data Streaming Accelerator (DSA), which offloads common data-movement tasks from the CPU.

Introducing the IPU

Intel also announced a trio of new Infrastructure Processing Units (IPUs), designed around data movement for cloud and communications services. The IPUs combine Intel Xeon-D processor cores, Agilex FPGAs, and Intel Ethernet technologies, and all are meant to reduce network overhead and increase throughput. IPUs are also designed to separate the cloud infrastructure from tenant or guest software, so guests can fully control the CPU with their software while the service provider retains control of the infrastructure and the root of trust.

The first of the three is Oak Springs Canyon, which pairs Intel Xeon-D cores and an Agilex FPGA with dual 100G Ethernet network interfaces. It supports Intel's Open vSwitch technology and can offload network virtualization and storage functions such as NVMe over Fabrics and RoCE v2 to reduce CPU overhead.

Second is the Intel N6000 Acceleration Development Platform, codenamed Arrow Creek, a 100G SmartNIC designed for use with Intel Xeon-based servers. It combines an Intel Agilex FPGA with an Intel Ethernet 800 Series controller for high-performance 100G network acceleration. Arrow Creek is geared toward communications service providers (CoSPs).

Finally, there is a new ASIC IPU, codenamed Mount Evans, a first of its kind from Intel. Intel says it designed Mount Evans in cooperation with a top cloud service provider. Mount Evans is based on Intel's packet-processing engine, instantiated in an ASIC.
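The AMX engine mentioned earlier accelerates exactly this kind of work: multiplying small tiles of low-precision data and accumulating the results at higher precision, the inner loop of deep-learning inference. A rough NumPy sketch of tiled int8 multiply-accumulate conveys the idea (the tile size and matrix shapes here are illustrative assumptions, not Intel's actual tile geometry):

```python
import numpy as np

TILE = 16  # illustrative tile dimension, not the real AMX tile layout

def tiled_int8_matmul(a, b):
    """Multiply int8 matrices tile by tile, accumulating in int32 --
    the style of operation a tile-based matrix engine accelerates."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2 and m % TILE == 0 and k % TILE == 0 and n % TILE == 0
    c = np.zeros((m, n), dtype=np.int32)
    for i in range(0, m, TILE):
        for j in range(0, n, TILE):
            for p in range(0, k, TILE):
                # One "tile multiply-accumulate": int8 x int8 -> int32.
                c[i:i+TILE, j:j+TILE] += (
                    a[i:i+TILE, p:p+TILE].astype(np.int32)
                    @ b[p:p+TILE, j:j+TILE].astype(np.int32)
                )
    return c

rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(64, 64), dtype=np.int8)
b = rng.integers(-128, 128, size=(64, 64), dtype=np.int8)
c = tiled_int8_matmul(a, b)
# Matches a plain int32 matmul computed all at once.
assert np.array_equal(c, a.astype(np.int32) @ b.astype(np.int32))
```

In hardware, each tile multiply-accumulate is a single instruction over dedicated tile registers rather than a Python loop, which is where the speedup over scalar or even vector code comes from.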
The Mount Evans ASIC supports many use cases, such as vSwitch offload, firewalls, and virtual routing, and it emulates NVMe devices at very high IOPS rates by extending the Optane NVMe controller. Mount Evans features up to 16 Arm Neoverse N1 cores with a dedicated compute cache and up to three memory channels, and it can support up to four host Xeon processors with 200Gbps of full-duplex bandwidth between them.

This is only the beginning of the news out of Architecture Day. More will come.