HPE, Dell launch another round of AI servers

News
Oct 17, 2024 | 3 mins
Data Center, Servers

HPE unveils one server, and Dell launches several compute and storage products.


The AI spin continues, with Hewlett Packard Enterprise and Dell Technologies both introducing new servers oriented toward training large language models (LLMs).

HPE’s new HPE ProLiant Compute XD685 uses AMD’s newly launched 5th-generation Epyc processors and Instinct MI325X accelerators to support natural language processing, LLM, and multimodal AI training. The XD685 has a modular 5U chassis that supports a wide range of GPU and CPU combinations and offers both air cooling and direct liquid cooling. It supports up to eight Instinct MI325X accelerators, each offering 6TB/s of memory bandwidth.

The Instinct accelerators feature HBM3E memory that can be shared across cards, so all of the GPUs appear as one large memory pool. That means fewer cards are needed to reach the same level of performance, which reduces the total cost of ownership.

Another thing the XD685 has going for it is its suite of services, provided by HPE Services, for setting up large AI clusters. These include configuration, validation, and testing assistance to reduce deployment times. Security is provided through HPE Integrated Lights-Out (iLO) technology, which offers production-level security embedded into the silicon.

The HPE ProLiant Compute XD685 is available to order through HPE and will be generally available in the first quarter of 2025. 

Dell expands compute and storage portfolio

Meanwhile, Dell Technologies continues to expand its broad portfolio of generative AI solutions with an array of products under the Dell AI Factory umbrella.

First up is a series of new PowerEdge servers. The PowerEdge XE9712 offers high-performance, dense acceleration for LLM training and real-time inferencing in large-scale AI deployments. It uses Nvidia’s GB200 NVL72 platform, which pairs up to 36 Nvidia Grace CPUs with 72 Nvidia Blackwell GPUs in a rack-scale design. The 72 GPUs are connected in a single NVLink domain, which acts as one large GPU and delivers up to 30x faster real-time inferencing of trillion-parameter LLMs.

The Dell PowerEdge M7725 is designed for high-performance, dense compute, which makes it ideal for research, government, fintech, and higher-education environments, according to Dell. The M7725 scales from roughly 24,000 to 27,000 cores per rack, with 64 or 72 two-socket nodes using 5th-generation AMD Epyc processors. It supports both direct liquid cooling and air cooling.
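Those core counts line up with the densest 5th-generation Epyc parts: assuming the 192-core top-end SKU (Dell doesn’t specify the processor configuration), 64 nodes × 2 sockets × 192 cores works out to 24,576 cores, and 72 nodes × 2 × 192 comes to 27,648.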

In addition to compute, Dell is offering unstructured data storage and management through its PowerScale storage systems, with the goal of improving AI application performance and simplifying global data management.

The new PowerScale systems feature faster metadata performance and data discovery through the Dell Data Lakehouse, while new 61TB drives increase capacity and efficiency and cut the data center storage footprint in half. PowerScale also adds InfiniBand support and 200GbE Ethernet adapters that deliver up to 63% faster throughput.

To mount all this hardware, Dell is introducing the Integrated Rack 7000 (IR7000), which handles accelerated computing demands with greater density, more sustainable power management, and advanced cooling technologies. It’s based on Open Compute Project (OCP) standards.

The IR7000 rack was built natively for liquid cooling and is capable of cooling future deployments of up to 480kW. It captures nearly 100% of the heat generated, according to Dell. It supports both Dell and off-the-shelf networking and ships as an integrated, plug-and-play rack-scale system.
