Andy Patrizio is a freelance technology writer based in Orange County, California. He's written for a variety of publications, ranging from Tom's Guide to Wired to Dr. Dobb's Journal, and has been on staff at IT publications like InternetNews, PC Week and InformationWeek.
NeuReality claims its AI appliance can significantly cut costs and energy use in AI data centers.
British chip designer Graphcore has been acquired by Japan's SoftBank Group, which retains significant control over Arm Holdings, its earlier semiconductor investment.
The Dell PowerEdge XE9680 ships with Instinct MI300X accelerators, AMD's high-end GPU accelerators that are designed to compete with Nvidia Hopper processors.
As high-density data centers continue to add AI workloads, there’s growing interest in liquid cooling, thanks to its ability to transfer heat more efficiently than air cooling.
AWS says Graviton4 is its most powerful and energy-efficient processor, suited for a broad range of workloads running on Amazon EC2.
Updated infrastructure-as-code management capabilities and expanded SLAs are among the new features from Pure Storage.
‘I'm not sure yet whether I'm going to regret this or not,’ said Nvidia CEO Jensen Huang as he revealed 2026 plans for the company's Rubin GPU platform.
With the new generation of chips, Intel is putting an emphasis on energy efficiency.
Unveiled at Computex 2024, the new AI processing card from AMD will come with much more high-bandwidth memory than its predecessor.
Hyperscalers and chip makers, including AMD, Broadcom, Cisco, Google, HPE, Intel and Microsoft, are partnering to develop a high-speed chip interconnect to rival Nvidia’s NVLink technology.
Ampere Computing's annual update on upcoming products and milestones included an increase in core count for its Arm-based AmpereOne server chips as well as a new working group for jointly developing AI SoCs.
x86 processor shipments finally realigned with typical seasonal trends for client and server processors, according to Mercury Research.
The highly scalable, low-power 400G PCIe Gen 5.0 Ethernet adapters are designed for AI in the data center.
The company adds support for new storage controllers as well as for AWS.
The HyperCool direct-to-chip system from ZutaCore is designed to cool up to 120kW of rack power without requiring a facilities modification.
Federal agencies including the IRS and Pentagon will have access to the Nvidia DGX SuperPOD system through MITRE, a nonprofit organization that operates federally funded R&D centers.
While it might be tempting to blame Nvidia for the shortage of HBM, it’s not alone in driving high-performance computing and demand for the memory HPC requires.
Plans call for building an institute to develop digital twins for semiconductor manufacturing and share resources among chip developers.
The HPE Cray Storage Systems C500 is tuned to avoid I/O bottlenecks and offers a lower entry price than Cray systems designed for top supercomputers.
Lenovo announced AI-centric systems using an all-AMD processor design, along with infrastructure for on-prem and Azure cloud deployments.
NeuCool technology works with existing data center equipment and configuration.
Georgia Tech's dedicated AI supercomputer is a cluster of 20 Nvidia HGX H100s; the DOE's Venado is the first large-scale system with Nvidia Grace CPU superchips deployed in the U.S.
New edge-optimized processors and FPGAs will power AI-enabled devices in vertical industries including retail, industrial and healthcare.
Plus, Google unveils Axion, its custom Arm-based chip for data centers, at Google Cloud Next 2024.
Decades after some predicted its demise, the mainframe is as vital as ever, even in the era of AI.
The magnitude 7.4 earthquake struck fairly close to Taipei, which plays a vital role in the global chip supply chain.
More happened at the Nvidia GTC conference than the Blackwell announcement, including the launch of two new high-speed network platforms.
Stream Data Centers bought 55 homes in a 34-acre subdivision and plans to break ground on a 2-million-square-foot data center campus in late 2024.
The DGX SuperPOD features eight or more DGX GB200 systems and can scale to tens of thousands of Nvidia Superchips.
The next-gen Blackwell architecture will offer a 4x performance boost over the current Hopper lineup, Nvidia claims.
New servers and storage services are targeted at high-performance workloads, which means AI.
Once optional, GPUs are becoming mandatory in servers. Companies are prioritizing investment in highly configured server clusters for AI, research firm Omdia reports.
Supermicro and Lenovo are expanding their AI hardware offerings, Intel is previewing chips designed for 5G and AI workloads, and Dell is embracing telecom.
Almost a decade after it bought the FPGA maker, Intel spins it off as a standalone company with the old brand name.
Researchers disclosed multiple potential vulnerabilities that may impact some AMD processors, including Zen-based product lines across multiple generations.
Cloud service provider Lambda is working to build a GPU cloud for AI workloads.
Slow distribution of federal funds contributes to a delay in the construction of Intel's Ohio manufacturing facility.
‘Volt Typhoon,’ the Chinese state-sponsored hacking group, targeted outdated switches with poor security as part of a wave of attacks against critical infrastructure.
New fabric aims to simplify the management of interconnection services so enterprise customers can more easily monitor and manage utilization and add or change network services as needed.
The National Science Foundation is heading a project to ensure the U.S. continues to lead in AI research.
The deal is aimed at businesses that want to rapidly deploy generative AI applications but don’t have the infrastructure or in-house skills to do it alone.
IT executives are worried about public cloud security and cyberattacks, but in some instances, they’re not doing enough to prevent them.
Policies around remote work could add to the tension as Broadcom begins to integrate VMware.
Startup In Bold Print wants to make it as simple to calculate carbon emissions as it is to file your taxes.
Intel's Gaudi 3 AI accelerator can be liquid- or air-cooled thanks to a partnership with Vertiv.
The companies are promising cloud-based AI services at a more affordable cost than the alternatives.
A rush to build AI capacity using expensive coprocessors is jacking up the prices of servers, says research firm Omdia.
The companies are extending their AI partnership, and one key initiative is a supercomputer that will be integrated with AWS services and used by Nvidia’s own R&D teams.
Layoffs and executive departures are expected after an acquisition, but there's also concern about VMware customer retention.
Low vacancies and the cost of AI have driven up colocation fees by 15%, DatacenterHawk reports.