The financial pressure of relying on Nvidia for semiconductors capable of running hyperscale workloads has pushed Google, Amazon, Meta, and Microsoft into the processor market.
Trillium, the sixth iteration of Google’s Tensor Processing Unit (TPU), delivers nearly five times the peak compute performance of its predecessor, TPU v5, along with greater memory bandwidth, Google said.
Enterprises and telecom operators are preparing their networks for the profound innovations to come.
Infrastructure enhancements targeting AI workloads include updates to compute hardware, new Nvidia GPU offerings, and storage optimization.
At the Intel Vision 2024 event, Intel unveiled the Gaudi 3 accelerator along with AI-optimized Ethernet networking tech, including an AI NIC and AI connectivity chiplets.
Other updates to the service include new racks, Apigee integration, and survivability features.
Office environments need to change to foster collaboration, and employers need to close the AI skills gap, Cisco reports in its hybrid work study.
HPE Aruba is using proprietary LLMs to better understand questions posed in its Networking Central platform and generate more accurate, detailed responses.
The facility, with sites in the US and South Korea, will develop chips to support the processing demands of ‘artificial general intelligence,’ which refers to AI that can perform as well as or better than humans.
As part of its extended collaboration with AWS, Google Cloud, Microsoft, IBM, and Oracle, the chip designer will share its new Blackwell GPU platform and foundation models and integrate its software across the hyperscalers’ platforms.