Michael Cooney
Senior Editor

Cisco, HPE, Dell announce support for Nvidia’s pretrained AI workflows

News
Sep 04, 2024 | 6 mins
Data Center, Generative AI, Networking

Enterprise vendors are lining up to bring Nvidia NIM Agent Blueprints to data center and edge deployments.


Cisco, HPE, Dell and others are looking to use Nvidia’s new AI microservice application-development blueprints to help enterprises streamline the deployment of generative AI applications.

Nvidia recently announced its NIM Agent Blueprints, a catalogue of pretrained, customizable AI workflows that are designed to provide a jump-start for developers creating AI applications. NIM Agent Blueprints, some of which are available now, target a number of use cases, including customer service, virtual screening for computer-aided drug discovery, and a multimodal PDF data extraction workflow for retrieval-augmented generation (RAG) that can ingest vast quantities of business data. More applications are expected in the future.


With the RAG workflow, for example, customers can create digital humans, AI agents or customer service chatbots that can quickly become experts on any topic captured within their body of PDF data, Nvidia stated. “Using the workflow, enterprises can combine NeMo Retriever NIM microservices with community or custom models to build high-accuracy, multimodal retrieval pipelines that can be deployed wherever enterprise data resides,” according to a blog post by Justin Boitano, who leads the scale-out enterprise accelerated data center business at NVIDIA.
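As a rough illustration of the kind of pipeline Boitano describes, the sketch below queries locally hosted NIM microservices through their OpenAI-compatible APIs: an embedding microservice to find the most relevant PDF excerpt, and an LLM microservice to answer from it. The endpoints, model names and sample data here are assumptions for illustration only, not details taken from Nvidia's blueprint.

```python
# Minimal RAG-style sketch against locally hosted NIM microservices.
# Assumptions: an embedding NIM at localhost:8001 and an LLM NIM at localhost:8000,
# both exposing OpenAI-compatible APIs; model names and data are placeholders.
import numpy as np
from openai import OpenAI

embedder = OpenAI(base_url="http://localhost:8001/v1", api_key="not-used")
llm = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

chunks = ["PDF excerpt one ...", "PDF excerpt two ..."]  # text extracted from enterprise PDFs
question = "What does the report say about revenue?"

# Embed the document chunks and the question (model name is a placeholder).
doc_vecs = [d.embedding for d in embedder.embeddings.create(
    model="nvidia/nv-embedqa-e5-v5", input=chunks).data]
q_vec = embedder.embeddings.create(
    model="nvidia/nv-embedqa-e5-v5", input=[question]).data[0].embedding

def cosine(a, b):
    a, b = np.array(a), np.array(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Retrieve the chunk closest to the question, then ask the LLM to answer from it.
best = chunks[max(range(len(chunks)), key=lambda i: cosine(q_vec, doc_vecs[i]))]
answer = llm.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",
    messages=[{"role": "user",
               "content": f"Answer from this context:\n{best}\n\nQuestion: {question}"}])
print(answer.choices[0].message.content)
```

In a production deployment the hand-rolled similarity search would typically be replaced by a vector database, but the shape of the flow is the same: retrieve relevant enterprise content, then ground the model's response in it.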

Developers can gain a head start on creating their own applications using NIM Agent Blueprints. The blueprints are designed to be modified and enhanced, and they allow developers to leverage both information retrieval and agent-based workflows capable of performing complex tasks, Boitano stated.

NIM Agent Blueprints can also help developers improve the AI application lifecycle. As users interact with AI applications, new data is generated, and “this data can be used to refine and enhance the models in a continuous learning cycle, creating a data-driven generative AI flywheel for enterprises that can now tie applications that link models with their data,” Boitano stated.

Creating a NIM ecosystem is also part of Nvidia’s plan. In that role, Cisco, Dell Technologies, HPE and Lenovo all said they would offer NIM Agent Blueprints. In addition, Accenture, Deloitte, SoftServe and World Wide Technology are offering NIM Agent Blueprint support.

Cisco sees tie-in between its AI infrastructure and NIM Agent Blueprints

Cisco is among the enterprise tech vendors to announce its support for Nvidia NIM Agent Blueprints.

“By integrating Nvidia NIM Agent Blueprints with Cisco’s AI solutions, enterprises can gain a secure and scalable platform that accelerates their journey to create and implement AI solutions that drive business value by automating processes, enhancing decision-making and enabling the development of innovative products and services, ultimately driving efficiency and profitability,” wrote Jake Katz, vice president of product management for AI infrastructure strategy with the Cisco networking group, in a blog post.

For Cisco, integrating Nvidia NIM Agent Blueprints within its AI suite is a further extension of the two vendors’ recent partnership. In February, the companies said they would offer integrated software and networking hardware that promises to help customers more easily spin up infrastructure to support AI applications.

In terms of specific products, Nvidia’s newest Tensor Core GPUs will be available in Cisco’s current M7 Unified Computing System (UCS) rack and blade servers, including Cisco UCS X-Series and UCS X-Series Direct, to support AI and data-intensive workloads in the data center and at the edge, the companies stated. The integrated package, which will be available in the second quarter, will include Nvidia AI Enterprise software, which features pretrained models and development tools for production-ready AI.

“Jointly validated reference architectures through Cisco Validated Designs (CVDs) make it simple to deploy and manage AI clusters at any scale in a wide array of use cases spanning virtualized and containerized environments, with both converged and hyperconverged options. CVDs for FlexPod and FlashStack for Generative AI Inferencing with Nvidia AI Enterprise will be available this month, with more to follow,” Cisco stated.

HPE Private Cloud AI could benefit from NIM Agent Blueprints

HPE, too, expressed its support for Nvidia’s new blueprints.

“Nvidia NIM Agent Blueprints represent a great opportunity for us to enhance HPE Private Cloud AI, a full-stack, turnkey private cloud for production generative AI,” wrote Sylvia Hooks, HPE’s vice president of edge to cloud integrated marketing, in a blog post.

“In the future, developers will have a way to jump-start creating AI applications that use one or more AI agents. An AI agent is a program that can perform tasks autonomously and interact with its environment to achieve specific goals.”
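The agent concept Hooks describes can be illustrated with a short sketch: the model decides to call a business function, the program executes it, and the result is handed back so the model can compose a final answer. This assumes an LLM NIM at localhost:8000 exposing an OpenAI-compatible, tool-calling API; the model name and the get_order_status function are hypothetical stand-ins, not part of HPE's or Nvidia's offering.

```python
# Minimal tool-calling agent loop sketch (endpoint, model and tool are assumptions).
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

def get_order_status(order_id: str) -> str:
    # Hypothetical business function the agent is allowed to invoke.
    return json.dumps({"order_id": order_id, "status": "shipped"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 1138?"}]
first = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct", messages=messages, tools=tools)

call = first.choices[0].message.tool_calls[0]  # the model chose to call the tool
result = get_order_status(**json.loads(call.function.arguments))

# Feed the tool result back so the model can produce the final reply.
messages += [first.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": result}]
final = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct", messages=messages, tools=tools)
print(final.choices[0].message.content)
```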

In June, HPE upped its relationship with Nvidia by introducing a portfolio of AI products and services jointly developed with Nvidia that it hopes will help enterprises see the productivity benefits of generative AI easily and quickly — ideally by using its products on-premises or in hybrid clouds.

Its new HPE Private Cloud AI offering integrates Nvidia GPUs, networks, and software with HPE’s AI memory, AI computing power and GreenLake cloud to provide enterprises with an energy-efficient, fast and flexible way to sustainably develop and deploy generative AI applications, the companies said.

For example, Nvidia AI Enterprise software accelerates data science pipelines and optimizes the development and deployment of production-ready copilots and other generative AI applications.

Meanwhile, technology services company WWT expects to utilize NIMs in its AI Proving Ground lab, which lets businesses test and validate different AI use cases.

“… you can think about these NIM microservices as AI LEGO bricks — composable building blocks that developers can leverage with NVIDIA AI Enterprise to accelerate the speed at which they can develop and scale GenAI applications, regardless of where that AI workload will run (e.g., in the cloud, in a data center, at the edge, etc.),” wrote WWT’s managing director Tim Brooks in a blog post.

“NIM microservices make it easy to create AI enterprise applications that can be individualized with custom guardrails to meet specific business needs or requirements,” Brooks wrote.

“For large enterprise organizations, NIM microservices offer tools for developing and scaling AI in alignment with their AI vision, goals, maturity and budget.”
