By Maria Korolov, Contributing writer

Red Hat delivers AI-optimized Linux platform

News
Sep 05, 2024 | 5 mins
Cloud Computing, Linux

Red Hat Enterprise Linux AI is aimed at making it easier and cheaper for enterprises to deploy generative AI technologies with open-source toolsets and models.

[Image: A system administrator in a data center running programming scripts on a tablet. Credit: DC Studio / Shutterstock]

Open-source powerhouse Red Hat jumped into the generative AI space three months ago, announcing a new AI-focused vision for its Linux operating system at its annual summit. Today, that vision became a reality with the general availability of Red Hat Enterprise Linux AI.

RHEL AI is Red Hat’s answer to two problems enterprises face with generative AI: the difficulty of building and deploying models across hybrid clouds, and the high costs of training and inference.

“Enterprises are seeing sticker shock,” says Andy Thurai, vice president and principal analyst at Constellation Research. Not only is it expensive to train or fine-tune a model, but the cost of actually deploying and using the model in production – the inference costs – can rack up quickly, making the training costs seem trivial in comparison. “Companies are looking for cheaper ways to do it,” Thurai says.

Tushar Katarki, Red Hat’s senior director of product management for the hybrid platforms business unit, says that Red Hat is not making its pricing public, other than to say that RHEL AI is available at “an attractive price.”

“But with that being said,” he adds, “RHEL AI can deliver up to 50% lower costs for similar or even slightly better performance.”

Closed-source AI platforms like OpenAI’s ChatGPT and Anthropic’s Claude are delivered only as SaaS, he says, while RHEL AI, which supports multiple open-source models including IBM’s Granite, can run on different clouds and on a wide variety of OEM servers.

At launch, RHEL AI includes support for the Granite 7-billion-parameter English language model. Another Granite model, the 8-billion-parameter coding model, is in preview and will be generally available at the end of this year or beginning of 2025.

RHEL AI also comes with InstructLab, an open-source project that helps enterprises fine-tune and customize these Granite models, or other open-source AI models.
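In practice, that customization runs through InstructLab’s command-line tool. The sketch below, which assumes the open-source `ilab` CLI is installed (for example, via `pip install instructlab`), drives a typical fine-tuning pass from Python; exact subcommand names can vary between InstructLab releases.

```python
# A minimal sketch of the InstructLab fine-tuning workflow, assuming the
# open-source `ilab` CLI is installed (e.g., `pip install instructlab`).
# Subcommand names may differ between InstructLab releases.
import subprocess

def ilab(*args: str) -> None:
    """Run an ilab subcommand and fail loudly if it errors."""
    subprocess.run(["ilab", *args], check=True)

ilab("config", "init")     # create a default config and local taxonomy
ilab("model", "download")  # fetch a base Granite model
ilab("data", "generate")   # synthesize training data from taxonomy examples
ilab("model", "train")     # fine-tune the model on the generated data
ilab("model", "chat")      # open an interactive chat to test the result
```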

Finally, RHEL AI also comes with all the underlying platform infrastructure, Katarki says. That includes immediate support for Nvidia hardware. Support for AMD and Intel hardware is expected to arrive in the next few weeks.

Everything is packaged up as a container image, Katarki adds, so that enterprises can use their existing container management tools to administer it. On top of the software, there’s also support and legal indemnification for both the open-source software and the Granite model.
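Because the whole stack ships as a container image, day-to-day administration can lean on the same tooling teams already use. The sketch below assumes podman is installed; the image reference is an illustrative placeholder, not an official Red Hat registry path.

```python
# A minimal sketch of handling the RHEL AI image with standard container
# tooling, assuming podman is installed. The image reference below is a
# hypothetical placeholder, not an official Red Hat registry path.
import subprocess

IMAGE = "registry.example.com/rhel-ai/rhel-ai:latest"  # illustrative only

# Pull the image exactly as you would any other container image.
subprocess.run(["podman", "pull", IMAGE], check=True)

# Inspect it with the container tooling already in place.
subprocess.run(["podman", "image", "inspect", IMAGE], check=True)
```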

“Think of it as an appliance,” Katarki says. “It’s got everything. The Granite model, InstructLab, all the underlying platform software you need, and the operating system underneath it all.”

RHEL AI helps enterprises get away from the “one model to rule them all” approach to generative AI, which is not only expensive but can lock enterprises into a single vendor. There are now open-source large language models that rival commercial offerings in performance.

“And there are smaller models,” Katarki adds, “which are truly aligned to your specific use cases and your data. They offer much better ROI and much better overall costs compared to large language models in general.”

And it is not only the models: the tools needed to train them are also available from the open-source community. “The open-source ecosystem is really fueling generative AI, just like Linux and open source powered the cloud revolution,” Katarki says.

In addition to letting enterprises run generative AI on their own hardware, RHEL AI also supports a “bring your own subscription” model for public cloud users. At launch, RHEL AI supports AWS and IBM Cloud. “We’ll be following that with Azure and GCP in the fourth quarter,” Katarki says.

RHEL AI also has guardrails and agentic AI on its roadmap. “Guardrails and safety are one of the value-adds of InstructLab and RHEL AI,” he says. “We have it as part of the training itself.”

The platform doesn’t currently support any specific safety frameworks, he says. “But we look forward, in the immediate term, to publishing some collateral and reference architectures.”

RHEL AI also doesn’t come with out-of-the-box support for agentic AI, but, again, Katarki expects reference architectures to be released in the near future.

“The future is agentic workflows,” he says. “The future is agentic systems. This is no different than the microservices approach. We have tooling for people to customize it for different use cases.”

Small language models in particular fit well with the agentic approach, Katarki adds, “as opposed to one large model that does everything. A bunch of smaller models are easier to maintain, easier to train, easier to customize, and more cost effective for inference.”
