Red Hat extends OpenShift for the era of generative AI


Red Hat, the IBM business unit, is expanding its AI capabilities with the new Red Hat OpenShift AI technology.

The OpenShift AI platform was announced today at the Red Hat Summit. For the past decade, OpenShift has been Red Hat’s premier application container offering based on the open source Kubernetes container orchestration platform. OpenShift AI is a version of the platform that (as the name implies) is optimized to help enable AI and machine learning (ML) deployments.

The new platform is an evolution of Red Hat's OpenShift data science platform, with a focus on enabling production deployment of AI models.

“We’ve spent a lot of our time and energy over the last 10 to 20 years building application platforms, and today it’s about uniting data workloads with the same platform we use to produce and run applications,” Red Hat CTO Chris Wright said during a press conference with media and analysts. “The challenges for companies to adopt AI/ML are enormous.”



IBM is already using OpenShift AI

Wright noted that the reality for many companies is that data science experiments often fail, with less than half reaching production.

Red Hat’s goal with OpenShift AI is to provide a collection of tools for all the training, serving and monitoring necessary for AI, in a way that helps more models reach production. It’s an approach and technology that Red Hat has already tested through its parent company, IBM.

Wright commented that the cost and complexity of training large language models (LLMs) is particularly high. When IBM began developing its new watsonx foundation models, which were publicly announced earlier this month, it turned to Red Hat OpenShift.

“Our platform is the platform that IBM uses to build, train and manage their foundation models, just to show you the kind of scale and throughput capabilities that we’ve built into OpenShift AI,” he said.

The challenges of AI/ML implementations and the Red Hat solution

Red Hat is incorporating a number of enhanced capabilities into OpenShift AI. Among them is model performance management: Wright said OpenShift AI will continue to improve data scientists’ ability to monitor and manage the performance of models deployed in production. Part of model performance is also watching for possible model drift and making sure that a model remains accurate over time.
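The article doesn’t detail how OpenShift AI measures drift, but the idea can be illustrated with a population stability index (PSI), a common score that compares a feature’s distribution at training time against production traffic. The function, thresholds and data below are a minimal hypothetical sketch, not OpenShift AI’s API:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index: a simple drift score comparing
    the training-time distribution of a feature with production data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Normalize to fractions and clip so empty bins don't blow up the log.
    e_frac = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_frac = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)   # feature as seen during training
same = rng.normal(0.0, 1.0, 10_000)    # production traffic, no drift
drift = rng.normal(0.8, 1.0, 10_000)   # production traffic, shifted mean

print(psi(train, same))    # small score: distribution is stable
print(psi(train, drift))   # large score (> 0.2): investigate drift
```

A monitoring job could run this score on a schedule and alert when it crosses a threshold, which is the kind of "watching for drift" workflow Wright describes.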

Deployment pipelines for AI/ML workloads are also critical. To that end, Red Hat OpenShift AI enables organizations to create repeatable approaches to model building and deployment. There is also an effort to integrate custom runtimes for building AI/ML models.

“One of the things we’ve found is that data science teams spend a disproportionate amount of their time just putting their tools together,” Wright said. “Of course, we can produce a set of tools, but it may not be the exact set of tools a company is looking for, so they may need to customize the runtime environment.”
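As a rough illustration of what “repeatable” means here, a pipeline can be expressed as a fixed, ordered list of steps that every run executes identically, including a quality gate before deployment. The step names, values and gate below are hypothetical, not OpenShift AI’s pipeline API:

```python
# A minimal, hypothetical sketch of a repeatable train-and-deploy
# pipeline -- invented for illustration, not Red Hat's implementation.
def ingest(state):
    return {**state, "rows": 1_000}

def train(state):
    return {**state, "model": "v1"}

def evaluate(state):
    return {**state, "auc": 0.91}

def deploy(state):
    # Quality gate: refuse to ship a model below the accuracy bar.
    assert state["auc"] >= 0.9, "gate failed: model too weak to deploy"
    return {**state, "deployed": True}

def run_pipeline(config, steps=(ingest, train, evaluate, deploy)):
    state = dict(config)
    for step in steps:        # every run executes the same ordered steps,
        state = step(state)   # so results are repeatable and auditable
    return state

result = run_pipeline({"dataset": "transactions"})
print(result)
```

Because the step list is data, swapping in a custom runtime or an extra validation step means changing one tuple rather than rewriting the workflow, which is the kind of customization Wright alludes to.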

Also needed to bring AI/ML workloads to production is the ability to integrate AI quality metrics. Wright noted that many data science experiments fail because they are not aligned with business outcomes.

When that happens, “it’s hard to measure your success,” Wright said. “So making sure that we can build metrics into that whole process, I think is really critical.”
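One way to build a business-aligned metric into the process is to score a model against a cost/benefit matrix rather than raw accuracy, so success is measured in the outcome the business cares about. The fraud-screening scenario and dollar figures below are invented purely for illustration:

```python
import numpy as np

# Hypothetical fraud-screening model: the business metric is dollars
# saved, not raw accuracy, so evaluate decisions with a cost matrix.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 0])  # 1 = actual fraud
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 0])  # model's decisions

BENEFIT_TP = 500.0   # fraud caught: loss avoided
COST_FP = -25.0      # good customer flagged: manual review cost
COST_FN = -500.0     # fraud missed: full loss incurred

value = (
    BENEFIT_TP * np.sum((y_pred == 1) & (y_true == 1))
    + COST_FP * np.sum((y_pred == 1) & (y_true == 0))
    + COST_FN * np.sum((y_pred == 0) & (y_true == 1))
)
accuracy = float(np.mean(y_pred == y_true))
print(f"accuracy={accuracy:.2f}, business value=${value:,.0f}")
```

Two models with the same accuracy can produce very different business value under such a matrix, which is why tracking only accuracy makes success "hard to measure" in the sense Wright describes.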


