IBM, like almost all the tech giants these days, is betting big on AI.
At its annual Think conference, the company announced IBM Watsonx, a new platform that offers tools for building AI models and provides access to pre-trained models for generating computer code, text and more.
It’s something of a slap in the face for IBM’s back-office workers, who were recently told that the company will pause hiring for roles it believes could be replaced by AI in the coming years.
But IBM says the launch was motivated by the challenges many companies still experience when implementing AI in the workplace. Thirty percent of business leaders responding to an IBM survey cited trust and transparency issues as barriers preventing them from adopting AI, while 42% cited privacy concerns, specifically around generative AI.
“AI may not replace managers, but managers who use AI will replace managers who don’t,” Rob Thomas, IBM’s chief commercial officer, said at a roundtable with reporters. “It really changes the way people work.”
Watsonx addresses this, IBM says, by giving clients access to the toolset, infrastructure and consulting resources they need to create their own AI models, or to tune and adapt available AI models on their own data. Using Watsonx.ai, which IBM describes in fluent marketing parlance as an “AI developer enterprise studio,” users can also validate and deploy models, as well as monitor models post-deployment, seemingly consolidating their various workflows.
But wait, you might say, don’t rivals like Google, Amazon, and Microsoft already offer this or something similar? The short answer is yes. Amazon’s comparable product is SageMaker Studio, while Google’s is Vertex AI. On the Azure side, there’s the Azure AI Platform.
However, IBM argues that Watsonx is the only AI tooling platform on the market that provides a range of pre-trained and enterprise-developed models alongside “cost-effective infrastructure.”
“You still need a very large organization and team to be able to bring [AI] innovation in a way that companies can consume,” Dario Gil, SVP of IBM, told reporters during the roundtable. “And that’s a key element of the horizontal capability that IBM is bringing to the table.”
That remains to be seen. In any case, IBM offers seven pre-trained models to companies using Watsonx.ai, some of which are open source. It is also partnering with AI startup Hugging Face to include thousands of models, datasets and libraries developed by Hugging Face. (For its part, IBM has committed to contributing open-source AI development software to Hugging Face and to making several of its internal models accessible within Hugging Face’s AI development platform.)
The three that the company highlighted at Think are fm.model.code, which generates code; fm.model.NLP, a collection of large language models; and fm.model.geospatial, a model built on climate and remote sensing data from NASA. (Awkward naming scheme? You bet.)
Similar to code generation models like GitHub’s Copilot, fm.model.code allows the user to issue a command in natural language, then generates the corresponding coding workflow. fm.model.NLP comprises text generation models for specific, industry-relevant domains, such as organic chemistry. And fm.model.geospatial makes predictions to help plan for changes in natural disaster patterns, biodiversity and land use, as well as other geophysical processes.
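IBM hasn’t published the interface for fm.model.code, but the natural-language-to-code flow it describes typically looks something like the sketch below. The endpoint-style payload shape, the `model_id` value, and the generation parameters here are illustrative assumptions, not IBM’s documented API.

```python
# A minimal sketch of how one might wrap a natural-language instruction
# into a generation request for a code-generation model like fm.model.code.
# All field names and the model identifier are hypothetical assumptions.

def build_codegen_request(instruction: str, model_id: str = "fm.model.code") -> dict:
    """Package a plain-English instruction as a text-generation request payload."""
    return {
        "model_id": model_id,  # assumed identifier, for illustration only
        # Prompt framing: state the task, then let the model complete the code.
        "input": f"# Task: {instruction}\n# Code:\n",
        "parameters": {
            "max_new_tokens": 200,  # cap the length of the generated code
            "temperature": 0.2,     # low temperature favors deterministic output
        },
    }

payload = build_codegen_request("Write an Ansible task that installs nginx")
print(payload["input"])
```

The low temperature reflects a common choice for code generation, where reproducible, conservative completions usually matter more than variety.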
These may not sound novel on their face. But IBM claims the models are differentiated by a training dataset that contains “multiple types of business data, including code, time series data, tabular data, and geospatial data and IT event data.” We will have to take its word for it.
“We allow a company to use its own code to adapt [these] models of how they want to run their playbooks and their code,” Arvind Krishna, IBM CEO, said at the roundtable. “It’s for use cases where people want to have their own private instance, either in a public cloud or on their own premises.”
IBM is using the models itself, he says, in its suite of software products and services. For example, fm.model.code powers Watson Code Assistant, IBM’s answer to Copilot, which allows developers to generate code using plain-English prompts across programs, including Red Hat’s Ansible. As for fm.model.NLP, those models have been integrated with AIOps Insights, Watson Assistant and Watson Orchestrate (IBM’s AIOps toolkit, intelligent assistant and workflow automation technology, respectively) to provide greater visibility into performance across IT environments, resolve IT incidents more expediently and improve customer service experiences — or so IBM promises.
Meanwhile, fm.model.geospatial underpins IBM’s EIS Builder Edition, a product that enables organizations to build solutions that address environmental risks.
Along with Watsonx.ai, under the same Watsonx brand umbrella, IBM introduced Watsonx.data, a “fit for purpose” data warehouse designed for both governed data and AI workloads. Watsonx.data allows users to access data through a single entry point while applying query engines, IBM says, plus governance, automation and integrations with an organization’s existing databases and tools.
Complementing Watsonx.ai and Watsonx.data is Watsonx.governance, a set of tools that, in IBM’s rather vague words, provides mechanisms to protect client privacy, detect bias and model drift, and help organizations comply with ethical standards.
New tools and infrastructure
In an announcement related to Watsonx, IBM introduced a new IBM cloud GPU offering optimized for compute-intensive workloads, specifically training and serving AI models.
The company also showcased the IBM Cloud Carbon Calculator, an “AI-informed” dashboard that enables customers to measure, track, manage and help report carbon emissions generated through their use of the cloud. IBM says it was developed in collaboration with Intel, based on technology from IBM’s research division, and can help visualize greenhouse gas emissions across all workloads down to the cloud service level.
Both products, plus the new Watsonx suite, could be said to represent something of a doubling down on AI for IBM. The company recently built an AI-optimized supercomputer, known as Vela, in the cloud. And it has announced collaborations with companies like Moderna and SAP Hana to investigate ways to apply generative AI at scale.
The company expects that AI could add $16 trillion to the global economy by 2030 and that 30% of administrative tasks will be automated in the next five years.
“When I think about classic administrative processes, not just customer service, whether it’s procurement, whether it’s elements of the supply chain [management], whether it’s IT operations elements or cybersecurity elements… we’re seeing AI easily take on 30-50% of that volume of tasks, and can perform them with far greater proficiency than even humans can perform,” Gil said.
Those may be optimistic predictions (or pessimistic, if you have a humanistic bent), but Wall Street has historically rewarded optimism. IBM’s automation solutions, part of the company’s software segment, grew revenue by 9% year over year in the fourth quarter of 2022. Meanwhile, revenue from data and artificial intelligence solutions, which focus more on analytics, customer service and supply chain management, increased by 8%.
But as a piece in Seeking Alpha notes, there are reasons to temper expectations. IBM has a rocky history with AI, having been forced to sell its Watson Health division at a substantial loss after technical problems led to the deterioration of high-profile client partnerships. And the rivalry in the AI space is heating up; IBM faces competition not only from tech giants like Microsoft and Google, but also from startups like Cohere and Anthropic, which have massive capital backing.
Will IBM’s new applications, tools and services make a dent? IBM hopes so. But we will have to wait and see.