IBM recently launched a new “Lightweight Engine” for its watsonx.ai service. While it’s aimed primarily at enterprise customers, it could serve as an on-ramp to secure, in-house generative AI deployment for smaller businesses looking to scale, or for mid-sized companies in burgeoning industries such as fintech.
The generative AI market is, inarguably, the primary catalyst behind the tech sector’s revenue growth in the first half of 2024. Just 10 years ago, few could have predicted the sheer size and scope of a sector now driven largely by the explosive popularity of large language model services such as OpenAI’s ChatGPT and Anthropic’s Claude.
Generative AI in financial services
Prior to the launch of ChatGPT, experts in the AI and finance communities widely noted that large language models such as GPT-3 simply weren’t reliable or accurate enough for use in finance, or in any other field where there’s no margin for error.
Despite advances in the field since ChatGPT’s debut, the same caveat holds: AI models trained for general use, on public data, are only as predictable as the information they’re trained on. For a generative AI model to be more than just a chatbot that can handle some coding tasks, it needs to be specialized.
JPMorgan Chase, for example, recently purchased enterprise access to OpenAI’s ChatGPT for some 60,000 employees, a deal that includes fine-tuning on internal data and bespoke guardrails. It’s clear that even the financial services industry is leaping aboard the generative AI train.
Beyond chatbots
While many popular public-facing AI services, such as ChatGPT, offer enterprise-level options, they tend to be entirely cloud-based. In industries such as fintech and financial services, where regulatory and fiduciary duties require certain types of data to be insulated from the possibility of external manipulation, cloud-based AI solutions may not meet security requirements.
IBM’s watsonx.ai supports both cloud-based and on-premises deployments, and with the addition of the Lightweight Engine, models can be deployed and run on-site with a reduced footprint.
Cointelegraph inquired about the service’s applications, and Savio Rodrigues, IBM’s vice president of ecosystem engineering and developer advocacy, responded:
“As businesses add on-premises, they want the lightest weight platform for the enterprise to deploy and run their generative AI use cases, so they are not wasting CPUs or GPUs. This is where watsonx.ai lightweight engine comes in, enabling ISVs and developers to scale enterprise GenAI solutions while optimizing costs.”
In fintech and other burgeoning industries, such as crypto mining, blockchain and crypto lending, where off-site AI solutions may not meet all of a company’s security needs, the flexibility of a platform capable of both cloud and on-premises deployment could spell the difference between developing and deploying models internally and subscribing to another firm’s service.
However, there are a number of competing offerings that provide similar services, from tech giants such as Microsoft, Google and Amazon to startups focused on building bespoke AI solutions.
While a direct comparison of services is beyond the scope of this article, IBM’s Lightweight Engine appears to live up to its name. Its reduced footprint and increased efficiency come at the price of shedding some features available only in the full-weight version.
Related: Apple used Google’s chips to train its AI — where does that leave Nvidia?