Generative AI: Coming to an ERP Near You
March 22, 2023 Alex Woodie
Don’t look now, but generative AI technology like ChatGPT is spreading into every aspect of computing, including the large enterprise applications at the heart of business. The technology may not be on IBM i-based ERP suites yet, but it’s just a matter of time.
The sophistication of AI technology has been building steadily for the past 10 years, thanks to the adoption of massive neural networks that can be trained on huge amounts of unstructured data, such as text and images.
Dubbed deep learning, the brute-force approach paid handsome dividends for two computational problems in particular: computer vision and natural language processing (NLP). It took years for the convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to improve, but eventually AI applications were matching and sometimes surpassing humans in areas like interpreting X-rays and speech recognition.
As one branch of the NLP tree, language models have been around for decades. In 2017, researchers at Google introduced the Transformer, a novel neural network architecture designed to model language for tasks such as translation and question answering. The approach yielded a significant jump in the accuracy of generated answers and became the foundation for a new category of models dubbed large language models (LLMs).
Soon, the tech giants were embroiled in a full-on LLM arms race. Powered by larger and more powerful GPUs and ever-bigger training data sets, AI researchers devised bigger and bigger LLMs in a quest to increase the accuracy of the responses and make the AIs ever more human-like. Google, Microsoft, OpenAI, Nvidia, and Facebook have invested billions of dollars to build bigger and better LLMs.
With each successive generation of LLM, the number of parameters has grown, and the capabilities have increased. The most common application of this technology has been conversational AI systems, or chatbots, which companies use to automatically answer questions for customers. Modern conversational AI systems powered by LLMs are much better than the rudimentary systems that many customers have encountered in years past. The answers are generally accurate, and the conversation is less stilted and more natural with the LLM-based systems.
While the public was mostly unaware of the AI arms race taking place in the massive data centers of the Silicon Valley giants, LLMs began turning the heads of industry analysts a few years ago. As the models got bigger and were trained on more data, their capabilities expanded considerably. They were no longer limited to translating between Japanese and German but could also generate JavaScript or SQL. They could create websites, and one is even said to have created its own programming language. At the same time, a similar growth in capabilities was happening with computer vision models. And in some cases, the same model could generate a novel image simply from a text description of what to create.
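To make that code-generation claim concrete, here is a minimal sketch of asking an LLM to turn a plain-English request into SQL. It uses the openai Python library’s chat endpoint; the model name, table schema, and prompt are illustrative assumptions, not any vendor’s recipe.

# Minimal sketch: asking an LLM to translate a plain-English request into SQL.
# The schema and prompt are invented for illustration; assumes the pre-1.0
# openai Python client.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

schema = "ORDERS(order_id, customer_id, order_date, total_amount)"
request = "Total sales by customer for the last 90 days, highest first."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": f"You write SQL for this table: {schema}. Return only the SQL."},
        {"role": "user", "content": request},
    ],
)

print(response["choices"][0]["message"]["content"])  # the generated SQL statement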
Training LLMs on huge amounts of information culled from the Internet enabled them to start displaying real signs of intelligence – artificial, of course, but still intelligence. The march toward a human-like intelligence as envisioned in 1950s science fiction novels, typically referred to as artificial general intelligence (AGI), suddenly wasn’t decades away, but perhaps just a few years away.
In June 2022, Google’s LaMDA LLM is said to have passed the Turing Test, named after famed English mathematician Alan Turing. While not everybody agrees that LaMDA passed the Turing Test – which is passed when a human cannot tell whether they are conversing with a computer or another person – it’s clear that we have entered a new realm in AI, with AGI not out of the question anymore.
The public got its first big taste of these new AI features in late November 2022, when OpenAI released ChatGPT. Though built on a fine-tuned version of the existing GPT-3.5 LLM rather than an all-new model, ChatGPT caught the attention of the world thanks to its uncanny capability to generate mostly accurate responses to just about any question.
The public’s unexpectedly enthusiastic reception of ChatGPT has helped kick the LLM arms race into overdrive. In just the past few weeks, Google has launched Bard, its ChatGPT competitor, and OpenAI has released GPT-4, its much larger successor to GPT-3.5 that can accept both image and text inputs.
Microsoft announced that it is pushing its partner OpenAI’s technology deeper into an array of products, from its Bing search engine to Office 365 and its Dynamics CRM and ERP offerings. With its Copilot offering, Microsoft sees AI taking on a range of tasks previously handled by humans. That includes language-specific tasks, like writing emails and marketing pitches. But it also includes tasks that are more reasoning-oriented, such as the Copilot offering for Microsoft’s Supply Chain application, which will proactively alert users when it spots weather, financial, or geographic factors that could hurt logistics operations.
And earlier this month, Salesforce announced it is integrating OpenAI’s GPT technology into its suite of enterprise offerings, spanning CRM, marketing, analytics, and support products. Einstein GPT, as the new offering is called, will be used as a chatbot for customer service and to write emails and marketing pitches. But Einstein GPT will see action in other areas, like generating articles for the knowledge base, providing customer insights in Slack (a Salesforce property) and even generating code snippets, test cases, and comments with Einstein GPT for Developers.
Clearly, LLMs can already power chatbots and write emails. There are many front-office jobs that could be automated through LLMs, although companies would be wise not to turn these functions over entirely to the bots, and instead to lean on humans to supervise the LLMs at work; one such pattern is sketched below.
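As a rough illustration of that human-in-the-loop advice, the sketch below has the model draft a customer email but routes the draft into a review queue instead of sending it. The prompt and the queue are assumptions for illustration, not any vendor’s actual workflow.

# Sketch of a human-in-the-loop pattern: the LLM drafts, a person approves.
# The review queue is a stand-in for whatever approval workflow a company
# already runs; nothing is sent without human sign-off.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_reply(customer_message: str) -> str:
    """Ask the model for a draft reply; never send it directly."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Draft a polite, concise customer-service reply."},
            {"role": "user", "content": customer_message},
        ],
    )
    return response["choices"][0]["message"]["content"]

review_queue = []  # placeholder for a real approval system

draft = draft_reply("My invoice shows the wrong shipping charge.")
review_queue.append({"draft": draft, "status": "pending_human_review"})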
But what other back-office ERP functions could LLM interfaces like ChatGPT and Google Bard handle? A human sitting behind a monitor with her hands on the keyboard and a mouse is basically a language-understanding and language-generating entity. Words appear on the screen, the worker reads and understands them, and then she does something in response.
While the words are representations of real-world things, they’re still just words, which LLMs can comprehend with something akin to human-level accuracy. The fact that LLMs tend to make things up when generating text, or “hallucinate” facts (as AI researchers call it), is a barrier to wider adoption. But the hallucination rate (currently about 25 percent to 30 percent with ChatGPT) is dropping with each successive generation. There are also ways to counter this problem in production settings, one of which is sketched below.
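One widely used countermeasure is to ground the model in retrieved reference material and instruct it to decline when the answer is not in that material. The sketch below assumes a hypothetical lookup_context function standing in for a search over internal documents or ERP records; it is not any specific product’s method.

# Sketch of retrieval grounding: answer only from supplied context,
# otherwise say "I don't know." The lookup function is hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def lookup_context(question: str) -> str:
    """Placeholder for a search over internal documents or ERP records."""
    return "Policy doc: Returns are accepted within 30 days with a receipt."

def grounded_answer(question: str) -> str:
    context = lookup_context(question)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only the context below. If the answer is "
                        "not in the context, reply exactly: I don't know.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(grounded_answer("What is the return window for purchases?"))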
With chatbots and conversational AI systems, LLMs are already handling many lower-level tasks in the enterprise, freeing human operators to take on more urgent cases (although getting through to a human is still an exercise in extreme patience when dealing with many large corporations [AT&T, we’re looking at you]). With their capability to generate code, we’re seeing LLMs enter the world of programming. They’re also beginning to automatically analyze data, providing another set of “eyes” for tired data and business analysts; a simple version of that idea is sketched below.
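As a hedged sketch of that second set of eyes, the snippet below hands the model a few invented rows of order data and asks it to flag anything unusual. The data and prompt are illustrative assumptions only.

# Sketch: asking the model to scan a small slice of tabular data for anomalies.
# The rows are invented; in practice they would come from the ERP database.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

rows = """order_id,customer,qty,unit_price
10001,Acme,12,4.50
10002,Acme,12,4.50
10003,Acme,1200,4.50
10004,Globex,10,45.00"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a data analyst. Flag any rows that look anomalous and explain why."},
        {"role": "user", "content": rows},
    ],
)

print(response["choices"][0]["message"]["content"])  # the model's anomaly notes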
For enterprise software vendors, the time would appear to be ripe to begin adopting this technology. Tasks with language-heavy workflows will clearly be the first to be automated, but even roles that work heavily with tabular data, such as payroll, would appear to be in AI’s crosshairs, too.
This shift to AI in enterprise systems may seem a bit sudden, but we’ve actually been on this path for some time. You may remember the prediction that Tom Siebel, the founder of the CRM giant Siebel Systems (now part of Oracle) and now the CEO of C3.ai, made in these virtual pages six years ago.
“The next generation of CRM is CRM meets AI,” Siebel told us back in 2017. “It replaces the ERP market, it replaces the supply chain software market, and it replaces manufacturing automation, absolutely 100 percent replacement.”
Siebel may have been onto something.
RELATED STORIES
It Is Time To Have A Group Chat About AI