A Million Miles Away From Machine Learning
May 24, 2021 Timothy Prickett Morgan
The spring COMMON NAViGATE conference has not yet started, the IBM Think 2021 conference has just ended and so has Google I/O 2021, and only a month ago we participated in Nvidia’s GPU Technology Conference 2021. A whole lotta things are rattling around in our brains, and we are still thinking about some of the things people have been saying about artificial intelligence. Its current instantiation, based on machine learning techniques, seems to work quite well, but no one can really explain why it works, the way we can explain a COBOL or RPG program, which is absolutely deterministic.
This is what happens when you let software write software – algorithms, really, for categorizing and transforming media formats that describe reality. This is just the first phase of AI, the one that IBM and its peers are still talking about. But there is a next phase, an evolutionary jump that is coming, we think, and it will be an interesting one that will affect all IBM i shops and all of us on planet Earth as we live our lives. More than the relational database or the spreadsheet or the word processor or the Web browser or the iPhone ever did. That’s on our mind, so that is what we are going to talk to you about today.
Let’s start with the way IBM and others are thinking about AI in the more immediate future. Big Blue’s Institute for Business Value, a kind of internal, quasi-independent think tank that does what IBM founder Thomas Watson admonished all of the company’s employees to do a century ago – THINK – put out a report called The Business Value of AI, which tries to quantify what is happening in the enterprise IT sector with regard to machine learning and other statistical analysis and action. You can read the report, which is interesting, but it can be summed up in two charts, and that will save us on the order of 2,000 words. <Wink>
Here is the first chart:
We have said many times in this publication that recessions do not cause technology transitions, but rather accelerate them. And as we can see from the chart above, the coronavirus pandemic that hit more than a year ago changed some of the top priorities among enterprise IT shops and their upper management, with mobile, AI, and cloud jumping right up to the top. All of these are part of a much broader and deeper digital transformation that we have seen at enterprises of all sizes and stripes. Just like the Y2K crisis and the dot-com boom were good excuses to do a lot of things and get them done quickly, the pandemic has given customers a chance to shift priorities and not take any heat for it.
If this first chart is the AI road, then the second one is where the budget rubber meets the AI road. Take a look:
We don’t know what the return on investment is for AI projects, but when you see pilot projects in AI driving a 4.3 percent average annual revenue increase and companies operating and optimizing their AI projects seeing 10 percent or higher revenue growth, something real is happening. This is not cost reduction, as is often the argument for automation, but revenue increase. And by the way, we think there is a similar, companion chart that IBM did not draw – and will not draw – that shows how many people can be augmented by AI so the other people can be let go. IBM made the mistake of talking about how many back office people could be replaced by the System/360 in 1964, shut up real quick about cost cutting and replacing humans with machines, and hasn’t really talked that way since because of the backlash. Imagine how cancel culture would react to such statements today.
Here is another interesting AI study you should read, called the Global AI Adoption Index 2021, which IBM commissioned and which was done by Morning Consult on behalf of Big Blue. This one gives you a baseline of where enterprises are when it comes to AI adoption within their businesses. About a third of the IT executives polled said their companies had not really explored AI, but the remainder are either exploring AI or have already deployed it and are ramping it up around their companies.
This is where we are at now, a moment like the transformation from basic MRP systems to ERP systems back in the 1980s and 1990s. We are making a similar transformation in IT now, replacing the insight that used to come from people with insight generated by machine learning models that presumably are better and less costly than people over the long haul. Which is why we are not still doing accounting ledgers in pencil on greenbar paper with plastic visors on our heads.
I am impatient about the future sometimes, and want to jump ahead to an end state five or ten years from now. Something that Jensen Huang, co-founder and chief executive officer of Nvidia, talked about and showed in his GTC 2021 keynote keeps replaying in my eyes and ears. It was about the factory of the future, and it doesn’t matter whose factory it was; the concept is profound and it will affect everything. This was an auto manufacturing plant, and the operator of the factory was talking about the difficulty of managing a factory with many products, which require different parts and processes, and a non-regular flow of orders. This causes constant reconfiguration of the factory, something we did not fully appreciate. Such reconfiguration can take days or weeks, and it is complex and sometimes even a little dangerous.
And so Nvidia is helping to create a digital twin of this factory, and then using AI techniques to optimize the factory reconfiguration flow to best match the pipeline of car orders and capacity planning (which is done by people, we presume, but probably not for long). I sat there and watched this and then mapped this onto the entire world, and then I realized that you could take all of the telemetry of the world, merge it with the digital twins created for every building and home, every road and highway, and then map it back onto actual reality with a smartphone or smart glasses or VR helmets or whatever.
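To make the concept a little more concrete, here is a toy sketch, entirely our own invention and not anything Nvidia actually demonstrated, of what letting a twin score factory reconfigurations against an order pipeline might look like. The “twin” here is just a scoring function and the optimizer is a brute-force search; the real thing is a physics-accurate simulation driven by far fancier optimization, but the division of labor is the same:

```python
import itertools
import random

# Purely illustrative: the "digital twin" here is just a function that
# scores a candidate production line setup against an order pipeline.
# All process names and numbers are invented for the example.

def twin_score(line_config, orders):
    """Toy twin: fraction of queued orders this line setup can build."""
    return sum(order in line_config for order in orders) / len(orders)

def best_reconfiguration(orders, all_processes, line_slots):
    """Search all line setups and keep the one the twin scores best."""
    candidates = itertools.combinations(all_processes, line_slots)
    return max(candidates, key=lambda config: twin_score(set(config), orders))

processes = ["weld", "paint", "stamp", "trim", "battery", "chrome"]
orders = [random.choice(processes) for _ in range(200)]  # this week's order mix
config = best_reconfiguration(orders, processes, line_slots=4)
print("Reconfigure the line for:", sorted(config),
      f"covering {twin_score(set(config), orders):.0%} of orders")
```

The point is that the optimizer tries configurations against the twin in simulation rather than on the actual factory floor, which is what turns days of complex, slightly dangerous physical reconfiguration into compute time.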
I grew up, like many of you, watching Star Trek, and always wondered how the long range sensors and Spock’s tricorder (a kind of portable sensor array) might actually work. It can’t. It would take too much power to get telemetry that way. But if you built digital twins of things and then updated them with a kind of changelog of new data over time describing them, and then overlaid all kinds of telemetry – local weather, the dimensions of all buildings, the names of people within 100 feet, a map showing how to get places, whatever – into the digital twin and then overlaid that onto actual reality, you could create something like a personal tricorder for everyone that would absolutely blow away all of the things we know about the world around us – and what the world knows about us – thanks to our smartphones.
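In software terms, that “changelog of new data over time” is what programmers call event sourcing: keep a base model of a thing plus a time-ordered log of observations, and reconstruct its state as of any moment by replaying the log. Here is a minimal sketch of ours, with every field name an assumption for the sake of illustration:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class DigitalTwin:
    """Toy event-sourced twin: a base model plus a replayable changelog."""
    base: dict[str, Any]
    changelog: list[tuple[float, str, Any]] = field(default_factory=list)

    def record(self, timestamp: float, key: str, value: Any) -> None:
        """Append an observation rather than overwriting state."""
        self.changelog.append((timestamp, key, value))

    def state_at(self, as_of: float) -> dict[str, Any]:
        """Rebuild the twin's state at any moment by replaying the log."""
        state = dict(self.base)
        for ts, key, value in sorted(self.changelog, key=lambda e: e[0]):
            if ts > as_of:
                break
            state[key] = value
        return state

# A building's twin, updated with telemetry over time.
building = DigitalTwin(base={"floors": 12, "occupancy": 0})
building.record(1.0, "occupancy", 340)
building.record(2.0, "local_weather", "rain")
print(building.state_at(1.5))  # {'floors': 12, 'occupancy': 340}
print(building.state_at(2.5))  # ...plus 'local_weather': 'rain'
```

Overlaying that onto actual reality then becomes a rendering problem: query the state of everything in view as of right now and draw it on the glasses. The hard parts are the scale of the log and, as we get to below, who gets to read it.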
This is what I have been thinking about, and it is far more interesting than natural language processing, which seems like a parlor trick by comparison. AI is going to optimize a fake world and superimpose it on the real one. This is a much bigger idea than what we were thinking about when we sat down to write this week, which was that IBM has to work to make AI not just integrated with the IBM i platform, but absolutely invisible and effortless. But I am convinced that this digital twin idea is the bigger issue, and that the hyperscalers and cloud builders of the world will do this largely for free for the insight and control that it gives them over the world.
This is about a million miles away from using machine learning to find cat images a decade ago, and about two million miles away from RPG on IBM i. But something like what I am talking about – a set of nested digital twins representing the world and overlaid on it – is going to be overlaid on top of you. And you are going to help the Digital Titans do it, and then you are going to pay to use it, like a data plan and a life plan. How truly strange, and dangerous, and useful, and invasive, and unstoppable. Like all information technologies have been for the past couple of millennia.