Thoroughly Modern: How IBM i Shops Can Navigate The AI Landscape In 2024
January 15, 2024 Marc Hunter
AI will continue to influence us in 2024 and beyond. Implementing it in a tangible way will require IBM i leaders to navigate diverse problems, capabilities, opinions, and experiments. Doing so will pave the way for evolving our strategies, and possibly for developing an entirely new set of approaches.
Think of it in terms of making pizza: We have many recipes that make good pizza, we know what toppings work together, we know how much seasoning we need. When a revolutionary new ingredient is introduced, we need to reevaluate every recipe going forward – is this an ingredient that would make these recipes better? Is there a new recipe we can create now with this new ingredient on the table? AI is a new ingredient that we need to carefully consider. It is still early days, so predicting what those recipes are will be a bit of fortune telling, but Large Language Models (LLMs) have introduced a capability that we need to consider in our future “pizzas.”
Anticipating the Impact of AI on IBM i Projects
The impact of AI on IBM i projects will vary considerably depending on the nature of the project, but some impacts will be broadly applicable. The first (seemingly obvious) thing to note is that a failure to try LLMs guarantees a failure to unlock any benefits. There is surprising skepticism from technical staff about the value proposition, whether rooted in subconscious fear or in the unsettling variability in the quality of the results. This hesitancy will fade in the coming years as use of LLMs becomes more common, the quality of results continues to improve, and developers gain intuition about which problems are best suited to LLMs. Those who adamantly refuse to engage with AI will be at risk of falling behind.
One key use case, which spans many types of projects, is using an LLM as a personal (and endlessly patient) tutor alongside you as you move into a new technology domain. A frequent headwind faced by IBM i projects, particularly when modernizing, is the learning curve around the adoption of new technologies. If the technology being incorporated is a standard one with a large presence on the internet in general (i.e., in the LLM training data set), then leveraging an LLM as a “personal trainer” can make a huge difference in the speed of learning and adoption.
Relatedly, LLMs have proven quite adept at assisting with debugging, again provided that the technology being used is robustly present in the LLM training data. Pasting a code snippet and an error log into an LLM is a remarkably useful way to sort out thorny issues that historically might have taken a few hours of Googling and/or time-consuming experiments. Aside: Make sure you are aware of data privacy and IP considerations (more on this later).
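For teams that want to script this kind of triage rather than paste into a chat window, the same idea takes only a few lines against a chat-style API. The snippet below is a minimal sketch, assuming the openai Python SDK (v1.x) with an API key in the OPENAI_API_KEY environment variable; the model name, the RPG fragment, and the message ID are illustrative placeholders, not a recommendation.

```python
# Minimal sketch: ask an LLM to triage a code snippet plus its error log.
# Assumes the openai Python package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

code_snippet = """
dcl-s total packed(7:2);
total = price * qty;   // fails at runtime
"""
error_log = "MCH1210: Receiver value too small to hold result."

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model your data policy allows
    messages=[
        {"role": "system",
         "content": "You are a senior IBM i / RPG developer helping debug code."},
        {"role": "user",
         "content": f"Here is the code:\n{code_snippet}\n"
                    f"Here is the error log:\n{error_log}\n"
                    "What is the most likely cause, and how would you fix it?"},
    ],
)

print(response.choices[0].message.content)
```

As noted above, check your data privacy and IP obligations before sending any production code or logs to a hosted model.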
Technology never stands still, and a developer’s success is substantially affected by their learning rate and their ability to pivot quickly. Leveraging LLMs can accelerate learning – lowering the cost of change for all who embrace them.
Cultivating A Proactive Mindset As Developers
It bears repeating: The growth of AI skillsets within the IBM i space will require a willingness to embrace technology. Our community will need to overcome the initial awkwardness and resistance people encounter when evaluating any new technology or tool, and developers who are open to testing and considering the possibilities will tend to unlock the most benefits. It’s not hard to get started – you don’t have to master prompt engineering or other advanced techniques. In my experiments, I find that the better LLMs are quite forgiving. Rather than being nitpicky about the semantics of my input, they are good at inferring what I’m trying to accomplish, deriving meaning even from loosely structured queries. I anticipate that as these technologies continue to evolve, interactions will become more intuitive.
A word of caution, however: Maintain a healthy skepticism of all AI output. While powerful, it can be “confidently inaccurate.” This skepticism shouldn’t be exclusive to AI – you might just as easily get misleading answers from a colleague or an online forum. It’s important to approach AI output the same way you would evaluate information from either of those (or any other) sources.
The Consistency Conundrum
Consistency (also perceived as “reliability”) is one of the central challenges with LLM adoption. The problems most amenable to LLMs are those which can accept a varying level of reliability/consistency. Keep in mind that the variability of results is generally considered a “feature” of an LLM, something that gives it a different value proposition than historical development approaches. In a sense, AI has gotten closer to how humans think and work, and it turns out we don’t do everything the same way every time we do it!
This creates one of the paradoxes of LLM usage: You can very quickly use it as a personal tutor, because your intuition will easily compensate for any “wrong answers” it gives you. But if you want to embed it into a crucial business process, it becomes much more complex.
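One practical knob to be aware of when thinking about variability: most hosted LLM APIs expose a sampling temperature that controls how much the output differs from run to run. The sketch below (again assuming the openai Python SDK; the model name is a placeholder) simply sends the same prompt at two temperatures – turning the temperature down does not make a model reliable, but it is a common first step when an embedded business process needs more repeatable answers.

```python
# Illustrative sketch: the same prompt at two sampling temperatures.
# Lower temperature means less random sampling, so output is more repeatable
# (though still not guaranteed to be identical or correct).
from openai import OpenAI

client = OpenAI()
prompt = "Summarize our order-entry validation rules in two sentences."

for temp in (1.0, 0.0):
    reply = client.chat.completions.create(
        model="gpt-4",          # placeholder model name
        temperature=temp,       # 1.0 is the default; 0.0 is close to deterministic
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"temperature={temp}: {reply.choices[0].message.content}\n")
```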
Having said that, for both the quality of results and the consistency of that quality, we have so far found that GPT-4 still outpaces the other models we have tested. Its proprietary nature, however, makes navigating specific business cases a bit complicated. To what degree, and in what way, can your project be tethered to a particular vendor? Alternatively, there are many open source models, like Llama 2 from Meta Platforms and Mixtral from Mistral AI, which offer different trade-offs in terms of where they can be hosted, their consistency, and their general quality of results. This makes evaluating the overall effectiveness of any specific model very context specific.
If you’re going to work with some of the open source LLMs, be aware that their colossal size – Llama 2, for instance, has a 70 billion parameter variant – adds another layer of complexity. These massive files require enormous amounts of memory, not to mention specialized GPUs, which can make them tricky to work with, especially in the experimentation phase. Finding an LLM that works for you will involve a somewhat complex decision-making process of evaluating what trade-offs (in terms of cost, usability, quality, and manageability) the business is comfortable with.
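To make the memory point concrete, a rough back-of-the-envelope estimate is simply parameter count times bytes per parameter for the weights alone; activations, KV cache, and framework overhead come on top of that. The helper below is purely illustrative:

```python
# Back-of-the-envelope estimate of memory needed just to hold model weights.
# Real deployments also need memory for activations, KV cache, and overhead.
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / (1024 ** 3)

for label, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"Llama 2 70B @ {label}: ~{weight_memory_gb(70, bytes_per_param):.0f} GB")

# Roughly: fp16 ~130 GB, 8-bit ~65 GB, 4-bit ~33 GB of weights,
# which is why quantized builds are popular for local experimentation.
```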
Navigating The Pitfalls And Challenges Of AI
Whichever approach you take, whatever your job function, don’t get trapped in either of these two schools of thought:
- Dismissing the value AI can generate in different capacities: This is a surefire way to get left behind.
- Drinking the Kool-Aid and thinking AI is the panacea for all problems: AI won’t replace your entire development team.
Another pitfall is data privacy, especially when dealing with sensitive IP or personal data. Be sure to understand whether your data is leaving your control, and if so, what data protections apply to it. For instance, currently GPT-3.5 will help itself to your query data, but GPT-4 (the paid version) will not. GitHub Copilot will read chunks of your code but agrees not to use it for training purposes. Every vendor has a different policy, and those policies change over time.
The last but possibly the most important challenge is to make sure your LLM’s message is safe and on-brand, especially if you plan to expose it to your customer base. You don’t want it hallucinating content that is offensive or harmful to your corporate reputation. This is more complex than it sounds because of the inherently stochastic nature of LLMs – your chatbot may respond just fine in your internal testing and then react differently once it is live.
Advice For IBM i Shops Considering The Merits Of AI
Getting started with AI within the IBM i space presents a set of unique challenges. While it is important to engage with LLMs, the underlying reliability issue looms large. For example, if I ask it how to configure an IBM i system setting, it can lie through its teeth in a very convincing fashion, sending me down a rabbit hole trying to locate an imaginary interface. But if I’m asking it about topics well represented in the training data set, like Spring or Java, it gives me very high quality answers.
Despite these challenges, you should determine the value potential for yourself. Explore ways that it can improve productivity, or save time in different job functions, beyond the confines of IBM i-related tasks. The cost to engage with GPT or Llama is minimal to zero right now, so at this point, it is almost a no-brainer.
Embedding LLMs deeper within your organizational tech stack is a different proposition and will require its own set of experiments and considerations. In certain cases, for instance, there may be legal implications to the use of an LLM for a certain output – so you’ll want the right set of corporate stakeholders at the table. Another thing to consider is that despite the clear utility of LLMs, they are still in their ‘fad’ stage, and the market is flooded with miracle AI cures for baldness, weight loss, and whatever else ails you (I’m only half kidding). Due diligence, a tiny grain of skepticism, and a basic understanding of the strengths and weaknesses of LLMs will help guide you.
LLMs are here to stay. How we, as IT leaders, choose to engage with them today will determine how much they transform our operational landscape.
Navigating The Path Forward
Let’s revisit my initial analogy of AI being a new ingredient in the kitchen. The creative brainstorming phase – conjuring up ideas for a unique pizza – is exciting, but then comes the hands-on experimentation phase: discovering the perfect recipe for you. The deeper challenge lies in the concrete act of constructing and rigorously testing premises against the reality of your specific business scenarios and applications.
For organizations, this will be the time to employ a critical decision-making process – to determine what makes sense for the business, what the extent of the investment will be, and what the potential gains or losses are.
I’ll emphasize the need for a thoughtful and measured approach to AI – the path forward needs to align with organizational/business goals. Once you have alignment and a healthy combination of courage and caution, AI stands to unlock many interesting possibilities and novel capabilities within your organization.
Marc Hunter is vice president of product innovation at Fresche Solutions. Marc cut his teeth over 30 years ago developing IBM i modernization tools on an old B10 above a garage. Today, he leads the Fresche innovation team from Sidney, BC, Canada. He’s passionate about software development and problem solving and has taken multiple products from conception to successful launch. As a serial tinkerer, Marc is always exploring new ways to bring value to customers and is constantly seeking to improve development processes. Outside of work, he and his wife manage a three-ring circus – life with five kids!
This content is sponsored by Fresche Solutions.
RELATED STORIES
Thoroughly Modern: Practical Ways IBM i Developers Can Use AI Today
Thoroughly Modern: How X-Analysis Transforms IBM i Challenges Into Solutions
Thoroughly Modern: What’s New In IBM i IT Planning
Thoroughly Modern: Top Things To Stop IBM i Hacks
Thoroughly Modern: Remote Managed Services Fill In For Retiring And Overburdened IT Staff
Thoroughly Modern: Proven Strategies For Innovating IT And IBM i In A Digital Age
Thoroughly Modern: Unlocking the Full Potential Of Your IBM i Applications
Thoroughly Modern: Why Modernizing IBM i Applications Is Important And Where to Start
Thoroughly Modern: What You Need to Know About IBM i Security
Thoroughly Modern: Flexible And Fractional Staffing Models That Deliver
Thoroughly Modern: How To Optimize IT In 2023
Thoroughly Modern: A Swiss Army Knife For IBM i Developers
Thoroughly Modern: Digital Solutions For IBM i And Beyond
Thoroughly Modern: Simplify IBM i Application Management and Extract Key Insights
Thoroughly Modern: Four Ways Staff Augmentation Is Helping IT Get Things Done
Thoroughly Modern: Bring Security, Speed, And Consistency To IT With Automation
Thoroughly Modern: Good Security Is Just As Important As Good Code
Thoroughly Modern: The Real Top 5 Challenges For IBM i Shops Today
Thoroughly Modern: Improving The Digital Experience With APIs
Thoroughly Modern: IBM i Security Is No Longer Set It And Forget It
Thoroughly Modern: Taking Charge of Your Hardware Refresh in 2022
Thoroughly Modern: Building Organizational Resilience in the Digital Age
Thoroughly Modern: Time To Develop Your IBM i HA/DR Plan For 2022
Thoroughly Modern: Infrastructure Challenges And Easing Into The Cloud
Thoroughly Modern: Talking IBM i System Management With Abacus
Thoroughly Modern: Making The Case For Code And Database Transformation
Thoroughly Modern: Making Quick Wins Part Of Your Modernization Strategy
Thoroughly Modern: Augmenting Your Programming Today, Solving Staffing Issues Tomorrow
Thoroughly Modern: Clearing Up Some Cloud And IBM i Computing Myths
Thoroughly Modern: IBM i Web Development Trends To Watch In the Second Half
Thoroughly Modern: Innovative And Realistic Approaches To IBM i Modernization
Thoroughly Modern: Running CA 2E Applications? It’s Time To Modernize The UI
Thoroughly Modern: Understanding Your IBM i Web Application Needs With Application Discovery
Thoroughly Modern: What’s New With PHP On IBM i?
Thoroughly Modern: A Wealth Of Funding Options Makes It Easier To Take On Modernization
Thoroughly Modern: Speed Up Application Development With Automated Testing
Thoroughly Modern: The Smart Approach to Modernization – Know Before You Go!
Thoroughly Modern: Strategic Things to Consider With APIs and IBM i
Thoroughly Modern: Why You Need An IT Strategy And Roadmap
Thoroughly Modern: Top Five Reasons To Go Paperless With IBM i Forms
Thoroughly Modern: Quick Digital Transformation Wins With Web And Mobile IBM i Apps
Thoroughly Modern: Digital Modernization, But Not At Any Cost
Thoroughly Modern: Digital Transformation Is More Important Than Ever
Thoroughly Modern: Giving IBM i Developers A Helping Hand
Thoroughly Modern: Resizing Application Fields Presents Big Challenges
Thoroughly Modern: Taking The Pulse Of IBM i Developers
Thoroughly Modern: More Than Just A Pretty Face
Thoroughly Modern: Driving Your Synon Applications Forward
Thoroughly Modern: What To Pack For The Digital Transformation Journey