Thoroughly Modern: Speed Up Application Development With Automated Testing
October 12, 2020 Timothy Prickett Morgan
Starting with the System/38 and even more so with the AS/400 and its progeny, the hallmark of the system, at least according to IBM, was its ease of use. That is not exactly the right idea. The integrated nature of the platform means it is easy to deploy; the self-managing nature of tech support, database administration, and even system administration yields an ease of management; and the tight coupling of the relational database management system with the high-level compilers has definitely yielded an ease of programming.
It is that last bit that is the hallmark of the system, really. But just because programs are easy to create does not mean the logic of those programs, or their security, is up to snuff. Applications, however good they are, need to be tested as programmers tweak and tune the code. That’s why we sat down with Ray Everhart, senior product manager of X-Analysis at Fresche Solutions, which is adding some new tools to the application development toolbox to automate the testing of applications, which is a big headache for a lot of IBM i shops.
Timothy Prickett Morgan: Every market has its niche and every niche has interesting stuff going on. So what’s happening with programmers in general in the IBM i space and specifically with regard to application testing?
Ray Everhart: For IBM i programmers, the platform lends itself to an amazing amount of productivity. I can crank out a lot of code in a very short amount of time. But it hasn’t always been the most disciplined approach toward development. The code that we see has been out there for quite some time; it was written for something very specific, and may or may not have been improved over time. It may have been modified, but the core structure of the program probably no longer reflects what it was originally meant to do.
As a result, logic gets inserted in different places in the program that wasn’t the optimal approach. If you were designing it today, you would probably do it differently. The challenge is that when the code was originally developed, the developer tested it the way they used it. Very rarely did they test something outside of how they intended it to be used.
There’s a whole discipline around testing that wasn’t utilized as much as it should have been in the development of IBM i applications. Most applications were developed quickly, written by someone who knew the business and could change it at will. Over time, other people might have added to it without being aware of the original intent or the ins and outs of the original code. Testing becomes complicated because these applications have grown to a size where all of the functional aspects are wrapped up in one bit of logic. That means a test plan has to cover all of the different ways someone might use that application, which makes the application complex and difficult to test.
TPM: Knowing people as I do, and how busy they have become this year in particular, I suspect that a lot of companies pretty much have their testing processes documented on a Post-it note attached to their screen. Testing happens by the seat of their pants, both for new development as well as for managing and maintaining existing applications – that’s my guess.
Ray Everhart: Exactly. To give you a little background, I joined Fresche six years ago after working as an independent consultant, doing RPG development and coding. Before that, I worked for an IBM business partner for about ten years. So altogether I’d say I’ve probably seen 300 or 400 different companies, and I have worked for them in different vertical industries, helping them with their IBM i midrange systems. I’ve seen a lot of different testing processes and strategies – and the lack thereof.
Earlier in my career, I worked for a place here in Dallas as a contractor. We were required to test changes and document them for Sarbanes-Oxley. The level of documentation that I saw depended on the developer. One developer had huge stacks of printed documentation, which made it easy to reproduce what he did. Another guy could fit his testing script on a napkin. It can really vary depending on the person or the organization.
TPM: It is my understanding that red tape makes application testing complicated, and you have to expect that given the historical big bang way that people used to do code releases. They were quite literally bet the company events.
Ray Everhart: For some, an hour of code changes could mean three months of testing because they have to go through all of the functional areas of the business before putting a change into production. In that case, having an automated testing process could drastically reduce the time to market. On the flip side, a developer might have a loose testing strategy and the organization might not realize that testing isn’t happening. That developer could make a change and put it live right away without testing.
TPM: What different types of testing are typical IBM i shops dealing with?
Ray Everhart: The approach depends on how disciplined they are with testing. If you don’t have a testing strategy, chances are the developer is accustomed to testing what they think they changed.
If a company has a more disciplined approach, they might use test-driven development as a methodology. In that case, you write the tests before you write the code that makes them pass. You would write small bits of logic (units) that can be tested independently. Unit testing is all about testing one specific aspect at a time. You want to make sure that it happens the same way every time you execute it. You would then put all of the different pieces of functionality together and create an integration test to ensure everything works together.
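To illustrate what a "unit" looks like, here is a minimal sketch in Python (the function and its values are hypothetical, not from any Fresche tooling; on IBM i the same idea would apply to an RPG procedure under a unit testing framework). One small piece of business logic is isolated and exercised the same way every run:

```python
import unittest

def extended_price(quantity, unit_price, discount_pct=0.0):
    """One small, independently testable unit of business logic."""
    if quantity < 0 or not (0.0 <= discount_pct <= 100.0):
        raise ValueError("invalid input")
    return round(quantity * unit_price * (1 - discount_pct / 100.0), 2)

class ExtendedPriceTest(unittest.TestCase):
    def test_no_discount(self):
        self.assertEqual(extended_price(3, 9.99), 29.97)

    def test_full_discount(self):
        self.assertEqual(extended_price(5, 10.0, 100.0), 0.0)

    def test_rejects_negative_quantity(self):
        # Testing outside the "happy path" is exactly what ad hoc
        # developer testing tends to skip.
        with self.assertRaises(ValueError):
            extended_price(-1, 10.0)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExtendedPriceTest)
    unittest.TextTestRunner().run(suite)
```

Because each test pins down one behavior, a refactor that breaks any behavior fails fast, and the integration test only has to prove the units work together.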
User acceptance testing is where an end-user confirms that the application works the way they expect it to. There’s also stress or load testing, if you want to see how the application is going to work with a high volume of transactions.
TPM: Is there a difference in how you might test if you are modernizing your systems versus developing new applications?
Ray Everhart: If I’m modernizing existing code, the testing process depends on the quality of the code and how it was constructed. If I have a monolithic structure with no unit tests available, that’s going to be a different approach than if I have modular code. If it’s already been broken down into small pieces that I can test independently, it’s much easier to modernize or refactor because I can test it before my change, I can test it after my change, and I can very quickly see if my change had an unexpected impact.
Having small tests that you can run repeatedly is key in any kind of refactoring or modernizing. You need to be able to count on the same result and if you don’t have the same result, you need to know that very quickly. That’s not just the program itself: You need to make sure that the environment is exactly the same setup every time you execute the program.
TPM: Can the effort to test a change sometimes seem bigger than the change itself?
Ray Everhart: One of the biggest problems that I see with monolithic code is that the volume of testing is often prohibitive to making the change in the first place. A field resizing effort is a perfect example of that. I know of one company that had a loan amount field that was designed in the 1970s. Real estate prices have risen over the years and eventually, that column could no longer support today’s mortgage values.
The company needed to resize the field from seven positions to 11 and they were able to use our automated resizing tool to make those changes. However, the manual effort to test the changes would have been four times larger than the actual resizing effort unless they automated their testing process.
Automated testing is one way to make a modernization effort possible. You need to be able to set up for your test, run the test, and find out if there are any differences. You’re doing this dozens of times for a program, and unless that activity is repeatable and automated, it’s going to significantly extend the time it takes to complete the project.
If you need to enlarge a column like Loan Amount, the cascading impact to transaction tables, history tables and all of the reports, batch processes and interactive programs can lead to a testing effort that is larger than the coding effort. Our automated resizing tool enables us to achieve 80 to 85 percent automation for the coding effort. By leveraging end-to-end testing tools, the effort put into testing can be reused over and over again.
When modernizing RPG, COBOL, or Synon applications, the code may not be well structured for unit testing. How do you make sure that you can identify what’s different? You don’t want to write a program to test your program, so how do you rapidly iterate through the changes that you make when refactoring? The answer is using an automated testing tool.
You want a test plan that covers the majority of the ways you could use the program. If you’re relying on the user to test your application, they’re going to take what we call the “happy path” through the program. They’re going to do the same thing that they do in their normal day-to-day work, and they’re going to use it the way they normally use the system. That doesn’t guarantee that you’re adequately testing your program. It does give you a sense of security that you’re at least testing what you use most often, but when you bring in someone new, they might do it a little bit differently.
TPM: Is there a rule of thumb in the industry that tells you how much of your environment that you should test?
Ray Everhart: Unless you’re doing test-driven development, the ugly truth is that the only testing you do is what you can fit in at the end of the project. Depending on how much time you’ve budgeted, testing often gets shortchanged. One of the benefits of test-driven development is that you develop with testing in mind right from the start. Most people, by contrast, are trying to achieve the business objective, and then they figure out how to test it afterwards.
TPM: In modernizing a pre-existing application, is there an approach that will save you some time? Because time is not only money, it is also time, and we are all a little short of that these days.
Ray Everhart: For net-new development, I’m either going to do test-driven development or I’m going to develop units of code that can be individually tested. That means that when I write a piece of code, it’s going to be small, it’s going to be compact, and it’s going to be independent. I might use some kind of unit testing framework, but I still have to write a program that verifies that my program does what it’s supposed to do.
All of this work, whether it’s test-driven development or creating these assertions and test programs, takes extra time and has to be accounted for. The benefit is that you pay for it upfront once instead of paying for it every time you make a change. If I’m modernizing, chances are I don’t have code that was written in a modular way. I don’t have code that has units of work that I can test independently, so then the only thing I can test is the result of that application.
What you want is a combination of test methodologies. You want to be able to do unit testing, but until you get to the point where you have units that you can test, you have to rely on a different method of testing. I call it end-state testing: after the job has finished, I look at the contents of the tables to see what the result of running that program was, and I capture that end state. After I reset all my tables back, I can run the program again, but this time the new version, and compare the two end states against each other to look for any differences. You can do that without test-driven development, without modularization, without unit tests. I highly recommend building the ability to detect those differences as a first step in a modernization project. As you begin to break things down into units, you can quickly identify what activity created a difference, and then you can focus on correcting that.
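The end-state idea can be sketched in a few lines of Python. This is not Fresche’s implementation, just the shape of the technique: snapshot the affected tables, run a version of the program, snapshot again, and diff the two states (the table and values here are made up, using SQLite as a stand-in database):

```python
# A minimal sketch of "end-state" testing, assuming the job's output
# lands in SQL tables we can read (table names here are hypothetical).
import sqlite3

def snapshot(conn, tables):
    """Capture the full contents of each table as a sorted list of rows."""
    return {t: sorted(conn.execute(f"SELECT * FROM {t}").fetchall())
            for t in tables}

def diff_states(before, after):
    """Report rows that appear in only one of the two end states."""
    report = {}
    for table in before:
        gone = [r for r in before[table] if r not in after[table]]
        new = [r for r in after[table] if r not in before[table]]
        if gone or new:
            report[table] = {"removed": gone, "added": new}
    return report

# Usage: restore the known starting point, run the old program, snapshot;
# restore again, run the new program, snapshot; then compare.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loan_history (loan_id INTEGER, amount INTEGER)")
conn.execute("INSERT INTO loan_history VALUES (1, 250000)")
state_old = snapshot(conn, ["loan_history"])
conn.execute("INSERT INTO loan_history VALUES (2, 999999)")  # the "change"
state_new = snapshot(conn, ["loan_history"])
print(diff_states(state_old, state_new))
```

The key property, as described above, is that the comparison is automated and repeatable: the same known starting point, the same run, and a mechanical diff, dozens of times per program.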
IBM has a code coverage tool that’s included with the OS and integrates with Fresche’s automated testing solutions. The wonderful thing about code coverage is that you can document how much of the application you’ve tested and what lines of code were changed.
TPM: Can you talk about the tools that companies used typically in the IBM i space and what Fresche is now offering?
Ray Everhart: Yes. The typical application is large and monolithic. Let’s say you have some kind of a batch day-end process that consists of hundreds of programs affecting 50 different tables, and different things happen depending on the time of the month. That’s very hard to test. You need to be able to quickly identify all of the objects that are used, and you can do that with our X-Analysis solution. This tells you what programs get called by a particular program and what tables they use. It helps you very quickly set up that starting point.
From there, every time you run your test, you have the same starting point because you know all of the objects you need and you can put those objects into a save file and restore them right before you run your test.
You can then use X-Datatest to create an automated test process. Now that you’ve identified all of the objects you need using X-Analysis, you can make a copy of them and you can always go back to that known starting point. You tell the test process what program you want to run and then you take the results and compare them against a known set of results to identify any differences. Automation removes the need to remember how to follow a script that someone wrote three years ago. It also takes care of restoring to that known point, running the application and comparing the differences.
The nice thing is that these tools allow you to start where you are today and improve your testing as you modularize. You always want to drive towards modularity and unit testing, but you don’t have to wait until you have all of your code in units before you can introduce automation.
You can also use X-Analysis for test data management. If you don’t have good test data, you’re going to spend a bunch of time debugging, trying to find out what’s wrong with your program, when it could be the data that’s the issue.
Typically, people like to use production data because ultimately your programs are going to be operating over that data, but you don’t want to put sensitive data on an unsecured system. We have test data management features that will anonymize that data. A name still looks like a name and an address still looks like an address, but it’s been randomized in a way that doesn’t confuse the tester.
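A toy sketch of the idea in Python (this is not Fresche’s algorithm, just an illustration of format-preserving, deterministic masking): the fake value still reads like a name, and the same input always maps to the same output, so keys and joins across test tables stay consistent:

```python
# Hypothetical illustration of deterministic, format-preserving
# anonymization; the seed and name are made-up examples.
import hashlib
import random

def anonymize_name(name, seed="site-secret"):
    """Replace a name with a pronounceable fake, deterministically."""
    digest = hashlib.sha256((seed + name.lower()).encode()).hexdigest()
    rng = random.Random(digest)  # seeded: same input -> same fake
    vowels, consonants = "aeiou", "bcdfgklmnprst"
    fake = "".join(
        rng.choice(consonants) + rng.choice(vowels)
        for _ in range(max(2, len(name) // 2))
    )
    return fake.capitalize()

# The same input always produces the same output, so related rows
# in different test tables still line up after anonymization.
print(anonymize_name("Margaret"))
print(anonymize_name("Margaret"))  # identical to the line above
```

Deterministic masking is what keeps the anonymized data from confusing the tester: a customer that appears in ten tables is still recognizably the same customer everywhere.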
TPM: Are there any other areas that a developer might want to focus on?
Ray Everhart: When you talk automated testing, there are two types of testing that you want to think about:
- Batch testing, which is very easily automated because there’s no user intervention or screen interaction.
- Interactive testing or 5250 testing, which presents a different challenge. I’ve worked for a number of different companies where they have entire teams setting up test scripts that record the navigation so they can play that back later and monitor the screen for differences.
In the same way that we can detect differences in a file, we can also detect differences on the screen. It’s important to validate what gets written to the table and what gets shown on the screen. The challenge with user interface testing is knowing which changes break the test. Sadly, I’ve seen a lot of companies invest in a testing solution that is very brittle. They might spend years setting up test scripts, but the first time you make a change to your application, those test scripts no longer work.
For example, if I’m looking at a value on the screen but I don’t know the underlying variable name and I don’t know where it starts and ends, I just know that there’s something different about the screen. Consider a resize project where I go from seven positions to nine positions. Because it’s a right-justified number, the start of the field is going to shift by two positions. My screens will be different, but that’s an allowable difference because I expected it. I don’t need to know about that, but most 5250 testing tools rely on the field position. Some even come down to a pixel level, which means if I have my screen running at a different resolution than when I recorded it, everything will be different. You also don’t want to have to constantly maintain all of those tests. You want to set them up and then get the benefit of that setup.
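The difference between brittle, position-based comparison and field-aware comparison can be sketched like this (the field names and coordinates are hypothetical, and this is a simplification, not how any particular 5250 testing product works):

```python
# A sketch of comparing screens by field name rather than raw screen
# position, so a resized field that shifted is an "allowable" difference.
def diff_screens(before, after, resized_fields=()):
    """Each screen is a dict: field name -> (row, col, value).
    Positional shifts on resized fields are tolerated; value changes
    and shifts anywhere else are reported."""
    problems = []
    for field, (row, col, value) in before.items():
        if field not in after:
            problems.append(f"{field}: missing from new screen")
            continue
        n_row, n_col, n_value = after[field]
        if value != n_value:
            problems.append(f"{field}: value {value!r} -> {n_value!r}")
        if (row, col) != (n_row, n_col) and field not in resized_fields:
            problems.append(f"{field}: moved to ({n_row}, {n_col})")
    return problems

old = {"LOANAMT": (5, 40, "1500000"), "CUSTNAME": (3, 10, "ACME")}
new = {"LOANAMT": (5, 38, "1500000"), "CUSTNAME": (3, 10, "ACME")}
print(diff_screens(old, new, resized_fields={"LOANAMT"}))  # no problems
```

A purely positional or pixel-based recorder would flag the shifted LOANAMT field as a failure; knowing the underlying variable lets the test tolerate the expected shift while still catching real changes.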
We just released our X-Replay solution, which can tolerate changes to the screens without breaking the test. That makes all the difference in your investment in testing because now you don’t need to keep maintaining those scripts. If you introduce a new screen, you can go in and edit it. X-Replay is really exciting because we not only know what’s on the screen, but we know what variable it came from. We can have a much more robust test script that will work even if there have been changes.
TPM: We hear so much about DevOps these days. What role does testing have in that regard and where do Fresche’s products tie in?
Ray Everhart: The goal of DevOps is to shorten the cycle required to deliver high quality software by utilizing tools that automate the steps in the process. X-Analysis utilizes the extensive repository of information about your application to automate project planning, impact analysis, application understanding, test case development, test data management, testing and documentation.
TPM: Are there any words of wisdom and lessons learned from customers that you want to share to wrap this up?
Ray Everhart: You don’t have to wait. You can start testing with the goal to continually improve your processes, but you don’t have to redesign your whole development life cycle to start implementing good testing. Start with what you have now and use the feedback from code coverage to make it better. Automation is also key because you shouldn’t have to rebuild your testing every time.
We’re hosting a webinar on end-to-end testing and will demo the solutions I mentioned today. If any of your readers are thinking about how they might be able to improve their testing processes, I highly recommend they join us. They can register here.
This content is sponsored by Fresche Solutions.
Ray Everhart is senior product manager of X-Analysis at Fresche Solutions. Ray has spent years helping IBM i companies by assessing their RPG, COBOL and CA 2E (Synon) applications and processes to improve business outcomes. As product manager for X-Analysis, he works closely with IBM i customers to understand their business goals and technical needs in order to drive innovation within the product suite.
RELATED STORIES
Thoroughly Modern: The Smart Approach to Modernization – Know Before You Go!
Thoroughly Modern: Strategic Things to Consider With APIs and IBM i
Thoroughly Modern: Why You Need An IT Strategy And Roadmap
Thoroughly Modern: Top Five Reasons To Go Paperless With IBM i Forms
Thoroughly Modern: Quick Digital Transformation Wins With Web And Mobile IBM i Apps
Thoroughly Modern: Digital Modernization, But Not At Any Cost
Thoroughly Modern: Digital Transformation Is More Important Than Ever
Thoroughly Modern: Giving IBM i Developers A Helping Hand
Thoroughly Modern: Resizing Application Fields Presents Big Challenges
Thoroughly Modern: Taking The Pulse Of IBM i Developers
Thoroughly Modern: More Than Just A Pretty Face
Thoroughly Modern: Driving Your Synon Applications Forward
Thoroughly Modern: What To Pack For The Digital Transformation Journey