Kafka, JSON, DevOps: Future-Proof Your IBM i With Secure, High-Performance APIs
October 26, 2022 Daniel Magid
It is an exciting time to be working with an IBM i! The Rochester lab and IBM partners are rapidly pushing out new technology options that let you do anything with the IBM i that you can do on any other platform. The latest in web and mobile user interfaces, the most modern languages, comprehensive security, machine learning, data visualization, the internet of things, APIs – all are available to IBM i users. When you combine all that technology with the unmatched reliability, ease of management, and low cost of ownership of IBM i, you can be confident that your company can rely on IBM i as its core platform for many years to come.
The secret to making it easy to adopt all this technology is building an open connection strategy based on robust, resilient IBM i APIs.
IBM’s support for open source technologies like JavaScript, JSON, REST, OpenSSL, YUM, NPM, RPM, and others makes it possible to connect to any new technology that might arise. Resilient, high-speed API infrastructure like Kafka makes the IBM i the rival of any other platform for API performance and reliability.
The API-Driven Technology Revolution
APIs are the fundamental building blocks that allow IBM i users to quickly take advantage of emerging integration and modernization opportunities. Without requiring extensive changes or rewrites, APIs enable access to your existing RPG/COBOL/DB2 application functions and data from new technologies while providing the ability to reach out from the IBM i to outside applications and web services. They make it possible to combine the latest innovations with the proven RPG and COBOL applications upon which your business depends.
There are several things that make APIs so powerful:
“Loosely Coupled” Connections for Flexibility and Easy Maintenance
An API is an interface that operates on a “contract” basis. The agreement between the API producer and the API consumer is: “if you send me this set of data, I will execute this business function with it and, if required, return this other set of data to you.” Neither the consumer nor the producer is constrained in the changes they make to their applications as long as they continue to support the contract. And since the API is independent of the business logic, the same API can be used by a wide variety of applications.
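To make the idea concrete, here is what such a contract might look like for a hypothetical inventory availability API (the field names and shapes are invented for illustration):

```javascript
// Hypothetical contract for an inventory availability API.
// The consumer's side of the contract: send an item number and a warehouse.
const exampleRequest = { itemNumber: "A1234", warehouse: "01" };

// The producer's side of the contract: return availability in this agreed shape.
const exampleResponse = { itemNumber: "A1234", onHand: 120, reserved: 35, available: 85 };

// As long as both sides honor these shapes, either side can rewrite its
// internals (even replace the backing program entirely) without breaking the other.
console.log(exampleRequest, exampleResponse);
```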
By connecting through an API layer, you eliminate the proliferation of individual application-to-application database connections. Your API server becomes the central switching station for all connections. You don’t need to worry about uncontrolled connections interfering with your ability to do database maintenance. Nor do you need to give direct access to your data tables – the API layer can ensure that users get access only to the data they absolutely need.
Real-Time Access
APIs allow outside applications to access data and functions in real time. Many IBM i users are replacing FTP file transfers, EDI, and other batch-oriented techniques for sharing data because those older methods leave users working with out-of-date data. In addition, file transfers require multiple copies of the same data, which raises the possibility that the copies will get out of sync. Providing real-time access to the data means a single “source of truth” with up-to-date data for all users.
We recently worked with a company that wanted to let its salespeople create quotes and collect signed orders on a mobile tablet while onsite at a customer. They had dozens of salespeople in the field, all booking orders at the same time. The problem they faced was that multiple salespeople could accidentally sell the same inventory to different customers because they lacked real-time access to inventory data. With an API into the inventory system, salespeople can see the status of inventory in real time. When a salesperson books an order, the system immediately updates the status to reserved, and no other salesperson can sell that inventory. In the real world, there are endless examples where this kind of real-time data access is critical to avoiding errors and ensuring efficient business operations.
Data Transformations
For an API to operate, it must be able to share data between the API producer and the API consumer. In an IBM i environment, this can be challenging because most non-IBM i users share data via non-native, loosely typed data formats like JSON, XML, and comma-delimited files. The IBM i requires highly structured, rigidly typed data. The API layer must be able to rapidly extract the required data from complex open source data structures and turn it into data formats IBM i applications can digest. And it must perform those transformations extremely quickly so they do not slow down your business processes. This is where open source technologies like JavaScript can dramatically speed up API performance (see the JSON and JavaScript discussion below).
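As a minimal sketch of what that transformation step can look like in a JavaScript API layer, here is a function that flattens a loosely typed JSON order into the kind of rigidly typed record an RPG program expects. The payload shape, field names, and field lengths are all hypothetical:

```javascript
// Minimal sketch: loosely typed JSON in, rigidly typed record out.
// All field names, lengths, and types below are illustrative only.
function toRpgOrderRecord(jsonBody) {
  const payload = JSON.parse(jsonBody);

  return {
    CUSTNO: String(payload.customer.id).padStart(7, "0"),     // zoned(7,0)
    ITEMNO: String(payload.lines[0].sku).padEnd(15, " "),     // char(15)
    QTYORD: Math.trunc(payload.lines[0].quantity),            // packed(9,0)
    ORDDAT: payload.orderedAt.slice(0, 10).replace(/-/g, ""), // char(8), YYYYMMDD
  };
}

// Example:
// toRpgOrderRecord('{"customer":{"id":1234},"lines":[{"sku":"A1234","quantity":3}],"orderedAt":"2022-10-26T09:15:00Z"}')
// returns { CUSTNO: "0001234", ITEMNO: "A1234          ", QTYORD: 3, ORDDAT: "20221026" }
```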
Security and Authentication
Since setting up APIs means putting up a sign that says “open for business,” it’s critical that IBM i users know exactly who is coming into their systems and what they are allowed to do. The API layer can take advantage of the latest security techniques, like encrypted tokens and OAuth2, to secure your IBM i. (For more information on API security, see my last article.) By using JavaScript in your API layer, you can integrate free, intensely vetted, continuously updated security modules into your APIs to keep your system safe.
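As one sketch of what that can look like, here is token checking in a JavaScript API layer using the widely used open source jsonwebtoken package. The Express-style middleware wiring and the JWT_SECRET environment variable are assumptions for illustration, not a prescription:

```javascript
// Minimal sketch: verify a JSON Web Token before any IBM i code runs.
const jwt = require("jsonwebtoken");

function authenticate(req, res, next) {
  // Expect an "Authorization: Bearer <token>" header.
  const token = (req.headers.authorization || "").replace(/^Bearer /, "");
  try {
    // jwt.verify checks the signature and expiration in one call and
    // returns the decoded claims (who the caller is, what they may do).
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch (err) {
    res.status(401).json({ error: "invalid or expired token" });
  }
}

module.exports = { authenticate };
```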
Beyond authenticating potential users, the API layer can also limit outside developers to just the data they need to perform a specific operation. Too often, outside users are given direct SQL, ODBC, or JDBC access to entire tables. Using APIs, you can control access at a very granular level, with no need to grant access to entire tables or views. For example, we worked with a customer who provided API access to their open orders. Customers accessing the system could see only their own orders, company salespeople could see all the orders for their customers, and company executives could see all orders for all customers. The API layer controlled each user’s access.
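A sketch of how an API layer might enforce that kind of per-role scoping (the role names and filter fields are made up to mirror the example above):

```javascript
// Hypothetical role-based scoping for an "open orders" API.
function orderFilterFor(user) {
  switch (user.role) {
    case "customer":    return { customerId: user.customerId }; // own orders only
    case "salesperson": return { salespersonId: user.id };      // their customers' orders
    case "executive":   return {};                              // all orders
    default:            throw new Error(`no order access for role ${user.role}`);
  }
}

// The data access layer applies this filter to every query, so no caller
// ever receives rows outside the scope of their role.
```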
Resilience And High Performance
As business becomes increasingly dependent on API communications, it becomes critical that those APIs remain up, running and available. Consumers must be able to get their data quickly (no one wants to wait for their web page to load or for responses to their API calls) and producers must be able to handle high volumes of transactions. IBM’s support for Kafka provides a great platform for high speed, highly reliable APIs.
Kafka for High Performance, Highly Reliable APIs
Resilience and high performance are two reasons that Kafka is seeing increasing adoption on the IBM i. Kafka is an open source, event-driven, pub/sub messaging platform that provides loosely coupled connections between a variety of message producers and consumers. If that sentence seems like a whole bunch of confusing jargon to you – read on!
According to the Apache Software Foundation, more than 80 percent of the Fortune 500 use Kafka. It is also in use at a large number of small and medium size businesses. They use it to ensure that their users get the best possible response time when accessing their applications and to handle the rapidly increasing volume of machine-to-machine communications.
So why use Kafka? The answer includes:
- Highly responsive User Experience – millisecond response time even for large numbers of simultaneous requests
- Capacity to handle high volumes of requests – potential to process billions of transactions per day
- Resilience – replicated servers keep your systems running even if an individual broker fails
- Simple maintenance – no need to build and maintain multiple application to application integrations
Let’s look at a common sample use case:
As an example, many customers are using Kafka to integrate their IBM i applications with their e-commerce systems. They want real time sharing of transaction data among a variety of systems.
Let’s say I am selling my products through my e-commerce website. I might want to take several actions when a customer is preparing to place an order (there certainly could be many more than these):
- Check the customer’s credit availability
- Check inventory for product availability
- Reserve the items in the order in my IBM i inventory immediately so the same inventory is not sold more than once.
- Get a shipping quote
- Generate a price quote
The traditional way to do this would be to write a separate direct integration between the e-commerce system and each of these back-end applications.
With Kafka, you avoid creating these direct integrations. The e-commerce system simply publishes each order inquiry to Kafka, and Kafka makes those records available to each subscribing application. This is why Kafka is called an event-driven, “pub/sub” (publisher/subscriber) application. When an event occurs (an order inquiry is submitted), Kafka publishes a message that is accessible to each subscriber. The publisher does not need to know the subscribers, and the subscribers do not need to know the publisher. The API layer and Kafka control access and authentication.
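Here is a minimal sketch of that flow using the open source kafkajs client. The broker addresses, topic name, and message shape are all hypothetical:

```javascript
// Minimal pub/sub sketch with kafkajs; names and addresses are illustrative.
const { Kafka } = require("kafkajs");

const kafka = new Kafka({
  clientId: "ecommerce-integration",
  brokers: ["broker1:9092", "broker2:9092"],
});

// The e-commerce system publishes each order inquiry once...
async function publishOrderInquiry(inquiry) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "order-inquiries",
    messages: [{ key: String(inquiry.orderId), value: JSON.stringify(inquiry) }],
  });
  await producer.disconnect();
}

// ...and each back-end service (credit, inventory, shipping, pricing)
// subscribes independently, without knowing anything about the publisher.
async function subscribe(groupId, handle) {
  const consumer = kafka.consumer({ groupId });
  await consumer.connect();
  await consumer.subscribe({ topic: "order-inquiries", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => handle(JSON.parse(message.value.toString())),
  });
}
```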
That kind of loosely coupled architecture not only eliminates the need to create multiple individual integrations, it also means you can maintain the e-commerce applications and the back-end applications separately without worrying about breaking the connection. In a Kafka environment, the same e-commerce integration becomes a single connection: the e-commerce system publishes once, and every back-end application subscribes independently.
To ensure reliability and high performance, Kafka can automatically replicate incoming messages onto multiple Kafka brokers. This protects you from downtime and from losing data if a Kafka server has a problem: incoming requests are simply routed to a running Kafka server. And since you can have these replicated brokers running on different systems, it also gives you practically unlimited scale in the volume of messages you can handle. There are Kafka users handling billions of messages per day. (LinkedIn, the original developer of Kafka, recently surpassed 7 trillion messages per day.)
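Replication is simply a property of the topic. As a sketch, here is how a replicated topic might be created with the kafkajs admin client (the partition and replication numbers are illustrative):

```javascript
// Sketch: with replicationFactor 3, every message is copied to three
// brokers, so losing one broker costs neither availability nor data.
const { Kafka } = require("kafkajs");
const kafka = new Kafka({ clientId: "admin", brokers: ["broker1:9092"] });

async function createReplicatedTopic() {
  const admin = kafka.admin();
  await admin.connect();
  await admin.createTopics({
    topics: [{ topic: "order-inquiries", numPartitions: 12, replicationFactor: 3 }],
  });
  await admin.disconnect();
}
```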
So, the advantages of using Kafka are:
- It is extremely fast
- It can support many to many application connections without writing multiple integrations
- Kafka connections are easy to maintain
- It protects you from unexpected downtime
- It can handle extremely high volumes of requests
There is still one challenge to using Kafka on the IBM i: how do you get messages, which are typically in open source formats, translated to and from formats the IBM i understands? To avoid losing all the performance and reliability advantages of Kafka, you need something that is scalable and resilient, and that can perform the data transformations at the speed of Kafka.
At Eradani, we’ve solved that problem by encapsulating the transformation code in a high-speed layer within Eradani Connect. This not only ensures extremely high speed transformations, it also avoids placing a huge transformation processing load on your IBM i.
Without Kafka and Eradani Connect, your IT staff has to build and maintain many individual system-to-system integrations. When you make a change, you must update all of those integrations. Programmers have to write the code to secure your APIs, and they have to translate open source message formats like JSON, XML, and comma-delimited files into formats the IBM i understands. With Kafka and Eradani Connect, most of that work is done for you.
JSON And JavaScript
IBM’s support for JSON is another great tool for your API toolbox. Most API communications being developed today are implemented as REST services using JSON. Tools like YAJL and JSON_TABLE make it easy to work with JSON in RPG. But if you really want to supercharge your API performance, consider processing JSON with JavaScript. JSON is JavaScript Object Notation, so there is no better language for working with JSON than JavaScript.
Extracting data from a JSON payload in JavaScript can be literally tens of thousands of times faster than performing the same operation in RPG or Java. It can make the difference between responses measured in seconds and responses measured in milliseconds. And the great news is that, no matter how complex the JSON structure is, you can do that extraction with just one line of JavaScript code.
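For example, with a made-up nested payload, the entire extraction really is a single expression:

```javascript
// A made-up nested JSON payload, as it might arrive in an API request.
const body = '{"order":{"customer":{"id":"C100"},"lines":[{"sku":"A1234","qty":3}]}}';

// One line of JavaScript, no matter how deeply the value is nested.
const sku = JSON.parse(body).order.lines[0].sku; // "A1234"
```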
Using JavaScript also opens up the opportunity to use Software Development Kits (SDKs) from API providers. These powerful modules include much of the code you need to work with APIs from companies like Amazon, Google, Shopify, PayPal, and many others. They can save you from the need to write thousands of lines of specialized API code.
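For instance, here is a sketch of calling one such SDK, the AWS SDK for JavaScript (v3), instead of hand-writing the request signing and response parsing yourself. The region is illustrative, and credentials are assumed to come from the environment:

```javascript
// Sketch: the SDK handles signing, retries, and response parsing for you.
const { S3Client, ListBucketsCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" }); // region is illustrative

async function listBuckets() {
  const { Buckets } = await s3.send(new ListBucketsCommand({}));
  return (Buckets || []).map((bucket) => bucket.Name);
}
```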
Managing API Development With Open Source DevOps
Once you begin developing APIs and integrating applications built with different technologies, it can be useful to manage all of that development using a single set of management tools.
Open source DevOps is another great technology IBM is promoting for IBM i developers to address that challenge. Since APIs connect disparate applications and platforms, API development generally includes writing code in languages like JavaScript, Python, PHP, .NET, and Java, combined with development in RPG and COBOL. With support for using Git with your native IBM i code, you can manage all the application parts with one set of DevOps tools. Your RPG, COBOL, DDS/DDL, CL, and other IBM i native sources can live in the same repository as your open source code. And you can use tools like Jenkins and Azure DevOps to automate your promotions and deployments.
Managing And Monitoring API Performance
Once you have implemented secure, high-speed, resilient APIs, you should monitor them to ensure they are performing satisfactorily. That can be done through an API monitoring dashboard that tracks all API activity.
Future Proofing Your IBM i Platform
By combining your IBM i native applications with the latest API technology like Kafka, JSON, and JavaScript and managing everything with the most popular and powerful DevOps tools, you can be ready to take advantage of the latest innovations and emerging technologies. That environment will make it easier to recruit new developers to ensure that you always have the talent you need to keep your applications moving forward. A comprehensive API strategy can certainly help you “future-proof” your IBM i and ensure that it continues to serve your company well for years to come.
Daniel Magid is founder and chief executive officer at Eradani.
This content is sponsored by Eradani.
RELATED STORIES
Highly Secure API Enablement for IBM i
API Operations Management for Safe, Powerful, and High Performance APIs
Every IBM i Shop Must Have An API Integration Strategy
Modernize Your IBM i Using Other People’s Code
Calling All IBM i Platforms. . .
In The API World, Nobody Knows You Are An IBM i
In Search Of Next Gen IBM i Apps
Modernization Trumps Migration for IBM i and Mainframe, IDC Says