Contain Your IBM i Enthusiasm
May 1, 2017 Timothy Prickett Morgan
In the IT business, what is old is often new again. And so it is with the software containers that are taking certain datacenters by storm these days, and with the virtual machines and hypervisors that often host them, which predate containers as a volume product on X86 servers by a decade.
Let’s have some fun with history.
Virtual machines were invented for IBM mainframes in the VM operating system way back in the dawn of time, which is to say 1972, with the launch of Virtual Machine Facility/370. VM ran a lightweight operating system called the Conversational Monitor System (CMS), and as it matured it could do all kinds of crazy things, like running full-on mainframe operating systems such as MVS or VSE, and it could even nest VM instances inside of VM instances, acting like hypervisors running on hypervisors. IBM also cloned a logical partitioning technology, the Multiple Domain Facility (MDF) that Amdahl brought to its mainframes in 1984, to create what is called a Type 1, or bare metal, hypervisor, distinct from the hosted hypervisor approach used by VM on the mainframe. IBM’s bare metal hypervisor was called Processor Resource/System Manager, or PR/SM, and it basically ripped most of the guts out of the underlying CP virtual machine layer inside VM.
The initial hosted, or Type 2, brand of virtual machine hosting came to the OS/400 platform with V4R4 in 1999, about a year before anyone was talking about virtualization on Unix platforms, including AIX. The initial OS/400 logical partitions ran OS/400 guest partitions on an OS/400 host, much as IBM had done with VM on the mainframe a quarter century earlier. It always requires fewer resources to run a bare metal hypervisor than a hosted one, and a bare metal hypervisor is also easier to secure because the “surface area” of the software is much smaller and easier to ruggedize and keep that way. IBM eventually trimmed down OS/400 to create the Virtualization Engine bare metal hypervisor, then added support for Linux and then AIX, then changed the name to Advanced Power Virtualization, and then to PowerVM. It is unclear how much OS/400 and AIX code is in the underlying bare metal hypervisor these days, but we can tell you for sure that the Virtual I/O Server that is used to virtualize peripheral access on IBM i, AIX, and Linux is based on AIX.
Both MVS on the IBM mainframe and CPF, SSP, and OS/400 on the System/38, the System/36, and the AS/400 in the IBM midrange had neat little features called subsystems, which were in effect sandboxes for programs or collections of programs that could be pinned to specific processor, memory, and I/O resources. The combination of VMs, logical partitions, and subsystems allowed for a kind of Russian doll granularity for workloads, letting a single machine run many applications side by side while providing a reasonably predictable level of performance for all of them. These IBM machines had subsystems for a long time, while VMs and LPARs offered a coarser isolation, one that was often needed to mix operating systems for different customers, time zones, or languages.
The ideas behind subsystems were co-opted and revitalized in the Unix and then Linux markets with various kinds of software container efforts, starting with FreeBSD Unix jails and then moving on to Solaris containers, HP-UX Secure Resource Partitions, and AIX Workload Partitions (WPARs). In these cases, rather than having a hypervisor abstract the underlying hardware, a single operating system kernel and a single file system are used to create virtual sandboxes that look and feel like complete operating system instances as far as applications and users are concerned, but they are not. Each is an illusion of an operating system, much as a hypervisor is an illusion of a hardware system.
The Linux kernel was not as mature as those operating system cores on mainframes, AS/400s, and Unix boxes, so Google, the first and mightiest of the hyperscalers, started work in 2005 to virtualize the Linux kernel and create a native container environment that could provide application isolation and security as well as finer-grained control over access to CPU, memory, and I/O within a system. Building on the ideas of FreeBSD jails and Solaris containers, Google created cgroups and namespaces for the Linux kernel and open sourced these technologies as they were being developed for in-house use; once merged into the kernel, they came to underpin LXC containers. The Docker project took this implementation of containers and married it to a container runtime of its own, inspired in part by Google’s work, spawning the Docker movement, and three years ago Google open sourced a container orchestration system called Kubernetes, inspired by its internal Borg cluster and container controller. A revolution is now underway in the datacenter akin to the wave that swept through it when virtualization started on Unix gear in the late 1990s and then moved to X86 iron (mostly thanks to VMware) in the mid-2000s.
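To make the cgroups and namespaces idea a bit more concrete, here is a minimal sketch in Go of the kernel primitives involved. It is not Docker, and it assumes a generic Linux box with /bin/sh and root privileges; all it does is ask the kernel to start a shell in its own hostname, PID, and mount namespaces, which is the isolation trick that LXC and Docker build upon, with cgroups then capping how much CPU and memory that process tree can consume.

```go
// Linux-only sketch: start a shell in fresh UTS (hostname), PID, and mount
// namespaces, the raw kernel primitives that LXC and Docker build on.
// Assumes a Linux system, root privileges, and that /bin/sh exists.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// Ask the kernel to clone the child into new namespaces. Inside the
	// shell, `echo $$` prints 1 and `hostname foo` no longer touches the
	// host's hostname, yet there is no hypervisor anywhere in the picture.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}

	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

The point of the sketch is that a container is just an ordinary process with some kernel bookkeeping wrapped around it, which is why it is so much lighter than a virtual machine.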
The upshot is that enterprises now have access to myriad implementations of Docker container systems and can be a bit more like Google in using software containers to deploy, update, and retire their applications. It is like marrying virtual infrastructure management with an application build system and a software lifecycle manager.
Both Docker and Kubernetes are programmed in the Go programming language, which was also created by Google, and they mesh well together by design. But Docker requires a Linux substrate, no matter how skinny, to run. Even if you run Linux Docker containers on Windows Server, as Microsoft most certainly wants you to be able to do, you need Linux in there somewhere. Because not everyone wants to run the same commercial Linux or even a full Linux distribution underneath Docker – the point of containers is to minimize the operating system substrate to the bare essentials and only put the elements of the operating system underneath or inside the container that you need to run the applications in the container – Docker has even gone so far as to allow companies to create their own containerized implementations of Linux through a project called LinuxKit, which was announced two weeks ago at DockerCon 2017. At that same container event, IBM announced that it would be supporting the full-on Docker Enterprise stack, which just came out in March, on Power Systems and System z machines (including its LinuxONE machine, which is a mainframe that can only run Linux). The Docker Enterprise suite includes the Compose container maker, the Datacenter management tool, and the Swarm container controller as well as the Docker runtime and a set of security enhancements that lock down containers and their applications from tampering. There is also a freebie Community Edition that doesn’t have all of the features if you want to play around on your own.
You don’t have to go to Docker, the company, to use Docker containers on Power Systems iron. Back in November 2015, IBM was showing off that it could fire up 10,000 containers on a single Power8 server, in this case running atop Canonical’s Ubuntu Server and Red Hat’s Fedora 23 development release of Linux. IBM used the open source GCC Go compiler to build the Docker daemon and runtime. IBM also offers its own support contracts for Docker, under which it does Level 1 and Level 2 support with backing from Docker itself for Level 3, covering the core Docker Engine and the Docker Trusted Registry, a private version of the public Docker Hub container registry. Prices range from $750 to $2,000 per node per year for support.
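For the curious, this is roughly what talking to one of those Docker daemons from Go looks like, whether the daemon is running on X86 iron or on a Linux-on-Power partition. It is a minimal sketch using the Docker Engine Go SDK (github.com/docker/docker/client) as it stood around the time of this article, and it assumes that DOCKER_HOST and its related environment variables point at a reachable daemon.

```go
// Minimal sketch: connect to a Docker daemon (local, or on a remote
// Linux-on-Power partition via DOCKER_HOST) and list its containers.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// Pick up DOCKER_HOST, DOCKER_TLS_VERIFY, and so on from the environment.
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}

	// Ask the daemon for every container, running or stopped.
	containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
	if err != nil {
		log.Fatal(err)
	}

	for _, c := range containers {
		fmt.Printf("%.12s  %-30s %s\n", c.ID, c.Image, c.Status)
	}
}
```

Point DOCKER_HOST at the daemon on a Linux partition on a Power Systems box and the same program works unchanged, because the Engine API is identical regardless of the iron underneath or the compiler that built the daemon.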
Although the subsystems used in proprietary IBM midrange and mainframe gear bear some resemblance to software containers in general and Docker containers specifically, there are some differences, and AIX WPARs are probably a closer analog in terms of the way things are abstracted and used. With AIX WPARs, you have to be on a modern AIX 7 release for the WPAR host, but the WPARs can run AIX 5.2, 5.3, or 7 instances. Docker has a host partition, too, and it can be any Linux that runs the Docker daemon or the broader Docker Engine runtime, which in turn can host a Linux operating system or a subset of it that is keyed specifically to an application or service. AIX WPARs are persistent, which means they are tied specifically to storage; when you turn them off, the data does not go away and you can fire them up again. Docker containers, by contrast, are meant to be ephemeral: unless you explicitly mount persistent storage into them, the data they write goes away when the container is deleted. Each AIX WPAR has a separate OS image, but multiple Docker containers can be spawned from a single Linux image, or from different ones, all sharing the host kernel.
While IBM is supporting Docker Enterprise on Power Systems, it is not obvious how to make use of it in the IBM i environment. The Docker stack can be loaded on bare metal Linux running on the Power Systems LC machines (which are Linux-only machines with their own set of microcode that is distinct from that used on machines that support AIX or IBM i) as well as the standard Power Systems machines that support Linux, AIX, and IBM i atop the PowerVM hypervisor. So IBM i and AIX shops can run containerized applications on those Linux partitions. What is not obvious is how to bring native Docker containers to an IBM i or AIX partition, which would be really useful. IBM could embed something akin to a Linux version of the PASE AIX runtime that could support the Docker daemon. The question then becomes: how on earth would you containerize RPG or COBOL applications? Java, PHP, Node.js, and the other open source programming languages could have their applications run in these quasi-native Docker containers, but RPG and COBOL present an interesting obstacle. IBM could create a clone runtime for RPG and COBOL that looks and smells like the Docker Engine but that runs on a baby IBM i kernel or passes directly through the microcode to the actual IBM i kernel. This way, IBM i and its applications would look and smell like a modern, containerized Linux platform that all the cool kids are using.
IBM has already done a similar kind of encapsulation with PASE, which allows a baby AIX instance to run subsystems like the TCP/IP stack or open source databases like MySQL or the PHP engine as “native” applications within OS/400 and IBM i. PowerVC is similarly a variant of the OpenStack cloud controller that is hooked into the PowerVM hypervisor and that allows IBM i, AIX, and Linux instances to be controlled just as other virtual machines are controlled on Linux clusters. IBM could take the Docker Enterprise stack, or the raw Docker runtime plus the Kubernetes container orchestrator, and make something modern and interesting out of them.
It’s an idea.
Can I use Docker to get an RPGLE (AS/400) environment onto my Mac to use for unit testing? (I’m not joking.)
No. Docker runs atop Linux, and it provides a Linux runtime. IBM could create a baby IBM i kernel and let it be managed in exactly the same way as a Docker container, if it chose, and then that would work provided your Mac supported the PowerPC AS instructions.
Thank you for the great article, Timothy. Do you know if there are any options available to IBM i developers for running containers for their Node microservices?
I suspect that the answer in this case is to run a Linux partition on the IBM i machine and put Docker on it. As far as I know, there is not a PASE-like Docker environment that runs “inside” of IBM i atop the AIX runtime, but that would probably be a good idea. Docker needs a Linux kernel, so you have to do something funky like this. Even on Windows Server, the Docker runtime for Linux containers is not, strictly speaking, native: it runs in a Linux virtual machine that in turn runs Docker, and VMware had to create its own Linux distro, called Photon, to do the same on its ESXi hypervisor.