Drilling Down Into Db2 Mirror for IBM i
May 6, 2019 Timothy Prickett Morgan
We did our initial coverage of the new Db2 Mirror for the IBM i operating system’s integrated database two weeks ago, and now it is time to dig in a little deeper. Elsewhere in this week’s issue of The Four Hundred, we have gotten feedback from the suppliers of high availability clustering and disaster recovery software for the IBM i platform as to how Db2 Mirror competes with as well as complements their wares. And in this story, we will be digging a little deeper into Db2 Mirror itself.
As we explained, Db2 Mirror creates an active-active database cluster running on databases installed on two unique logical partitions with two unique copies of IBM i 7.4 running atop the PowerVM hypervisor. While those logical partitions could be on the same physical machine, the idea is to use Db2 Mirror to create an active-active database such that two copies of the database tables used by applications are synchronously updated over a 100 Gb/sec Ethernet link that supports Remote Direct Memory Access (RDMA), a low latency protocol that has been used in the supercomputer business for two decades now as well as in the High Speed Link (HSL) interconnects lashing peripherals to IBM i hardware platforms for many years. If you want continuous availability, putting the two mirrored partitions on distinct physical machines means that you can always keep one running, and therefore applications do not have to come down even if machines need to be rebooted after applying patches. You just do a rolling upgrade: upgrade one machine and reboot it, then upgrade the other one once the first machine has had all of its database information resynchronized after the reboot. The active-active setup also allows for rolling back an upgraded or updated system or logical partition in the event something goes awry, again without downtime.
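If it helps to picture what that synchronous update means in practice, here is a minimal conceptual sketch, in plain Java, of a dual write that only reports success once both copies have the change. To be clear, this is not IBM’s code or an actual Db2 Mirror API; the real work happens inside the database engine over the RDMA link, and the Node interface here is purely a hypothetical illustration.

```java
// Conceptual sketch only: Db2 Mirror does this work inside the database engine,
// below the application, and over RDMA rather than plain method calls.
// The Node interface and its methods are hypothetical illustrations.
import java.util.List;

interface Node {
    void apply(String changeRecord) throws Exception;  // stage a row change on one node
    void commit() throws Exception;                     // make the staged change durable
    void rollback();                                     // discard the staged change
}

class SynchronousMirrorWriter {
    private final List<Node> nodes;  // the two sides of the active-active pair

    SynchronousMirrorWriter(List<Node> nodes) {
        this.nodes = nodes;
    }

    // A write only succeeds once every node has applied and committed it;
    // if either side fails, both sides roll back, so the copies never diverge.
    void write(String changeRecord) throws Exception {
        try {
            for (Node node : nodes) {
                node.apply(changeRecord);
            }
            for (Node node : nodes) {
                node.commit();
            }
        } catch (Exception e) {
            nodes.forEach(Node::rollback);
            throw e;
        }
    }
}
```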
The two nodes in the active-active Db2 Mirror cluster write to and read from the same database files housed in those logical partitions; data is not committed until it is copied into both sets of database tables. The server nodes can be at different IBM i operating system levels going forward, but initially both require the new IBM i 7.4 release as a baseline. The hardware can be different, too, either Power8 or Power9 servers, and the two machines can have radically different configurations. Applications can access the database over 5250 protocols using native record-level access against physical and logical files, or they can access set-based data using SQL; JDBC access methods are also allowed.
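On the SQL and JDBC front, here is a minimal sketch of what application access might look like, assuming the IBM Toolbox for Java (jt400) JDBC driver; the host name, credentials, library, and ORDERS table are all placeholders. The point is that the application connects and queries the same way it always has against Db2 for i.

```java
// A minimal JDBC sketch using the IBM Toolbox for Java (jt400) driver, one common
// way to reach Db2 for i over SQL. The host name, credentials, and the MYLIB.ORDERS
// table are placeholders for illustration only.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class Db2ForIQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:as400://myibmi.example.com";  // jt400 JDBC URL format
        try (Connection conn = DriverManager.getConnection(url, "MYUSER", "MYPASSWORD");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT ORDER_ID, ORDER_TOTAL FROM MYLIB.ORDERS WHERE ORDER_TOTAL > ?")) {
            stmt.setBigDecimal(1, new java.math.BigDecimal("1000.00"));
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("ORDER_ID") + " " + rs.getBigDecimal("ORDER_TOTAL"));
                }
            }
        }
    }
}
```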
What was not immediately obvious on announcement day or through the initial briefings was that Db2 Mirror requires that the pair of logical partitions store their data on external storage arrays rather than on the internal storage arrays that are still commonly used by many IBM i shops, particularly those at the lower end of the hardware spectrum. Those external storage arrays have to have copy features, like FlashCopy, to replicate the database files between the two systems at the initial setup. Obviously, if two physical machines on each side of the active-active cluster are sharing one storage array, this is even faster, but it does leave the storage as a possible single point of failure.
Just to clarify: There was some initial confusion (not mine) about the maximum distance that the Db2 Mirror nodes in the cluster can be separated by. I was initially told it was 1 kilometer, but it is actually under 200 meters. The latencies get to be too high for synchronous replication if the machines are farther apart.
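For a sense of why distance matters, here is a rough back-of-envelope calculation, not an IBM figure: signal propagation in optical fiber runs on the order of 5 microseconds per kilometer each way, and every synchronously replicated write has to wait for at least one round trip before it can be acknowledged. The write count per transaction below is a made-up number, and the math ignores adapter, switch, and software overheads, which add quite a bit more in reality.

```java
// Back-of-envelope arithmetic on why distance matters for synchronous replication.
// Propagation in optical fiber is roughly 5 microseconds per kilometer each way,
// so every acknowledged write pays at least one round trip of that delay.
// Illustrative only: ignores switch, adapter, and software overhead.
public class ReplicationLatency {
    static double roundTripMicros(double distanceKm) {
        double perKmOneWayMicros = 5.0;  // ~200,000 km/s in fiber
        return 2 * distanceKm * perKmOneWayMicros;
    }

    public static void main(String[] args) {
        double[] distancesKm = {0.2, 1.0, 10.0};
        long writesPerTransaction = 100;  // hypothetical write-heavy transaction
        for (double km : distancesKm) {
            double rtt = roundTripMicros(km);
            System.out.printf("%.1f km: %.1f us per write, %.1f us added per %d-write transaction%n",
                    km, rtt, rtt * writesPerTransaction, writesPerTransaction);
        }
    }
}
```

At 200 meters the raw round trip is only a couple of microseconds per write, but stretch the distance out and multiply by the number of synchronous writes in a busy transaction and the added wait time starts to bite.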
If you are thinking of trying to pitch Db2 Mirror to the bean counters, here is a useful table that IBM put together that encapsulates the three different ways clustering can be done:
As we pointed out in our initial coverage of Db2 Mirror, this setup can also be run in an active-passive mode where the two halves of the clustered system are doing different jobs against the database. Here is a graphic that you might find helpful:
If you want to get into the real nitty gritty of how Db2 Mirror works, then I suggest you read this file here.
There is no question that Db2 Mirror is going to be useful, and for large enterprises a price of $20,000 per core is not exorbitant compared to the $44,000 per core list price for the P20 tier or $55,000 per core for the P30 tier. But compared to the $2,295 per core price of the P05 tier or the $14,995 per core price of the P10 tier, it is, respectively, crazy expensive or very expensive. It seems odd that IBM did not offer tiered pricing for Db2 Mirror that was proportional to the pricing of the IBM i tiered license fees. Perhaps at some point in the future it will, particularly if enough of us complain about it.
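To put some rough numbers on that disproportion, here is a quick bit of arithmetic comparing the flat $20,000 per core Db2 Mirror price to the per-core IBM i license prices at each tier; the tier prices are simply the list prices cited above.

```java
// Quick arithmetic: the flat $20,000 per core Db2 Mirror price expressed as a
// percentage of the per-core IBM i license price at each software tier cited above.
public class MirrorPriceRatios {
    public static void main(String[] args) {
        double mirrorPerCore = 20_000.0;
        String[] tiers = {"P05", "P10", "P20", "P30"};
        double[] ibmiPerCore = {2_295.0, 14_995.0, 44_000.0, 55_000.0};
        for (int i = 0; i < tiers.length; i++) {
            System.out.printf("%s: Db2 Mirror is %.0f%% of the IBM i per-core license price%n",
                    tiers[i], 100.0 * mirrorPerCore / ibmiPerCore[i]);
        }
    }
}
```

The ratios run from roughly a third of the IBM i license price at the P30 tier to nearly nine times it at the P05 tier, which is the disproportion in a nutshell.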
Moreover, there has to be some way to make use of internal storage with Db2 Mirror. When VMware initially did VMotion teleportation of virtual machines, it required expensive storage area networks because it relied on their inherent flash copy capabilities and on the shared network linking servers to the SAN. But eventually it allowed VMotion to work on machines that used internal storage, and later it created hyperconverged storage, called Virtual SAN, that actually competed against SANs and that was based on clustered servers with a mix of disk and flash. IBM’s push toward SANs is going against the grain of the market, and the quicker it realizes that, the healthier its storage business will be, even if it can’t call it out in a line item.
RELATED STORIES
How IBM i 7.4 Improves Security
Deep Dive On IBM i 7.4 And IBM i 7.3 TR6 Hardware Limits
Power Systems Refreshes Flash Drives, Promises NVM-Express For IBM i
IBM i 7.4 Rolled Out, And IBM i 7.3 Tech Refresh Rolled Up