IBM Creates a Performance-Based Pricing Scheme for Software
July 31, 2006 Timothy Prickett Morgan
In an effort to make its software pricing methods more rational in a world where there are many different kinds of server processor architectures, IBM has put a stake in the ground and marked out a new pricing methodology for software that is more closely aligned to the performance of various processors than current per-socket or per-core pricing methods. The key words here are “more closely.” It is not a one-to-one ratio.

Here’s the deal. Effective November 10, IBM is withdrawing its current per-processor (by which IBM means a processor core) licensing charges on some 350 products and 1,300 part numbers that it sells through Software Group. This includes DB2 databases, WebSphere middleware, Lotus groupware, Rational development tools, Tivoli systems management programs, and a bunch of other things. (If you want to see the full list, go to this announcement letter from IBM.) Instead of this per-core pricing, IBM is moving to a new method that is based on a metric called the Processor Value Unit (PVU).

PVUs are not, strictly speaking, like MIPS on the mainframe, which are a very precise (if sometimes argued about) performance metric. IBM is trying to create processor performance tiers, rather than precise performance metrics, on which to base prices, mainly because it can’t do a lot of testing and it can’t create a system that is too hard to explain to customers, salespeople, and partners.

Basically, starting last week, all processor cores in popular servers are being given a rating that allows all of those 1,300 part numbers to have the same price for 100 PVUs of capacity. The PVU ratings take into account the exceptions IBM has made in the past year for dual-core AMD Opteron, Intel Xeon, and IBM PowerPC 970 processors, as well as multiple-core Sun Microsystems “Niagara” Sparc T1 processors. These machines had core-neutral pricing, although IBM would not put it that way, since it has to maintain that a core is a processor, which has been its line for the past five years.

Under the new scheme, each Power5, PA-RISC, Itanium, and UltraSparc-IV core will be given a PVU rating of 100. Opteron, Xeon, and PowerPC cores, as well as the cores in quad-core Power5 modules (which run at slower clock speeds than regular Power5 chips), are rated at 50 PVUs. (Which means that, per socket, the dual-core versions of these chips are also rated at 100 PVUs.) Sun’s T1 chip is rated at 30 PVUs per core, and the chip can have four, six, or eight cores. All single-core chips have a PVU rating of 100, and any other chip not expressly mentioned by name is rated at 100 PVUs.

On System z mainframes, the PVU ratings affect only Linux-based engines, and they do not change the way operating systems or databases are bundled or priced on mainframes. Similarly, operating system pricing for AIX is based on a tiered, per-engine charge, and i5/OS and its integrated DB2/400 database are bundled with the hardware (with varying processor activations) and carry per-engine charges for further activations. None of this will change, at least for now.

So where do these PVU ratings come from? IBM says that it will use an array of benchmark tests, such as the TPC-C online transaction processing test, the SPECint CPU test, and the SPECjbb Java transaction processing test, to reckon, in a very rough way, what the PVU ratings of future processors will be. The first chip that will be formally tested and given a PVU rating in this manner will be the quad-core “Cloverton” kicker to the “Woodcrest” Xeon 5100 from Intel, which was just pulled into the fourth quarter of this year.
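To make the tiering concrete, here is a little sketch, in Python, of how the per-core ratings laid out above could be tallied up for a given box. It is just an illustration of the arithmetic as I understand it, not anything IBM ships; the ratings table and the 100-PVU default for unnamed chips come straight from the announcement, but the function names and the example servers are my own.

```python
# Illustrative sketch, not an IBM tool: per-core PVU ratings as described above,
# plus a helper that totals PVUs for a server so a price per 100 PVUs can be applied.

PVU_PER_CORE = {
    "Power5": 100,
    "PA-RISC": 100,
    "Itanium": 100,
    "UltraSparc-IV": 100,
    "Opteron": 50,        # dual-core x86 and PowerPC 970 cores are rated at 50 PVUs apiece
    "Xeon": 50,
    "PowerPC": 50,
    "Power5-QCM": 50,     # quad-core Power5 modules run at lower clock speeds
    "Sparc-T1": 30,       # Niagara: four, six, or eight cores per chip
}

DEFAULT_PVU = 100         # single-core chips, and anything not named, are rated at 100 PVUs


def server_pvus(chip: str, sockets: int, cores_per_socket: int) -> int:
    """Total PVUs for a server: the per-core rating times the number of cores."""
    rating = PVU_PER_CORE.get(chip, DEFAULT_PVU)
    return rating * sockets * cores_per_socket


def license_cost(chip: str, sockets: int, cores_per_socket: int,
                 price_per_100_pvus: float) -> float:
    """License cost when a product carries one price per 100 PVUs of capacity."""
    return server_pvus(chip, sockets, cores_per_socket) * price_per_100_pvus / 100


if __name__ == "__main__":
    # A two-socket, dual-core Xeon box: 4 cores x 50 PVUs = 200 PVUs.
    print(server_pvus("Xeon", sockets=2, cores_per_socket=2))
    # An eight-core Sun T1 chip: 8 cores x 30 PVUs = 240 PVUs.
    print(server_pvus("Sparc-T1", sockets=1, cores_per_socket=8))
```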
If the quad-core Cloverton chip does 65 percent more work than the dual-core Woodcrest, as it should, then I would guess it would have a PVU rating of around 165. And that would mean that IBM’s software should cost, in theory, 65 percent more on machines using this chip than on ones using the Woodcrest chips. But not so fast. Apparently, IBM will not just be doing straight multiplication to figure out the PVU ratings. In fact, IBM sources tell me that the company will see what the initial performance point of a new chip architecture is, try to gauge how it will play out as the chip family evolves, and then reckon some round number that is broadly representative of that family. And then IBM will go to a third party to audit these numbers; the most obvious choice would be Ideas International, which has a very sophisticated performance database for servers across all kinds of architectures.

To my way of thinking, it would have been a lot cleaner if IBM had done something that was mathematically and irrefutably logical, and you might even agree with that sentiment, particularly if you are not the one in the software deal negotiations. First, one might assign relative performance ratings to all current chips at their highest frequencies. Then, set a price per PVU for each chip for each of the IBM software products. As processors improve in performance, either through the addition of cores, an increase in clock speed, or a combination of both, the PVU ratings for each processor would get higher. So, for instance, a Power5+ chip running at 2.3 GHz might be rated at 100 PVUs, and then, in 2007, when the Power6 chips come to market in maybe the 4 GHz to 5 GHz range and delivering about twice the performance (as is expected), each Power6 chip would be rated at 200 PVUs. This way, the price of the software and the performance of the server have a relatively tight coupling.

However, there are problems with this approach, as Oracle learned with its “universal power units” scheme, which was launched in 2000 and withdrawn a year later. Most companies that had been used to per-server or per-processor pricing schemes on software revolted, and said that Oracle was just getting more money for no additional value. This thinking comes about because people view software as a string of 1s and 0s, not as a string of 1s and 0s doing a certain amount of work on a specific machine.

According to sources, IBM has done lots of surveys with customers, and it knows it cannot do a strict PVU rating and then charge a constant price for a piece of software, just counting up the PVUs. Customers want to get price decreases on software. But, as I have said in past articles, there is no way that software can scale down along the Moore’s Law curve of improving electronics, which says processors will double in performance every 12, 18, or 24 months, depending on the architecture. No software maker can cut its revenue in half every two years and survive. This is why I said that core neutrality for dual-core chips was generous on the part of those software makers who adopted this stance, but that it would not hold for chips with four or more cores per socket.

Well, if IBM wanted to give customers a price break over time, as it does with hardware, it could simply cut the price per PVU for software. When I suggested this, the IBMers I spoke with said that the company could not do that, since it would mean companies with older iron would be able to get the software cheaper, too.
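To put some rough numbers on that tension, here is a quick back-of-the-envelope sketch, again in Python, comparing three ways the Power5-to-Power6 jump could be priced: strict PVU scaling at a constant per-PVU price, a per-PVU price cut that also reaches older iron, and the understated-PVU approach IBM appears to favor. The $265-per-PVU figure is the DB2 example worked through below; the scenarios themselves are my speculation, not IBM’s published math.

```python
# Back-of-the-envelope arithmetic, not IBM's actual pricing model.

PRICE_PER_PVU = 265.0   # per-PVU price from the DB2 example discussed below

def license_price(pvus_per_core: float, price_per_pvu: float) -> float:
    """Software cost per core at a given PVU rating and per-PVU price."""
    return pvus_per_core * price_per_pvu

# Scenario 1: PVUs track performance strictly, per-PVU price held constant.
power5_plus = license_price(100, PRICE_PER_PVU)          # $26,500 per core
power6      = license_price(200, PRICE_PER_PVU)          # $53,000 per core, the Oracle UPU revolt

# Scenario 2: PVUs track performance, but the per-PVU price is cut in half
# when the new generation ships, and older iron gets the cut, too.
power6_cut      = license_price(200, PRICE_PER_PVU / 2)  # back to $26,500 per core
power5_plus_cut = license_price(100, PRICE_PER_PVU / 2)  # $13,250 per core on existing machines,
                                                         # which IBM says it cannot allow

# Scenario 3: hold the per-PVU price constant and understate the PVU rating instead,
# say 150 PVUs rather than 200 for a chip that doubles performance.
power6_drift = license_price(150, PRICE_PER_PVU)         # $39,750 per core: a 50 percent price
                                                         # bump for a 100 percent performance gain

print(power5_plus, power6, power6_cut, power5_plus_cut, power6_drift)
```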
To which I would counter that IBM could put in a price break for each new processor generation of each type of processor. For instance, DB2 UDB 8.2 Enterprise Edition currently costs $33,125 per processor for Unix, Windows, and Linux servers; if you take out the 12-month Software Maintenance piece of that price (which provides updates and tech support), it costs $26,500. On a dual-core Power5 machine, where each core is rated at 100 PVUs, that works out to $265 per PVU. If you were just going to do this in a straight-line way, then each Power6 core next year would be given a rating of 200 PVUs and the then-current DB2 would still cost $265 per PVU, yielding a cost per core of $53,000. As you might imagine, such a pricing scheme would make many customers freak, as Oracle’s Universal Power Units did, but they could at least move to newer machines with fewer cores. Still, IBM could set DB2 prices on Power6 boxes 25 or 50 percent lower, and leave them the same on the Power5 machines.

Assume IBM does a 1.5 multiple for PVUs in the jump from Power5 to Power6, giving the Power6 chip a rating of maybe 150 PVUs (which is about what I think Big Blue will do, based on my conversations last week), and the numbers can all work out to be roughly the same. IBM wants to hold software pricing constant and then rejigger PVUs in such a way that, over time, they drift away from actual performance numbers. IBM is pretending that some of the performance is not in the box, which is something it has been doing at a rate of about 10 percent per mainframe announcement in its zSeries and System z line. (Over time, the gap between MIPS and Metered Service Units, or MSUs, which are used to price some of its mainframe software, has been growing.) IBM doesn’t want to discount software on older gear, so forget that idea.

The real problem is that software vendors are used to raising their prices over time, and they are also used to companies buying lots of excess capacity for peaks, capacity that they do not typically use in production. In a world of virtualized, highly efficient servers, where companies want to pay for only what they use, there is no easy way to reconcile a software vendor’s need to make profits and raise revenues with a company’s desire to buy more powerful machines but get a free ride on the software charges. Something has got to give.

Maybe we should think of software as a feature, like an Ethernet card, and vendors are not entitled to extra money when you use it more efficiently or when you just plain use it more. Maybe software is like beer, and if you fill up larger glasses with it, you need to pay more for it. Maybe Richard Stallman is right, and software is like air: it smells bad, but it is free, and if you want someone to help you clean the air, then you should pay for that. Or maybe we should only be able to rent software, and never get a perpetual license at all, so that as we shift from machine to machine, the rent changes based on how big the apartment is that the software is living in. I still think that memory-based pricing, where vendors charge regardless of architecture, based on how much memory a program uses as it runs over time, is the best, cleanest, cross-architectural answer. We’ll see.

RELATED STORIES

The X Factor: Is Memory-Based Software Pricing the Answer?

VMware Goes for Per-Socket Pricing, But Can It Hold?

Oracle’s Multicore Pricing: Right Direction, Not Far Enough