Lean, Mean Green Machines
February 17, 2004 Timothy Prickett Morgan
In the simplest terms, a computer takes energy, in the form of electricity, uses it to store and manipulate information, and releases the vast majority of the energy as heat, noise, and light. The computers we love so dearly are burning far too much electricity and creating far too much heat. They are among the most inefficient devices ever invented, and the industry has had very little incentive to make them more efficient. The Information Age has not yet learned from the mistakes of the Industrial Age.
The heat of computers comes from chips and mechanical components, the noise comes from fans and disks, and the light comes from blinking indicators and monitors. And once a computer has generated its heat, whether in a dense data center or in a cubicle where users sit at PCs and next to servers, the energy cycle doesn't end there. That heat has to be removed so the computers and the people near them can continue functioning properly. This, ironically, takes more fans and air conditioners, and therefore more electricity.
Anyone who has been in a computer room for an extended period of time knows that servers are loud and hot. (I've used my own server room, which has a half-rack of Windows and Linux machines in a cluster, to dry fruit and wet clothes.) And while the electricity bills for running and cooling computers are generally not part of an IT budget, a company with lots of computers has to pay for all that juice.
High electricity use has another annoying face: As I type this essay, I am on an airplane heading from New York to San Francisco, and I have nearly burned through the second battery in my laptop. Computing is only as effective as the power available to the computer, and I am about ready to lose power. In this regard, I am more like users in developing countries, who can't count on consistent power generation. In many countries, computing is only possible through the use of off-grid generators. What is an emergency for us is everyday life for a lot of businesses. This limits the usefulness of computing. Millions of computers burning less power would put less of a strain on the grid, making computing, in the aggregate, more useful.
IT professionals usually don't like to mix politics with shop talk, but for this essay, such mixing is not only necessary, it is exactly the point. The electricity that we waste on our computers is not free. Every time we have a blackout or a brownout on the antiquated electrical grids of the United States or Europe, which were not designed to handle the loads that modern conveniences require, we pay in lost business and disruptions to our lives. Every week, month, and year, the tax man takes some of our salaries so the various militaries of the West can secure oil supplies in the Middle East, oil that is still used to generate a small portion of our electricity. To be fair, oil is used mostly for transportation, but only because we have so many cars. If we had fewer cars and more mass transportation in Western economies, we could be burning oil, which is cleaner than coal, to make electricity. Since the OPEC oil embargo of 1973, Western economies have instead adopted coal as a fuel for generating electricity, because it is cheaper and more readily available in the West; but it is also dirtier, so we pay by having a more polluted environment. Every time we upgrade to that larger monitor, faster processor, or peppier disk drive that we end up using very inefficiently most of the time, we consume far more electricity than is necessary. In many other ways, we pay time and again for the power we consume so that we can have computing resources, which are very fussy about the quality of their juice. The situation is much larger than the electric bill that IT shops have, by and large, been able to dodge.
So just how much electricity are we talking about? On a global basis, it is really hard to say for sure. According to eTForecasts, a market research firm that tracks computer and Internet technology adoption, there were approximately 757 million PCs in use worldwide by the end of 2003, with about 275 million of those in the United States. The installed base of PCs worldwide will hit nearly 1.1 billion by 2006, eTForecasts predicts. How much electricity these devices consume depends on your assumptions.
An often-cited article written by Peter Huber and Mark Mills in Forbes in May 1999 tried to pin down the cost of running all those PCs. (I ran across it again in a new book by Jeremy Rifkin called The Hydrogen Economy, which discusses how power distribution may go the way of the Internet: mass distribution instead of centralized control.) Based on data in the United States, Huber and Mills reckoned that it took a pound of coal to create, package, store, and move 2 MB of data. They also explained that while processors and other circuits were getting smaller and more efficient, demand for ever faster circuits (which are anything but efficient) was growing at a much higher clip. They figured five years ago that a PC required about 1,000 watts of power to operate (and this was using 1999's slower chips and smaller screens). At the time, the average home Internet user was online about 12 hours a week, which worked out to 624 kilowatt-hours a year. If you assume that Internet and PC use has gone up in the past five years, you're probably talking about 1,000 kilowatt-hours per PC. Back in 1999, consumers in the United States accounted for about 50 million PCs, with the remainder being business PCs. The ratio is probably no longer the 1:4 consumer-to-business split it was in 1999, but closer to 1:2. That ratio is important because business PCs run 40 or more hours a week instead of a dozen. That means that a business PC could be using as much as 2,000 kilowatt-hours a year to operate. If you extrapolate these ratios and power consumptions worldwide, that's 250 billion kilowatt-hours for home PCs and 1 trillion kilowatt-hours for business PCs. You heard that right: 1.25 trillion kilowatt-hours a year. That's how much energy goes into the PCs, and in the summer months, that is how much energy must be removed from office and home environments in warm climates. (To be fair, in winter months in temperate climates, all those PCs actually help save on heating bills. There are probably more efficient ways to heat offices, however.)
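For readers who want to check my arithmetic, here it is as a little Python sketch. The consumer-to-business split and the per-machine kilowatt-hour figures are the same rough assumptions laid out above, so treat the output as an order-of-magnitude estimate rather than a measurement.

    # Back-of-the-envelope tally of worldwide PC electricity use, based on
    # the eTForecasts installed base and the rough per-machine assumptions above.
    total_pcs = 757_000_000          # installed base at the end of 2003
    consumer_pcs = total_pcs / 3     # roughly a 1:2 consumer-to-business split
    business_pcs = total_pcs - consumer_pcs

    consumer_kwh = 1_000             # home PC, a dozen or more hours a week
    business_kwh = 2_000             # business PC, 40 or more hours a week

    home_total = consumer_pcs * consumer_kwh        # about 0.25 trillion kWh
    office_total = business_pcs * business_kwh      # about 1 trillion kWh
    print(home_total + office_total)                # roughly 1.25 trillion kWh a year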
Now let's talk about servers. They're bigger, they're badder, and they use a lot more power than a PC. Thank heavens there are far fewer of them. While the various (and better known) consultancies like IDC and Gartner do an admirable job of tracking server shipments each quarter, they do not talk much about installed bases. By my estimate, there are probably around 18 million servers in the world still running; many more have been thrown into dumps in the past two decades. About 500,000 of them are AS/400 and iSeries machines. (That gives an average of 42 PCs per server, which is reasonable.)
Reckoning how much electricity these servers use is very difficult, but we do know a few things. Servers must use a lot more juice than PCs, they run continuously, and they do not generally have features that allow them to consume less electricity when they are only using a portion of their full computing capacity. With the average server utilized at only 10, 15, or 20 percent of its total CPU capacity, according to various industry estimates, the remaining 90, 85, or 80 percent of the juice that servers eat and then exhale as heat does absolutely nothing useful. Add on top of this the cost of keeping the data center cool, and perhaps you begin to see that there is, indeed, a problem.
But the question at hand is how much electrical power these machines consume. While most PCs are uniprocessor machines, the average server probably has around two processors. Most servers have a few disk drives, and they tend to be high-end SCSI devices with high rotational speeds. They tend to have lots of fans, particularly in dense rack-mounted environments. And since the power required to run a fan goes up with the cube of its rotational speed, the small, fast-spinning fans used in most servers actually burn a lot more power than a sensible larger fan with more airflow would consume. For the sake of argument, let's assume that the average server in the world consumes only twice as much power as a top-end PC that burns 1,000 kilowatt-hours a year. So if a server consumes 2,000 kilowatt-hours on average per year and there are 18 million servers in the world, a rough estimate is that 36 billion kilowatt-hours of electricity are consumed in the world's servers each year. And because such a small portion of that electricity is used to do anything useful, this is a staggering waste. On top of that, nearly the same amount of energy again, maybe 60 to 70 percent, has to be consumed to get rid of the heat these servers generate.
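Again, a few lines of Python make the assumptions explicit; the 18 million servers and the 2,000 kilowatt-hours per server are my own rough guesses from above, and the cube relationship is the standard fan affinity law, not a measurement of any particular server.

    # Rough worldwide server electricity estimate, plus the fan cube law.
    servers = 18_000_000                 # rough guess at the installed base
    kwh_per_server = 2_000               # twice a 1,000 kWh/year top-end PC
    print(servers * kwh_per_server)      # 36 billion kWh a year

    # Fan power rises with the cube of rotational speed, so a small fan spun
    # twice as fast to move the same air burns roughly eight times the power.
    speed_ratio = 2.0
    print(speed_ratio ** 3)              # 8.0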
To be sure, there is a lot of wiggle in these numbers. But the magnitudes in these estimates are real. While PCs and servers have come down dramatically in price, their inefficient design and over-powered components make them very costly tools. Assume that electricity costs 10 cents per kilowatt-hour (a very generous price, at least compared with the 15 cents I pay in New York City; electricity prices range from 5 cents to 15 cents per kilowatt-hour in the U.S., and can be double or triple that in Europe and Asia). Also assume a 1.6 ratio for total power consumption (which server blade maker RLX Technologies uses to calculate the secondary costs of air conditioning). On those assumptions, the world's PCs and servers together consume 2.5 trillion kilowatt-hours of energy (for themselves and for related environmentals) in a year, or $250 billion in cold, hard cash. Assuming that a server or PC is only used to do real work about 15 percent of the time, that means about $213 billion of that is absolutely wasted. If you were fair and added in the cost of coal mining, nuclear power plant maintenance and disposal of nuclear wastes, and pollution caused by electricity generation, these numbers would explode further.
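The dollar figures fall straight out of those assumptions, as this sketch shows; change any of the inputs and the totals move, but not by enough to change the conclusion.

    # Turning kilowatt-hours into dollars, using the assumptions above.
    total_kwh = 2.5e12           # PCs and servers, including the 1.6x environmental factor
    price_per_kwh = 0.10         # a generous 10 cents per kWh
    useful_fraction = 0.15       # machines doing real work perhaps 15 percent of the time

    total_cost = total_kwh * price_per_kwh        # $250 billion a year
    wasted = total_cost * (1 - useful_fraction)   # roughly $213 billion of it wasted
    print(total_cost / 1e9, wasted / 1e9)         # 250.0 212.5 (billions of dollars)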
This kind of talk will make most business people lose their minds, but it is honest, and an economist would have to reckon these costs into PC and server designs. Such an intrepid economist would also examine the enormous amounts of electricity that go into producing microprocessors and electronic components, as well as the metals and plastics that go into these machines. While it is hard to quantify, moving parts through the PC and server supply chains by planes, trains, trucks, and ships and manufacturing components into subassemblies and then finished machines probably consumes a staggering amount of energy. Customers pay these costs with the initial purchase price of their machines, and efficiencies wrung from these manufacturing and distribution processes that go beyond installing supply chain management software and into the very structure of the elements of a PC or server could save tremendous amounts of energy and therefore lots of money. Imagine, for instance, the energy savings if PCs came in standard sizes and shapes made of recycled materials. Just as it is ten times as hard to get a new customer as it is to keep an existing one, it is orders of magnitude more expensive to create new plastic or forge new steel than it is to recycle existing materials. But I digress from the mere tally of electricity usage and costs for the world's PCs and servers.
Even if the estimate for the base $250 billion in electricity costs relating directly to computers were off by an order of magnitude (which I do not think it is), there would still be a tremendous incentive to make computers more efficient. The estimates above were just for PCs and servers. Now add in storage arrays, printers, routers, switches, hubs, embedded controllers in factories and buildings, and countless other computers that are weaving their way into all kinds of devices and places where you wouldn't have dreamed of putting a computer before. Do the economics on all of these computers now.
How many power plants could be closed if PCs and servers were designed not to maximize profits for Intel, Microsoft, Hewlett-Packard, IBM, Dell, Sun Microsystems, and others, but to minimize energy use while maintaining performance at adequate levels? How much energy and money could be saved if machines were designed from the get-go to be misers and only used juice when they needed it? How much longer would battery life be if the engineering of electronic devices put minimizing their power profile ahead of all other features and functions? These are good questions that are well worth asking. People are beginning to ask them.
Why Green PCs and Servers Are Needed
There are important repercussions to more energy-efficient PC and server designs. In the Western economies, people and corporations perceive electricity as relatively cheap. They have bigger issues to wrestle with, and electricity is so ubiquitous and transparent that spending several hundred bucks per machine per year is just not an issue.
This is certainly not the case for the world's poor, for whom every dollar counts and for whom just the electricity (forget about a PC) is too expensive and may not even be available through a power grid, as we in the West are used to. Many villages in the world would be lucky to have enough electric power to light a few bulbs at night and to run a refrigerator. You can't bring the information highway to people who don't have juice. If you want to bring the wonders of the Internet and computing to a larger percentage of the world's population, PCs and servers have to be less expensive and they have to have a lower energy profile. Ideally, PCs and servers and their associated peripherals would have such low power usage that solar, wind, and other renewable energy sources could be used to keep a whole bunch of them going, and still leave enough juice for remote villages to do other useful things, like provide light for children to study or a replacement for cooking fuel. It may come as a shock to you, but billions of people on this planet are still using wood or dung as their primary fuel. Electricity, natural gas, and gasoline are not even options. They are dreams.
As you can see, a green approach to PC and server engineering and manufacturing isn't just about saving rich cultures money, although that is a commendable thing. It is about encouraging people to think before they make something and making consumers think before they buy something. But it is also about making poor cultures rich with information and energy. To put it bluntly, the less coal and oil Western countries use to make electricity, the more the developing world can use, or the less we all have to use. The World Wide Web is only as wide as the energy grid and the availability of fuel, and only as deep as people's pockets. Some pockets are a lot deeper than others. With a sensible approach, however, the Web can be a lot wider and deeper, and make a difference in the lives of billions of people who are isolated by their geography and their poverty. No one is so poor that they should not be able to participate in the Information Age.
Green Engineering
What this means, of course, is that the whole PC and server industry has to stop figuring out how many SPECints or flops it can cram into a box and start designing machines gauged by the flops per watt of energy used to power and cool the box, weighing that against the aggregate amount of power that an application actually requires. By the way, that means that chips from X86 clone makers Transmeta and VIA Technologies are a lot greener than they get credit for. Samsung's ARM processors are also very green components. IBM's Power line of processors offers about as much oomph as an Intel Xeon or Itanium processor in terms of SPECints or flops, but does so with chips that are much smaller and consume a lot less power. There's a reason why 64-bit PowerPC chips are being chosen by game console makers as well as by academic and government labs building some of the world's high-end supercomputers.
Some IT revolutions start in academia, some in the server room, and some on the desktop. The idea of green machines is not yet a revolution, and it may never become one. But it got its start with thin clients in the mid-1990s. Thin clients were the first computers made by vendors who wanted to talk less about creeping featurism and more about providing only the functions a very specific set of customers needed, at a low price and with the fewest possible components. Only incidentally did this also result in low energy usage, but green engineering works that way: if you start optimizing for a specific set of functions, you also lower energy use, because the resulting device leaves no room for waste. In a sense, the very nature of a general-purpose, commodity computer based on a single Pentium 4 ecosystem is anathema to efficiency and very wasteful. If all you want to do is hammer nails, you don't design a sledgehammer.
Small, relatively powerful thin clients based on embedded processors have been around for nearly a decade. Advances in processor and memory technology have continued apace, and these small machines are now, believe it or not, suitable platforms for creating modestly powerful commodity servers with a very low power profile. However, these thin client motherboards are severely I/O constrained.
Today, thin clients have proved their usefulness, and desktop PC design is moving in this direction. Desktops are definitely greener than servers right now, at least from an engineering angle, and this is mainly because desktop vendors want to bring down the number of components in a system and the total cost of the system in order to drive up sales volumes. Many of the entry PCs that HP and Dell deliver today are based on so-called Mini-ITX motherboards, which measure 17 cm by 17 cm and which pack all of the components of a reasonably powerful machine onto that single small board while consuming no more than 60 watts for a fully loaded system with 1 GB or 2 GB of main memory and a decently powerful 3.5-inch ATA disk drive. With a lower capacity memory card (128 MB or 256 MB of main memory is sufficient for most desktops and infrastructure servers), no PCI peripherals (do you really need that modem when you have dual LAN ports?), and 2.5-inch laptop drives, which have a similar duty cycle and are better at handling shock than fatter and hotter 3.5-inch drives, you can probably cut the power consumption in half or better. Best of all, these machines need only small fans to cool themselves, and with modestly powerful C3 "Eden" 600 MHz processors from VIA Technologies, they do not need a fan at all.
The VIA C3 X86-compatible processors are aimed at embedded markets, but they make decent PCs and servers. The "Nehemiah" C3 core runs at 1 GHz. It has a 16-stage pipeline, branch prediction, and 64 KB of L2 cache on chip, as well as a data encryption unit and a floating point unit. This core consumes about 15 watts at maximum, and using VIA's PowerSaver 1.0 technology, which dials down the clock multiplier when a machine is not being pushed hard by applications, average power consumption can drop to 11 watts. A variant of the C3 chip designed for laptops, called "Antaur," was announced last summer using new PowerSaver 2.0 circuitry that dropped peak power usage of the C3 chip to 11 watts, with average consumption in the range of 8 watts. PowerSaver 2.0 can dial down both the voltage of the chip and its multiplier when the Antaur chip is not being stressed. Eventually, the Antaur chip will make its way onto Mini-ITX motherboards, and it may even be the main chip used on the forthcoming Nano-ITX boards, which are due any day now from VIA and which cram a complete system onto a 12 cm by 12 cm board. If VIA is smart enough to put dual Gigabit Ethernet ports on this thing, it will be a great green server.
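To give a feel for what technologies like PowerSaver do, here is a toy demand-based scaling policy in Python. This is not VIA's actual algorithm (or anyone else's); the multiplier steps and utilization thresholds are made up for illustration. The point is simply that the clock follows the load instead of running full-out.

    # A toy illustration of demand-based clock scaling, in the spirit of
    # PowerSaver: drop the multiplier when the machine is idle, raise it
    # when applications push it hard. Thresholds and steps are invented.
    def pick_multiplier(cpu_utilization, multipliers=(4, 6, 8, 10)):
        """Return a clock multiplier for a recent CPU utilization between 0.0 and 1.0."""
        if cpu_utilization > 0.80:
            return multipliers[3]    # full speed under heavy load
        if cpu_utilization > 0.50:
            return multipliers[2]
        if cpu_utilization > 0.20:
            return multipliers[1]
        return multipliers[0]        # an idle machine coasts at the lowest clock

    print(pick_multiplier(0.05), pick_multiplier(0.95))   # prints: 4 10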
There are literally hundreds of small and, for now, obscure companies creating Mini-ITX PCs. Countless others are forgoing established or whitebox vendors and just building their own machines, putting whole systems inside shoe boxes, lunch boxes, and even more surprising places. However, a few companies are quite serious about making real business machines from these baby motherboards. For instance, EmergeCore, a small company based in Boise, Idaho, has launched a small-footprint server called the IT100 and nicknamed "IT-in-a-box." The machine uses a 533 MHz "Crusoe" Transmeta TM5600 processor and supports 128 MB of main memory, a 20 GB 2.5-inch hard disk, and a 32 MB PCMCIA flash disk for OS and data storage. The box has a four-port 10/100 Mbit Ethernet switch and one 10/100 Mbit WAN link embedded, as well as a wireless link. It runs an enhanced version of Linux created by EmergeCore called CoreVista, and includes all the application software needed to do email, Web, print, and file serving. It needs only a 60 watt power supply, and it does not require a fan at all to keep cool. It is no larger than a thin client and is a suitable box for a small business or for use as a departmental server at larger companies.
Even Transmeta itself has joined the Mini-ITX bandwagon with the launch of its latest Crusoe and "Efficeon" processors. (The Efficeon is the kicker to the Crusoe chips, which were elegant in their design but which did not provide as much computing power as similarly priced Intel Pentium chips.) In January, Transmeta announced two new Crusoe chips and at the same time announced that it would deliver Mini-ITX boards for developers based on the Crusoe processors. Clearly, Transmeta thinks there is something to all of this Mini-ITX talk, and it probably won't be long before Efficeon processors end up on Mini-ITX boards.
The new Crusoe chips are dubbed the TM5700 and TM5900, and they are kickers to the TM5800, which made its debut in June 2001 running at between 700 MHz and 800 MHz using a 130 nanometer process at Transmeta's foundry partner, Taiwan Semiconductor Manufacturing Company. Transmeta eventually cranked up the clock on the TM5800 to 1 GHz, but the Crusoe processor has been plagued by a lack of enthusiastic vendor support among the big players in the workstation and server markets, who are more interested in selling the Pentium and Xeon solutions that customers are familiar with than in taking risks. Transmeta is pressing on with innovation, and even though it killed off the TM6000 follow-on to the TM5000 series of chips, it jumped right to the Efficeon TM8000, which will be shipping soon, and it continues to improve the TM5000 series.
The TM5700 and TM5900 processors are based on the same Crusoe core that was used in the TM5600 (the first Transmeta chip, which came out to much fanfare in January 2000) and the existing TM5800. These processors use a 128-bit VLIW (very long instruction word) processing technology that allows up to four 32-bit instructions to be processed per clock. (This VLIW approach is distinct from the multiple pipelines that Intel's X86 and various RISC processors use to make a chip process more than one instruction per clock cycle.) The TM5700 and TM5900 chips run at up to 1 GHz and have on-chip integer and floating point units, 64 KB each of data and instruction L1 cache, either 256 KB (TM5700) or 512 KB (TM5900) of on-chip L2 cache, a 64-bit DDR SDRAM main memory controller, and a 32-bit PCI controller. The TM5700 and TM5900 also differ from the TM5800 in that they include an on-chip northbridge, an I/O feature that is usually part of an external chipset. The TM5700 and TM5900 come in a 21 mm by 21 mm package, which is 50 percent smaller than the packaging on the 1 GHz TM5800.
The new Crusoes also have the LongRun power management technology, which allows the processor to automatically adjust its voltage and clock speed to meet the needs of the applications running on the chip, rather than just running full-out like most other processors sold today. This is exactly what VIA is doing with the Antaur chip and what Intel has been adding to its Pentium-M processors, which are now sold under the Centrino brand. These power saving technologies mean that, on average, a computer using them needs a lot less electricity and generates a lot less heat.
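A rough rule of thumb shows why adjusting voltage along with clock speed matters so much: the dynamic power of a CMOS chip scales roughly with capacitance times voltage squared times frequency. The ratios below are illustrative, not measured figures for any of these chips.

    # Dynamic CPU power scales roughly as capacitance x voltage^2 x frequency,
    # so cutting voltage along with frequency saves more than frequency alone.
    def relative_power(voltage_ratio, frequency_ratio):
        return (voltage_ratio ** 2) * frequency_ratio

    print(relative_power(1.0, 0.5))   # halve the clock only: about 50% of full power
    print(relative_power(0.8, 0.5))   # also drop voltage 20%: about 32% of full power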
It's hard to say how much the TM5700s and TM5900s will help Transmeta break into the big time, but the Efficeon probably has a better shot, at least for gigahertz-hungry users who don't want slow PCs and servers. (Slow is a relative term, particularly when you consider how little of their raw processing power most customers actually use.) The Efficeon TM8000 processor has VLIW units that, at 256 bits, are twice as long as those used in the Crusoe chips, which means they can process up to eight 32-bit instructions per clock cycle. The Efficeons have an integrated DDR SDRAM memory controller, but can also support ECC main memory, something that is a requirement for servers. The Efficeon also has a larger 1 MB L2 cache on chip and an integrated HyperTransport I/O interconnect that delivers up to 1.6 GB/sec of throughput (twelve times more bandwidth than the PCI interface on the Crusoe chip). That I/O bandwidth is going to make boards based on the Efficeons powerful enough (in terms of performance) for real server applications.
Clock for clock, the Efficeon will yield about twice the performance of a Crusoe. The chips are made using the same 130 nanometer process from TSMC and will run at between 1 GHz and 1.3 GHz, with a power range of 5 watts to 14 watts. (A fanless notebook needs a processor that eats 7 watts or less, which Pentium-M and Celeron-M processors can do when running at 800 MHz.) Transmeta has chosen chip maker Fujitsu to help it move to a 90 nanometer process in the second half of this year, which will allow it to jack up the clock speed to 2 GHz and still consume only 25 watts of power. In 2005, the Efficeons will move to a 65 nanometer process, and clock speeds could double yet again. Transmeta could also, if this green idea takes off, just keep making smaller and smaller Efficeons, hold the clock speed steady at around 1.5 GHz, and reduce power consumption radically. That may make as much sense as ramping up the clock for certain applications where power and heat are issues.
Hewlett-Packard is using the Efficeons in a modified ProLiant BL blade server that it is selling as a "blade PC," and Sharp and Fujitsu have said they will use the chip as well. But thus far, server makers have by and large steered clear. When Transmeta delivers a Mini-ITX board with the Efficeon, which was designed for real server workloads, the situation could get very interesting. (The word on the street is that Transmeta itself will deliver a Mini-ITX board, but if it doesn't, you can bet another board supplier will.) Still, even with all of this, the market for lean, green servers could simply fail to materialize.
When RLX Technologies debuted its Crusoe-based blade servers in 2001, both Transmeta and RLX were sure that they were going to change the world with dense, rack-mounted blade servers based on energy-efficient Crusoe processors. In a May 2001 whitepaper, RLX showed that it could pack 336 servers, each using a 633 MHz Crusoe, into an industry standard 42U rack. Getting the same number of processors stacked up using uniprocessor, 1U servers based on Intel's 800 MHz Pentium III processor would have taken eight racks. While the Pentium III machines offered a significant performance boost compared with the Crusoes, Web servers are all wickedly underutilized, so it almost doesn't matter (at least until you throw server virtualization into the equation). In any event, an RLX blade server consumed 15 watts of power under peak load and dropped down to 7 watts when idling. The Pentium III 1U server consumed 76 watts of power whether it was idling or not. Assuming electricity costs 7 cents a kilowatt-hour and that cooling takes an additional 60 percent of power on top of the juice used to run the blade or uniprocessor server, the annual power cost of 100 RLX blade servers was about $1,470, compared with $7,456 for the 1U, uniprocessor Pentium servers.
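Those annual figures are easy to reproduce with a few lines of Python, using the 7 cents per kilowatt-hour and the 1.6 cooling factor assumed above, and assuming the machines run around the clock all year.

    # Reproducing the RLX comparison for 100 servers running 24x7.
    def annual_power_cost(watts_per_server, servers=100, dollars_per_kwh=0.07,
                          cooling_factor=1.6, hours_per_year=8760):
        kwh = watts_per_server / 1000 * servers * hours_per_year * cooling_factor
        return kwh * dollars_per_kwh

    print(round(annual_power_cost(15)))   # about $1,470 for the Crusoe blades at peak draw
    print(round(annual_power_cost(76)))   # about $7,460 for the 1U Pentium III boxes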
The IT world yawned, because it didn't care about electricity. It wasn't part of the IT budget. But more frequent power outages in America and Europe, an increasingly fragile electric grid, and CEOs, presidents, and company owners who do worry about the electric bill could change that. RLX doesn't talk about power consumption much any more, and has shifted to Xeon-based blade servers, because educating customers about the Transmeta chip and its flops-per-watt benefits is a lot harder than just selling Xeon boxes on a flops-per-box basis.
Grid And Virtualization Are Green
Cutting down the size and power consumption of a PC or server board without sacrificing potential peak processing performance (something that the initial Transmeta, VIA, and Intel mobile chips certainly did sacrifice) is important, and these and other vendors may well bring chips and boards to market that can create green PCs and servers. This, of course, will not be enough.
Average utilization, even on a more modestly powerful (in terms of computing capacity) machine, is still going to be very low, particularly for servers that are used for small businesses and communities. A green machine is going to need to make use of two other technologies to drive up its utilization: grid computing and virtualization.
There's been a lot of buzz around grid computing in the past few years. Every PC sold today should come with vendor-authorized grid software installed, allowing end users to pick the research organizations and charities to which they can donate their excess computing power for free; alternatively, vendors should be compelled to set up an open CPU cycle exchange that would allow end users to sell their excess capacity on an open market. If most PCs and servers are running full-out and do not have sophisticated power management features, something useful should be done with all that excess capacity.
Imagine if rich Western nations could sell their excess computing capacity to developing nations at a fraction of the cost of actually having these nations invest in their own IT infrastructure to do sophisticated number crunching. Provided that countries did not engage in the development of weapons, giving this computing capacity away or charging a modest fee for it would be the decent thing to do. This approach would probably not make IT vendors happy, since they are relying on developing economies for revenue and profit growth, and almost by definition their unhappiness with such an idea would only serve to prove what a good one it is.
The software and standards exist for such a universal grid connection for PCs and servers. Someone just has to see that it is necessary and then set about getting it done. The open source Globus Toolkit and Sun Microsystems' open source Grid Engine project are two good places to start in terms of laying down the core technology to enable this.
While IT managers and virtualization hardware and software makers are looking at virtualization to get the most out of their hardware investments, no one is talking about first getting the most out of the electricity these machines burn and then virtualizing once they are efficient. We need to do both, not one or the other.
Virtual machine partitioning helps cut down on the electric bill in a big way. If a server is running at 15 percent capacity and burning 2,000 kilowatt-hours a year, consolidating five such servers onto one physical machine drives utilization up to around 80 percent, eliminates 8,000 kilowatt-hours of electricity use per year, and creates a much more flexible machine that can respond better to changing workloads. Even modestly powered Mini-ITX or Nano-ITX servers could use virtualization to drive up usage, particularly as they get more powerful (in terms of performance).
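The consolidation arithmetic in the paragraph above looks like this; the utilization and kilowatt-hour figures are the same rough assumptions used throughout this essay.

    # Consolidating five lightly loaded servers onto one physical machine.
    servers_before = 5
    utilization_each = 0.15
    kwh_per_server = 2_000

    kwh_saved = (servers_before - 1) * kwh_per_server       # 8,000 kWh a year
    utilization_after = servers_before * utilization_each   # around 75 to 80 percent
    print(kwh_saved, utilization_after)                     # 8000 0.75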
What we really need is a standard virtualization environment for servers, and maybe for PCs, too. And maybe it should be an open source implementation. At a cost of $2,500 per server, VMware's well-regarded GSX Server virtualization software for Windows and Linux servers is too pricey for such green servers, particularly when a single server costs maybe $750 to $1,000. The math works out that even this is a decent price, but the odds favor people behaving as if virtual machine partitioning should be free, or nearly so, just as symmetric multiprocessing is on servers today.
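Here is one way that math could work out, using the consolidation example above, a $1,000 server, and the 10 cent, 1.6-factor electricity assumptions from earlier in this essay; these are my rough figures, not VMware's pricing math.

    # Does a $2,500 virtualization license pencil out? A rough sketch.
    license_cost = 2_500
    server_cost = 1_000                       # top of the $750 to $1,000 range
    servers_retired = 4                       # five workloads consolidated onto one box
    kwh_saved = 8_000                         # from the consolidation example above
    power_savings = kwh_saved * 1.6 * 0.10    # cooling factor and 10 cents per kWh

    net = servers_retired * server_cost + power_savings - license_cost
    print(net)                                # 2780.0, so the license still comes out ahead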
While most people don't know this, the University of Cambridge is developing an open source virtual machine partitioning hypervisor called Xen for X86 machines, which is distributed under the GNU General Public License. Right now this hypervisor, which can host multiple guest operating systems on a single machine, supports Linux 2.4. A port to Windows XP is being done in conjunction with Microsoft, and a port for FreeBSD 4.8 is being planned.
IBM could create very efficient OS/400 and Unix servers, even using its current Power processors, if it made some changes to the design. The Power5 and Power6 generations of processors are said to be much more energy efficient than the current Power4 machines, something that is made possible through the advent of new chip technologies that will allow IBM to make its Power processors very small, which means it can create low-power versions with modest clock speeds as well as very powerful versions with very high clock speeds. IBM is working on a Power-derived chip called “Cell” for Nintendo, Sony, and Microsoft game consoles that will be very powerful and that could be used to create a very efficient line of servers. IBM has to ensure now, before it is too late, that these machines can run OS/400 if it wants to create low-power iSeries boxes.
While Unix is well known in the Western economies, and Linux is coming on strong the world over, the developing world needs a simpler, less costly, integrated solution. OS/400 is a perfect fit for such customers. However, given the high initial cost of the iSeries and its rather high power consumption profile, the prospects seem remote for small businesses in China, India, Indonesia, and other developing areas, where the consistency of power and its high cost are issues. Call me crazy, but I can envision a very-low-cost iSeries based on the Cell processors, with sophisticated power management features, Power processors that can scale back their consumption based on the availability of electricity, and complete systems that might even run off a combination of solar cells and wind power. This is the kind of computer that the developing world needs in order to do business. It might even be something Western businesses would take a shine to as well.