Don’t Be A Blowhard
March 22, 2021 Timothy Prickett Morgan
One of the things that made the AS/400 a great system, as well as the System/32, System/34, and System/36 before it, was that there were entry machines that had enough oomph to support the data processing and storage needs of small businesses, within a reasonable budget, and in a system that didn’t need a datacenter or even a data closet. They could be tucked under a desk, or left to run beside it.
Way back in the dawn of time, there were special machines that were even smaller and quieter than the original AS/400-B10 and AS/400-B20. Think of the AS/400 C04 and C06 from 1990, the D02, D04, and D06 from 1991, the E02, E04, and E06 from 1992, and the F02, F04, and F06 from 1993. The AS/400-150 from 1996, the AS/400-170 from 1998, and the AS/400-250 and AS/400-270 from 2000. The Model 10s and the Model 20s, which have more expansion, are also often used as deskside or office environment machines.
These machines persist because they were well made and they do what is necessary for small businesses that think very, very little about IT. Such companies probably don’t even know what the state of the art in information technology is – and that is the beautiful thing about the System/36 and AS/400 and early iSeries business. Big Blue really understood that and was able to compete better against Unix boxes and X86 boxes, and the OS/400 ecosystem did pretty well for itself and its customers.
We were talking to a third party maintainer the other day about a customer it has in South Carolina, which makes specialty cast iron and grille products. This customer has an AS/400 Model F20 that is still running, and running just fine, that has been in the field for 28 years. I am lucky to get seven years out of a deskside PC workstation or a laptop, and even when I had my own servers running infrastructure at IT Jungle, which I did from 2003 through 2008, they were three years old when I got them from a failed dot-com through Hewlett Packard’s refurbished systems organization and they only lasted another five before the disk drives and then elements of the system boards died. That’s only eight years. I gave them to a friend of mine who ran an IT operation in Minnesota – the shipping cost me a fortune – but I just could not bear the idea of them being tossed in the dump because the processors and network cards and memory still worked.
I was talking to another reseller last week, who had a customer with one of the “Invader” AS/400-270 machines, and this customer just decided to get all modern and go in for the long haul – as if this customer understood anything but the long haul, given that this legendary Invader server – remember these? The ones that kicked an X86 box’s ass down the street on real business workloads? Sent them to bed spanked with no dinner? Remember that, IBM? – has been in this customer’s office for 21 years now. There is no “back office” for these small and medium customers; there is just “the office.” And here is the problem: These modern Power9 machines can get loud. Drive you crazy loud.
I know a thing or two about this, and my experience with my HP ProLiant cluster – one DL380 database and email server, a pair of DL360 Web servers, one DL360 load balancer, plus an MSA 1000 RAID 6 disk array – led me immediately down a path of making my own lean, mean X86 green machines that could drive our site and not make so much damned noise.
Back in 2003, I used to run IT Jungle out of my New York City apartment. I got the iron from HP in exchange for advertising contracts – yes, I asked IBM first, and they wanted me to spend $60,000 on some used System x gear with financing, and the iSeries people would not even contemplate making me a reasonable deal on an iSeries machine, and politically and emotionally I wanted to run the IT Jungle business on an AS/400. Well, an iSeries. Even though it is not the mainstream Web infrastructure platform by a longshot. In any event, I worked with my techie teenager at the time to set up the T1 lines and routers from AT&T that we ran up the fire escape outside of the building, and set up a data closet in my office, which was the former kitchenette in a studio apartment that had 20 amp power lines and an exhaust fan that could dump heat into the elevator shaft. (No other windows or other ventilation, which would be a problem.) We added a switch and a firewall, linked all the machines up, and I never had a prouder moment than when I saw all of those blinky lights on my half rack of gear, which included a pair of uninterruptible power supplies, now that I think on it.

And after two minutes, I had to turn off the MSA 1000 because it was so damned loud. I could not use it in that office environment, and we moved our Web files to the database and email server. And after a few days of hearing the hum of the servers all night long (my bedroom was also in the studio apartment at the time, linked to another one bedroom apartment beside it where the rest of the living space was for my family), I literally started to dream of throwing the servers out the window and to daydream about how to make low-powered, quiet servers I could use to replace some of that iron. Which I did, and I used them for many years before I moved to a hosted environment in 2008, when the servers started failing on Thanksgiving weekend. (The timing was very good on that. Whew!) I was already at the breaking point before then because people used to try to spam bomb my email servers (presumably in an attempt to drive me out of business), which caused the fans on the core DL380 server to rev up one, then two, then three, then four notches in the middle of the night. Which drove me more than a little crazy for a while there.
So I know how this customer feels: she put her shiny new Power9 system into a closet, still can’t live with the noise, and is contemplating sending the machine back unless there is some way to make it quieter.
IBM does not provide a lot of thermal detail on its systems, but here’s the gist of it. Our guess is that the Power9 processor, with 10 cores running in SMT8 mode, is hovering around 300 watts at heavy load. Even if you go down to one core in a Power S922 system, the memory is running at more than 10 watts a stick, and if you go with NVM-Express flash drives, you are generating a lot of heat. Add in fast networking cards, or a cache-backed RAID disk card, or other peripherals, and the whole shebang is a lot hotter than the deskside machines of days gone by.
How much heat? You can only infer it until your system is under load and you start hearing the fans stepping up, driving your blood pressure up along with them and making you deaf.
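For what it is worth, here is the kind of back-of-the-envelope tally you can do yourself, written as a quick Python sketch. The processor and memory figures are our guesses from above, and the per-peripheral wattages are ballpark assumptions on our part, not IBM specifications:

```python
# A rough, hypothetical heat budget for a heavily configured entry Power9 machine.
# Every wattage below is a guess or a typical ballpark figure, not an IBM spec.
components = {
    "Power9 CPU, 10 cores in SMT8 mode, heavy load": 300,
    "16 DDR4 memory sticks at ~10 watts each":       160,
    "4 NVM-Express flash drives at ~12 watts each":   48,
    "2 fast network adapters at ~20 watts each":      40,
    "Cache-backed SAS RAID adapter":                   25,
    "Backplane, service processor, and other bits":    75,
}

total_watts = sum(components.values())
print(f"Estimated heat load: roughly {total_watts} watts")  # 648 watts with these guesses
```

Every one of those watts ends up as heat that the chassis fans have to move, which is why the fan noise tracks the configuration as much as the workload.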
The Power S914 has one 900 watt power supply and one 1,400 watt power supply, for a combined 2,300 watts, and it is rated at a maximum of 1,600 watts of power draw and a 5,461 BTU/hour maximum thermal output. The Power S924, which is a fatter box with more expansion room, has two 1,400 watt power supplies, a 2,750 watt maximum power draw, and a 9,386 BTU/hour maximum thermal output.
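If you want to sanity check IBM’s BTU figures against those wattages, the conversion is simple: one watt of sustained power draw throws off about 3.412 BTU per hour of heat. A quick sketch using the maximum power draws quoted above:

```python
# Sanity check of the thermal ratings: 1 watt sustained is about 3.412 BTU per hour.
WATTS_TO_BTU_PER_HOUR = 3.412

for model, max_watts in [("Power S914", 1600), ("Power S924", 2750)]:
    btu_per_hour = max_watts * WATTS_TO_BTU_PER_HOUR
    print(f"{model}: {max_watts} watts is about {btu_per_hour:,.0f} BTU/hour")

# Power S914: 1600 watts is about 5,459 BTU/hour (IBM quotes 5,461)
# Power S924: 2750 watts is about 9,383 BTU/hour (IBM quotes 9,386)
```

So those thermal ratings are just the maximum power draw turned into heat, all of which has to be blown out of a deskside box sitting a few feet from somebody’s ear.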
All that heat has to be pushed and pulled out of the system, and it also has to go somewhere. Roasting yourself is a secondary effect that no one talks about, but I wore shorts most of the time in my office. . . .
According to the engineering reports I have seen from IBM, a whole slew of devices installed will make the fans in an entry Power Systems machine rev up to 6,000 RPM. (That’s loud, I can assure you.) And it is perfectly normal for a machine that is heavily configured and running heavy workloads. As it turns out, there is a bug in the PCI-Express SAS RAID adapter with two 3 Gb/sec ports that makes the fans rev to 6,000 RPM, and there is a firmware fix for this. What IBM recommends in other situations is to unplug adapters sequentially to see which one is causing the fans to rev, and then change placement of the adapter cards if possible to give them more breathing room.
But in some cases, this might not be possible.
So what do you do? I think that IBM should offer a different, quiet deskside server configuration. Here is how you might do it. First, you have to go with the fatter S924 deskside chassis. Now, add direct water cooling to the CPU and memory that gets that heat out of the chassis immediately and dumps it without the need for chassis cooling fans. That will lower the thermal loads on those fans in the system, and they won’t have to rev so high. As it turns out, Lenovo has a pretty neat set of cooling technologies, known as Project Neptune, which I covered in detail here over at The Next Platform, and IBM could partner with Lenovo to get it. There are other suppliers of such technology, too.
Additionally, small muffin fans suck. IT vendors use them because they are cheap and they allow for flatter servers. And that is stupid, especially when they literally bank them up two or three deep to increase airflow. A fan with a larger area moves as much air as a bunch of smaller fans, but it does so with less energy and less noise. That’s just physics and thermodynamics, man. Build a chassis that has lots of big fans that don’t hum so damned loud!
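To put a hypothetical number on that claim, suppose you need to push the same amount of air through a chassis with either a bank of three 40 millimeter muffin fans or one 120 millimeter fan. The fan sizes and the airflow target below are made-up illustrative values, not actual Power Systems fan specs, but the geometry tells the story:

```python
import math

# Illustrative only: compare the air velocity needed from a bank of three 40 mm
# muffin fans versus one 120 mm fan to move the same volume of air. The fan
# sizes and the airflow target are assumptions, not Power Systems fan specs.
def swept_area_m2(diameter_mm: float) -> float:
    radius_m = (diameter_mm / 1000.0) / 2.0
    return math.pi * radius_m ** 2

target_flow_m3_per_sec = 0.05                 # arbitrary airflow target

small_bank_area = 3 * swept_area_m2(40)       # three 40 mm muffin fans
big_fan_area = swept_area_m2(120)             # one 120 mm fan

velocity_small = target_flow_m3_per_sec / small_bank_area
velocity_big = target_flow_m3_per_sec / big_fan_area

print(f"Air velocity through the 40 mm fan bank: {velocity_small:.1f} m/s")
print(f"Air velocity through the 120 mm fan:     {velocity_big:.1f} m/s")
print(f"The muffin fans have to push the air {velocity_small / velocity_big:.0f}X faster")
```

The small fans have to drive the air about three times faster through their smaller total swept area, and because fan noise climbs much faster than linearly with shaft and air speed – the usual fan law rule of thumb is something like the fifth power of fan speed – that factor of three lands on your ears as a whole lot more than a factor of three.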
Here is what you can’t do, even if it seems obvious: You can’t take the covers off. The airflow inside of a dense, high performance computer – generally moving from the front of the machine to the back of the machine – is precisely designed, even if it is not adequately quiet, and you will screw that airflow up massively if you take the covers off. What you can do is get more cool air going into the servers. I did this, even though it sounds crazy. My ProLiant servers had some pretty rough summers in that New York City apartment, where the temps were sometimes as high as 100 degrees with crazy high humidity. I got a portable air conditioner, set it in front of the servers, and did a kind of cold aisle containment in the closet and hot aisle containment behind the racks to dump the heat into the elevator shaft. The servers lived better than I did, and this was not an inexpensive proposition, either. But on the worst days, when I still had to keep my business going, I did what I had to do to keep those fans from going nuts.
Simply saying that revving fans are normal and that customers should deal with it – which is what the IBM engineering document basically says – is not good business. This is an engineering problem, and it can be solved, and IBM needs to help customers solve it. And if not, then get out of the way, IBM, and let us do it.
Re: Don’t Be A Blowhard. What a great, great writeup, TPM. I can add some similar experiences. I have had an iSeries (haven’t got to an IBM i model yet) running in my apartment dinette area for over 12 years now (270, 810, 520, 720). The 520 with twinax and real 5250 terminals in addition to Ethernet I can’t bear to part with, but I don’t have room and power to run it.
The 720 came with the HMC control PC which I felt I should run alongside the 720 at first. The HMC sounds like a small jet taking off. It’s just a little PC board in an enclosure running Linux. I have a super Linux 28TB RAID5 PC across from it with water cooling and I have to look at an LED to see if it’s on.
So I quit running the HMC after a few months, and the only sound is the air cleaner fan I run next to the 720. If IBM did anything like they did with that HMC, then only a soundproofed room would be enough to hold it. Fortunately the 720 is still running strong and silent.
Thanks, Ralph
I got this comment from James Sparkman by email as well, which may be a help to everyone given the RPQ he mentions:
“I’m replying to your ‘Don’t Be A Blowhard’ article on Power9 fan noise. Yes, they can be quite loud, especially IPL’ing from a standing start. It does make me nervous to install a deskside unit in an office environment. However, from my experience with the deskside models, they aren’t as loud as you might think. One thing I definitely do is to use RPQ 8A2495, which removes two 900w power supplies. It’s still loud on initial startup but quiets to a moderate humming after it settles down. The fans will ramp up if they feel the heat building up, and it’s a good sign that your AC isn’t working optimally… LOL. Putting a system in a closet without AC and ventilation is a sure way to exercise those fans and to shorten the life of the system. Rack mounted units should be in some sort of server room, properly cooled and ventilated, ensuring that the fan noise stays in the room and is moderated.”
Been down the same path. I have a P02 in the garage, haven’t fired it up for about 15 years, but it used to sit beside me. (And yes, you left the “Portables” out of the “small” list in your intro.) My 250 gave me 16+ years of reliable service, even after being dropped a couple of times when relocating, and once down a flight of stairs. Only ever had to replace a couple of disks, but on its last IPL there were a lot of SRC codes reported. (That was a month ago.) It also sat beside me on the floor. Now I have the 720, which sounds like a 747 when it starts up, until the fans drop back to operating speed, which is still too loud to sit beside me. In its defense, it does compile an ILE PGM in 2 seconds that took the P02 20 minutes, but less than 10 seconds on the 250. While I haven’t replaced any disks in the 720, I have replaced both power supplies, the NIC, the VRM, and the backplane (aka motherboard). OUCH. So yes, IBM, how about bringing back a real grass roots box?
We just installed a small Power9 to replace our Power7 plus. I could not believe how loud this “jet engine” is. Under zero load (we are not using it). Our locked rack sat in our upstairs office. People in cubicles couldn’t hear on the phone!
We had to pay a contractor to build walls and a door around our entire rack (i.e., a server room). And then we had to install a portable AC unit to run in the room (temps were exceeding 80 degrees with the door open, in the winter).
Needless to say this was an expense that was not planned for by our small company.
All I can say is I feel your pain, and IBM needs to help here before there is a mad dash to upgrade and the problem affects 10X more customers.
“All I can say is I feel your pain, and IBM needs to help here before there is a mad dash to upgrade and the problem affects 10X more customers.”
I cannot emphasize enough how critical your statement is. The reality is we are accustomed to nothing being done about critical statements, but we will not survive if IBM does nothing about this. I have an Enterprise model 720 with 6 CPUs, and it is quiet unless the afternoon sun heats up and my air conditioning isn’t cranked up enough and the temperature gets to 83 or 84. Then another fan kicks in, and it still isn’t noisy, but I can hear it.
I have an IBM HMC for the 720, literally a long, flat PC board and some communications hardware. The fan for that thing, from the moment it is turned on, sounds like the Power9 experience in the comment above. An entirely unacceptable experience compared to their previous Power7+ box, which is like mine.
IBM could have carried forward the 720 fans or the HMC fans to the Power9, and they chose the HMC. Very few will tolerate that. Not when they can buy large Linux servers, like the one I have next to the 720, with water cooling and no noise.
I am shocked because I have had such a good experience with the 720 these past few years and such a lousy experience with the HMC, and now find out that the next IBM i would be an HMC experience. Entirely unacceptable. I’m not talking about aesthetics. I’m talking about earplugs. IBM did something very wrong in losing the 720 experience for small businesses.