IBM recently has begun rallying around two major server systems for Linux deployment on the x86 architecture. On the scale-up front, IBM offers the eServer x440 line, which boasts scalability up to 16 CPUs in a single system. More interesting to the average user with a cluster of smaller servers is IBM's scale-out offering, the IBM eServer BladeCenter system, a 7U rackmount chassis supporting 14 blades.
The chassis itself consists of only a passive midplane, a CD-ROM and floppy drive and two 10" blowers. Two redundant 230V power supply modules ship with the unit, and two more are required if the system is running more than five blades. A single management module with keyboard, video and mouse (KVM) and Ethernet connections is included, and support for a redundant management module recently was added.
Redundancy is a key design goal of the BladeCenter, and little has been overlooked. Four switch-module bays are in the rear of the chassis; the second and fourth bays are redundant bays for the first and third bays, respectively. For example, if you install an Ethernet switch module in the first bay, the second bay can hold only another Ethernet switch module, not a SAN switch module. Although this arrangement allows full redundancy, it means that only two types of switch modules ever can be installed at a time.
Currently, IBM offers gigabit Ethernet and Fibre Channel SAN switch modules. Internally, these switch modules connect to each individual blade; externally, the gigabit switch offers four copper Ethernet ports and the SAN switch offers four Fibre Channel ports. With redundant modules installed, each blade has access to two Ethernet ports and two SAN ports. The Ethernet switch is a full-function layer-3 switch with a powerful Web-based interface. The SAN switch is a QLogic unit, configured through either a telnet session or the Java-based management software available for both Linux and Microsoft Windows.
The front of the BladeCenter chassis contains a single CD-ROM drive, floppy drive and USB port that are connected by an internal USB hub/switch to each blade. This switch is controlled by a button on each blade, and only one blade can access the CD/floppy/USB port at a time. The keyboard, video and mouse are connected through a separate internal switch that is controlled by a second button on each blade. The KVM and USB peripherals also can be switched from the BladeCenter's Web-based management interface, which allows the KVM output to be controlled remotely over the Web. Unfortunately, although redundant KVM/management modules may be installed, only one blade can be assigned to the console at a time, even through the Web interface. This probably is the most disappointing limitation of the BladeCenter, but it may be less of an issue for those using the blades in a true cluster environment.
At the time of this writing, only the HS20 server blade has been released for the BladeCenter. One might assume these blades simply would be a modified ThinkPad on a card, but in reality the blade is a miniaturized, enterprise-class server. Although it ships with a single CPU and 512MB of RAM, the HS20 has a 533MHz front-side bus and can support two Pentium 4 Xeon CPUs (up to 2.8GHz) and four PC2100 DDR DIMMs (up to a total of 8GB). Thanks to the hyperthreading feature of the Xeon CPUs, modern Linux kernels see two logical CPUs for each physical CPU installed.
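A quick way to confirm that hyperthreading is active is to compare the logical and physical CPU counts reported in /proc/cpuinfo. The short Python sketch below is my own illustration, not part of IBM's tooling; it assumes a hyperthreading-aware kernel that reports the "physical id" field and falls back to the plain logical count if that field is absent:

    #!/usr/bin/env python
    # Compare logical and physical CPU counts from /proc/cpuinfo.
    # On a hyperthreading-aware kernel, each physical Xeon shows up
    # as two logical processors; older kernels may omit the
    # "physical id" field, in which case we fall back to the
    # logical count.

    logical = 0
    physical_ids = set()

    for line in open("/proc/cpuinfo"):
        if line.startswith("processor"):
            logical += 1
        elif line.startswith("physical id"):
            physical_ids.add(line.split(":", 1)[1].strip())

    print("logical CPUs: ", logical)
    print("physical CPUs:", len(physical_ids) or logical)

On an HS20 with both CPUs installed and hyperthreading enabled, the logical count should be twice the physical count.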
Each blade also has an 8MB Rage video controller and two Broadcom gigabit Ethernet controllers. Each Ethernet controller is hard-wired to a switch-module bay, and the driver can be configured to allow these interfaces to be used individually or in a failover fashion. Two notebook-sized IDE hard drives can be mounted in the blade, but the optional SAN controller card overlaps with the second IDE drive, so you can use only one IDE drive if you also are connecting to a SAN. The SAN card is a custom QLogic card, which provides redundant pathways to the SAN switch modules. IBM also offers a companion blade that allows two SCSI drives to be attached to a blade, but it consumes another slot in the chassis.
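One common way to get this kind of failover on Linux is the stock bonding driver (whether or not that is what IBM's driver CD uses under the hood), which exposes its state under /proc/net/bonding. The Python sketch below simply reports that state; the active-backup mode and the bond0 device name are assumptions made for illustration, not BladeCenter requirements:

    #!/usr/bin/env python
    # Report the state of a bonded NIC pair by parsing the bonding
    # driver's /proc interface. Assumes the bonding module is loaded
    # in active-backup mode and the device is named bond0 (both are
    # assumptions for illustration).

    BOND = "/proc/net/bonding/bond0"

    slave = None
    for line in open(BOND):
        line = line.strip()
        if line.startswith("Currently Active Slave:"):
            print("active slave:", line.split(":", 1)[1].strip())
        elif line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and slave:
            print(slave, "link:", line.split(":", 1)[1].strip())

Pulling the cable on the active interface and re-running the script is an easy way to watch the failover happen.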
The blades are hot-swappable and generally easy to upgrade. If a blade suffers a catastrophic failure, it can be removed, and an onboard diagnostic system flashes an LED next to the faulty component. I did find that installing the heatsink on the second CPU required a disturbing amount of force due to the spring-loaded screws used to secure it.
An interesting design characteristic of the BladeCenter is that IBM has promised eventually to release blades based on its pSeries and iSeries server lines, which would let you mix and match Intel-based and POWER-based blades in the same chassis. At the time of this writing, no such products have been announced formally, though.
As mentioned earlier, the BladeCenter has a powerful Web-based interface. Blades can be powered on and off, the CD/floppy or console port can be switched and firmware can be updated over the Web. At this time, user names and passwords must be defined within the management console, which is bothersome because those user names then must be defined again in each Ethernet and SAN switch module. In addition, security controls are coarse. Users cannot be granted access to only certain blades or limited to performing only certain tasks. Although users can be limited to having only view access, this limit also prevents those users from utilizing the remote console.
In addition to the Web interface, the BladeCenter includes licenses for the powerful IBM Director 4.1 management software, which is so featureful it deserves its own review. It is worth mentioning that the client, agent and server components of Director 4.1 all run under Linux and allow you to perform diagnostics, start and stop processes and receive automated alerts about all IBM servers from a single console.
Support for Linux on the BladeCenter is far better than I have found from any other server vendor. Linux driver CDs are supplied in the box, and all general management tools are written in Java with complete Linux support. In fact, I have yet to find any management, monitoring or configuration tools related to the BladeCenter that are Windows-only. This is a refreshing change from past experiences with even the most dedicated Linux hardware vendors. Although official support currently is limited to Red Hat Linux 7.3, Red Hat Advanced Server 2.1 and SuSE Linux 8, I had no trouble running Red Hat 8.0 and 9.
If you need any custom peripherals, cards or interfaces on your server, the BladeCenter is not an option, as there are no PCI slots or per-blade I/O ports. The addition of a single USB port dedicated to each blade would have made the BladeCenter a more versatile system.
Another important consideration is that if you do not have a dedicated server room, the BladeCenter can be uncomfortably loud. In fact, it is so loud that IBM sells an Acoustic Attenuation Unit (AAU), which essentially is a large foam muffler. Even with the AAU in place, it is possible to hear the BladeCenter running through an office wall.
IBM claims the BladeCenter is priced to be a lower-cost alternative to standalone servers when buying seven or more blades. It is uncommon to find standalone servers with the level of redundancy the BladeCenter includes, but I believe the cost claim holds true even compared to servers with fewer features. Overall, the BladeCenter is a remarkable bit of engineering. It is probably best suited for those building high-density clusters, but when coupled with a SAN, it makes a great option for general-purpose servers as well.