Read all about it—SARS, clusters and the blackout of August 2003.
Visit the Computer History Museum in Mountain View, California, and you'll get close enough to smell the machines that were the fastest computers of their time. Control Data Corporation's CDC 6600 and 7600, designed by Seymour Cray, are two historic systems at the museum. If you go at the right time, you might even run into Linux Journal's Michael Baxter, who can explain almost everything about Cray's designs except maybe the fake wood grain on the 7600.
CDC products served their time in the US national laboratories and other sites that buy the fastest machines, regardless of little details like backward compatibility. When CDC tried to enter the business computing market, it sank without a splash. Cray himself went on to found Cray Research, but for as long as there has been a computer hardware business, success in high-performance computing (HPC) has spelled failure in business computing.
As we go to press, the latest hot system on order for a national lab is the Lightning cluster from Linux NetworX, which will be put to work at Los Alamos on tasks vaguely described as having to do with “safety and reliability of the nation's nuclear weapons stockpile.”
Will Linux clusters stay in the HPC niche? Big vendors are putting their money on “no”. Oracle is dropping UNIX boxes for cheap racks of generic machines. Penguin Computing acquired Scyld, the cluster company founded by Beowulf originator Donald Becker. Dell and IBM will sell you turnkey clusters with service contracts—maybe not with one click from the Web site, but close.
Linux supercomputers already wallow in the bargain basement of price-performance, using technologies that are either already commodities or headed for the commodity market, such as x86 and AMD64 processors, Gigabit Ethernet and InfiniBand.
Martin Krzywinski and Yaron Butterfield tell the inspiring story of how a lab running on Linux infrastructure got the first sequence of the SARS virus under time pressure. Catch Linux bioinformatics fever on page 44.
Back in the day, cluster managers had to write their own network drivers and walk to the data center in the snow, but Steve Jones got help from his cluster vendor in bringing up a TOP500-class cluster at Stanford University (see page 72).
The more you learn about clusters, the more you might be tempted to order a whole bunch of boxes and integrate them yourself. That could be either the most money you ever saved or the biggest mistake you ever made. Make your homebrew cluster a successful one by putting some sample nodes through John Goebel's cluster hardware torture tests on page 62.
Finally, Reuven M. Lerner was a little late with his monthly column, thanks to the huge August 2003 blackout that hit the northeastern US and Ontario, Canada. Find out how to prepare for the next one in his “Server Migration and Disasters” on page 14.