Links to Useful Cluster Resources Elsewhere


Note that this list of links does not pretend to be either exhaustive or current. Still, if you find a dead link or have a site that just has to be added, feel free to send it in via the site maintenance link at the bottom of the page.

Beowulf/Cluster Sites

The Beowulf Home Page
This is THE primary point of reference for learning about beowulfery, and has links to all "registered" sites. The website is sponsored by Scyld (a maker of core beowulf software, cofounded by Don Becker).
MIRROR of the original Beowulf website, last updated in mid-2002.
This image is frozen in time, as it were, from when the site was still at NASA-Goddard where it began. For a brief period of time this mirror on brahma was the only operational beowulf website, when NASA underwent something of an internal convulsion and temporarily pulled the plug on the beowulf site. An interesting historical reference.
Kragen Sitaker's Beowulf FAQ
Kragen worked hard during the early days of the list to compile a FAQ and thereby reduce, by some measure, newbie FAQs on the list. It was fairly successful. My book on beowulfery actually had a similar origin -- I found myself explaining both introductory and intermediate points a lot, and figured writing a book might actually save time. Hah.
The Beowulf HOWTO by Jacek Radajewski and Douglas Eadline.
Excellent resource, probably needs updating at this point.
The Beowulf Underground
Described as the freshmeat site for beowulfers, this site is run by Clemson University's Parallel Architecture Research Laboratory (founded, I believe, by Walter B. Ligon III), which is one of the oldest and best University research groups devoted entirely to cluster computing and the beowulf model.
Parallel Architecture Research Laboratory
This obviously deserves a link in its own right and not just because they host the beowulf underground site. This group is perhaps best known for PVFS (Parallel Virtual File System) but a mere glance at the PARL page should convince you that they have much more to offer for the serious cluster student or builder.
Duke Cluster Lab
Duke also has a longstanding operation conducting cluster research in the computer science department (brahma is a "working cluster" in the physics department and only does hobby-level work on clustering per se), headed by Jeff Chase. This is the home of the trapeze project. It is also the home of the new Cluster on Demand project being set up by Justin Moore.
The Ames Scalable Computing Laboratory
At Iowa State University, the Ames SCL is home to a number of other useful projects and tools as well as netpipes (see below under networking links). Another venerable and important University cluster computing group.
Doug Eadline's Cluster Cookbook
This "cookbook" for clusters is provided by Paralogic, a longtime turnkey beowulf company. Even though dated, still an excellent resource. Doug was going to help me with the online beowulf book here, but he is busy at Paralogic. I just pretend to be busy here at Duke:-).
The Pondermatic IV
Another cluster cookbook site, this presents a very useful step by step introduction to ``homebrew beowulfery'' following what might be called the ``standard recipe'', produced by Rick Bono. Takes you from a pile of parts through running the povray demo in parallel.
OSCAR (Open Source Clustering Application Resources)
"...a fully integrated and easy to install software bundle designed for high performance cluster computing. Everything needed to install, build, maintain, and use a modest sized Linux cluster is included in the suite, making it unnecessary to download or even install any individual software packages on your cluster."
ROCKS (A High Performance Linux Cluster Solution that Rocks, I guess)
Rocks is "...based on Redhat 7.3 (x86), and RedHat AW 2.1 (ia64). Both architectures include the identical cluster middleware and differ only at the OS level. Sun used to call this bug-for-bug compatible, we prefer feature-for-feature compatible. The Itanium version has once again been built from the publicly available RedHat SRPMs for Advanced Workstation." As of 9/19/03, anyway.
Linux High Performance Computing,
An online magazine for cluster/HPC-volken run by Kenneth Farmer. Very nice forum, Kenneth does a good job of pulling and posting useful notes and articles.
Linux Clustering Information Center
A "central repository of links and information regarding linux clustering, in all its forms". What more can I say? Perhaps that this is fairly close to the original aims of Extreme Linux -- the union of HA and HPC linux clustering (and more). One of the few clustering sites that does have HA linux clustering links AND HPC clustering links. HPC still dominates, but well, it dominates!
Linux Magazine's Extreme Linux Feature
While we're linking online magazines, it seems worthwhile to link in Linux Magazine and Forrest Hoffman's excellent series. I've known Forrest for some years now (both on the beowulf list and for the last few years in human person) and he writes a useful and knowledgeable cluster series for LM using the old "Extreme Linux" heading. Alas, the Extreme Linux website and organization is, as far as I can tell, no more.
The ALINKA Linux Clustering Letter
A free emailed weekly summary of the beowulf mailing list (which you can join on the primary beowulf website, linked above). Frankly, the beowulf list has a remarkably high signal to noise ratio (speaking as a major source of the noise:-) but it IS a high volume list. The ALINKA "executive summary" of list activity keeps many individuals in touch with what's going on without eating them alive.
FAI (Fully Automated Installation).
FAI is an automated system to install a Debian GNU/Linux operating system on a Linux Cluster. The manual of FAI includes a chapter on how to build a Beowulf cluster using FAI.
Not to be outdone, Mandrakesoft has a high performance computing initiative designed to install a full suite of High Performance Computing packages onto a cluster via an automated point-and-click GUI. The suite is thoughtfully done. In addition to lots of more or less "standard" clustering tools (sshd, maui, ganglia, and more) it has some scripts that have long been discussed on the list -- a "discovery" script that gathers MAC addresses from a cluster on a preinstall boot (turn on each PXE-configured node, wait for the PXE/DHCP server to gather the MAC addresses, run the script to glean the addresses out of the DHCP logs and -- presumably -- set up everything else required to boot the nodes into installation or operation in the future), and more.
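The ``discovery'' step described above is easy enough to sketch. Here is a minimal, hypothetical version, assuming an ISC-style dhcpd logging its DHCPDISCOVER lines to syslog; the log format and file names below are my assumptions, not anything Mandrakesoft actually ships:

```shell
# Hypothetical MAC "discovery" pass: boot the PXE nodes, then glean
# the unique hardware addresses out of the DHCP server's log.  A
# canned sample log stands in for /var/log/messages here.
cat > /tmp/dhcpd.sample <<'EOF'
Sep 19 10:01:02 server dhcpd: DHCPDISCOVER from 00:50:56:ab:cd:01 via eth1
Sep 19 10:01:03 server dhcpd: DHCPDISCOVER from 00:50:56:ab:cd:02 via eth1
Sep 19 10:01:04 server dhcpd: DHCPDISCOVER from 00:50:56:ab:cd:01 via eth1
EOF
# One unique MAC per line, ready to paste into dhcpd.conf host
# stanzas or an installer's node database:
grep -oE '([0-9a-f]{2}:){5}[0-9a-f]{2}' /tmp/dhcpd.sample | sort -u
```

The real script presumably does more (assigning IPs and hostnames as it goes), but the core of it is just this sort of log-scraping.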

It does have an attractive suite of tools to choose from, including some simple cluster shell variants (gexec, pcp, pconsole). Some of these don't use ssh, and newbies should be warned to very carefully assess their security requirements before using them EVEN in a cluster that is fully isolated behind a firewall.

The only tools I see "missing" are SGE (which is a pain to build and a REAL pain to rpm-ify, so encapsulating it for RPM builds would be a real service) and perhaps Condor, to provide an alternative policy tool. Choice is good.

Scyld is a company founded by some of the creators of the original beowulf, notably Gordon H. Bell prize-winner Don Becker (also known for writing virtually all of the original ethernet device drivers). The product they sell is a heavily specialized and customized linux distribution that is a beowulf in a box. It has its own specialized methodologies for head node and worker node installation that are extremely efficient and scalable by design. It creates a true Beowulf with a single unified process id space and single node control over the running parallelized task, as opposed to the more generic "cluster" produced by the approaches above (where the cluster nodes are essentially workstations, specialized in simple ways to a greater or lesser degree).
Clustermatic is based on Eric Hendriks' bproc program (also the basis of the Scyld approach) and creates a simple, scalable, "true Beowulf" type cluster with a single unified process id space so that jobs are run from a single front end "head node". The actual booting of the cluster/node image is accomplished with a two-step boot process, the first of which can be initiated by any mix of LinuxBIOS, DHCP/PXE, a boot floppy, or a boot CD. It can be configured to run diskless or from a local image. Eric Hendriks is also an ex-member of the NASA Goddard beowulf team, and the clustermatic solution is in some sense an open source non-commercial beowulf in a box where Scyld is open source but commercial. Clustermatic is highly scalable, as is demonstrated by its use on Pink, a 2048 processor linux cluster running at Los Alamos National Labs.
(CD-based) Diskless Linux
Diskless Linux (using a CD) is another highly scalable approach to node provisioning. It has the advantage of saving money on node hardware: you don't need a disk at all, which typically costs on the order of $100, costs (at an assumed 15W average power consumption rate) another $15 a year to run, and is one of the node components most likely to fail and thereby cost hours of administrator time identifying and replacing the broken component. It has disadvantages in terms of speed and core memory consumed, it can be considerably more time intensive to upgrade and keep current, and of course, you need a CD drive in each system and at least one burner.
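The running-cost figure quoted above is simple arithmetic; here is the back of the envelope, with the electric rate (about $0.11/kWh) being my own assumption rather than anything from the site:

```shell
# 15 W average draw, running 24x7 for a year, at an assumed $0.11/kWh:
awk 'BEGIN {
  watts = 15; hours = 24 * 365; rate = 0.11
  kwh = watts * hours / 1000   # 131.4 kWh per year, per disk
  printf "%.2f kWh/year -> $%.2f/year\n", kwh, kwh * rate
}'
# prints: 131.40 kWh/year -> $14.45/year
```

Multiply by a few hundred nodes and a few years of cluster lifetime and the disks stop looking cheap.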
Fully Diskless Linux
This Linux NetMag article describes a different way of building completely diskless systems that is very similar to the general approach outlined above, except that instead of running just the install program (anaconda) after booting, the system boots up all of the way and mounts all of its key filesystems (/, /usr, /home) from a remote server. As is the case for general node installation, it can easily be initiated either from a suitable boot floppy, or on a system with a PXE-equipped ethernet device, with no disks of any sort at all. The hardware and maintenance cost savings in the latter case are significant, although the additional load on the network may or may not be acceptable for all cluster designs or parallel applications.
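For concreteness, a fully diskless node's /etc/fstab might look something like the following sketch; the server name and export paths are invented for illustration, not taken from the article:

```
# hypothetical fstab for a fully diskless node
bootserver:/export/root/node01  /      nfs  ro,nolock  0 0
bootserver:/export/usr          /usr   nfs  ro,nolock  0 0
bootserver:/export/home         /home  nfs  rw,nolock  0 0
```

Note that / and /usr are mounted read-only and shared across nodes, which is most of what makes the approach scale.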
distributed.net
These are the folks that run the famous RC5 challenge and more. This is the ultimate extension of gridware -- the anonymous client running all over the Internet. The only real catch that I can see is that surely there must be better (more useful) projects out there than busting keys. It was fun for a while, but time to move on. In fact, I have just the problem for them...:-)
SETI@home
Well, I asked for it. Here is another "infinite grid" problem of famous origin. Contribute the unused cycles on your system(s) to the Search for Extraterrestrial Intelligence. If they find it, it will be (have been) worth it, of course. If they don't find it, the thing about null results in science means that they can NEVER conclude it ain't there and quit. I expect my kids might well be donating cycles to SETI@home when I'm in a "home" myself... Still, I'm only jealous. I could have invented OnSpin3d@home years ago, and have half the world running my Monte Carlo code.

Parallel Programming Support and Beowulf Tools

Parallel Processing using Linux.
Note also the Parallel Processing HOWTO maintained on this site. Needs an update, but still quite useful.
Designing Parallel Programs
This is an online book by Ian Foster at Argonne National Labs. There are a lot of very nice parallel programming resources linked to this parent site. An essential resource for the would-be parallel programmer, worth buying in hard copy.
PVM: Parallel Virtual Machine.
This is one of the two major parallel programming libraries, originally developed for network based clusters. In a sense, this project enabled the entire COTS cluster revolution that includes various pre-beowulf clusters (in 1993-1995 I turned roughly 150 Sun workstations all over Duke's campus into a massive compute cluster and did GFLOP years of work both with PVM and without it), the beowulf, the GRID, and numerous clusters in between.
MPICH - A Portable MPI Implementation.
One of the original open source MPI's, still one of the best (or at least most popular and most commonly discussed on the beowulf list -- it is actually hard for me to judge for myself as I'm a PVM person and only rarely have used MPI at all).
LAM/MPI
Another open source MPI, generally distributed with Red Hat these days, and the default MPI in use in the Duke Physics Department. This site contains a list of most of the other MPI's, both open source and commercial.
Sun Grid Engine (SGE)
The Sun Grid Engine is a tool that makes a network of compute resources into a "grid" where users can submit jobs and have them queued and redistributed to run on resources anywhere on the grid as they become available. It contains at least provisional abilities to manage policy on grid resources that are also interactive workstations -- to stop grid tasks "instantly" if the interactive user types at the keyboard or moves the mouse. It also contains provisional abilities to handle code migration with checkpoints. It operates in userspace (requires no kernel modifications or modules) via daemons.
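A taste of what using it looks like, as a sketch: the embedded ``#$'' directive convention is standard SGE, but the script contents and file names here are made up for illustration.

```shell
# Write a trivial SGE job script.  Lines beginning with "#$" are
# embedded qsub options: run under /bin/sh, start in the submission
# directory, and merge stdout/stderr into one output file.
cat > hello.sh <<'EOF'
#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -o hello.out -j y
echo "hello from $(hostname)"
EOF
# On a host with SGE actually installed, one would then submit it:
#   qsub hello.sh
# and watch it with qstat until a free node picks it up and runs it.
```

The grid engine handles the queueing, scheduling, and node selection; the user just writes the script and submits.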

Parenthetical Aside: With all those advantages, it is nearly ideal for clusters devoted to embarrassingly parallel tasks. The only catch is that it is SO powerful, and intended to build and run on SO many architectures, that one gets the definite feeling that it would build more simply and work better if most of those architectures were trashed and energy focused on linux and gcc-based systems. aimk, for example, is something I once played with and even hacked for my own extensive use back when I was managing a highly heterogeneous Unix network. Ah, what a child I was.

Eventually I sobered up and realized that heterogeneity is the root of a lot of evil where systems management and application development are concerned, and that I really wanted NOT to EVER AGAIN have to screw around with where this particular flavor of Unix keeps that particular include file and how to hack and patch the code if the file (and its associated library) were different and possibly incompatible. Especially in a complex application with many contributing developers and nonlinear constraints (like support by Sun, making it impolitic for a Sun build ever to break) where somebody working on architecture X can break the hell out of architecture Y and not have the fact revealed until extensive testing has occurred on all the A-Z architectures "supported", requiring yet another #ifdef instrumentation.

Still, SGE (like PVM) will undoubtedly prove to be worth the hassle in the long run. In the meantime, one can only ask WHY the developers put four or five steps into the README.BUILD instead of providing a single script entitled "" or (better yet) a toplevel makefile with autodocumenting targets? So fine, maybe aimk is great, but normal humans have never used it so hide it behind regular old make. Or why they use the incredibly arcane aimk at all instead of Gnu's autoconf (intended to satisfy the same purpose, but a lot more modern and in common enough use to actually be functional)? Or (being the linux bigot that I am:-) they don't just scrap even this and focus on linux/gcc builds only with a Makefile a child could understand?:-p

Our site REQUIRES rpm's (ideally built from src rpm's so they can easily be REbuilt) for scalable management purposes. Building SGE into portable RPM form looks like it will be a truly joyful process. Especially given that following the strict instructions in README.BUILD on a clean checkout from the CVS tree has failed every time I've tried it, most recently on a perfectly updated RH 7.3 system. For that reason I've been slow to get enthused, although this summer I may have the time required to get over the initial build blues.

MOSIX Scalable Cluster Computing for Linux.
Mosix really makes your network into a multiprocessor computer. One complaint -- it requires modifying your kernel -- no userspace version of the program exists. This makes it relatively complicated to integrate with the kind of aggressive kernel update program necessary to remain secure, which in turn inhibits its application on open LANs. A second complaint relevant to SOME applications is that transparent doesn't necessarily mean lightweight. For both of these reasons, I think SGE will prove a more desirable solution.
Condor
A load balancing/task migration package that runs under linux. Condor is one possible way to build a Grid out of systems that do multiple duty as e.g. desktop computers by day while fully respecting the prerogatives of the "owners" of those systems.
The Netlib Repository
This has a lot of useful stuff including ATLAS, LAPACK, BLAS and PVM sources. It is well worth browsing. Only flaw as far as I can see is a preponderance of fortran sources compared to C. This is a site for serious numerical analysts and professionals, maintained by some of the best in the business (like Jack Dongarra and Eric Grosse and other luminaries).
BLAS Frequently Asked Questions.
Yet another specific example of useful stuff on netlib. Don't miss ATLAS, as well, if you happen to need or use LAPACK and BLAS.
In the words of its creator, "The goal of this website is to provide with the most up-to-date links to the chemical software running on linux. Because the field is still under an intensive development, the website will also be continuously under construction and you may even find some not-up-to-date URLs there for this same reason. In that case contact me, please. I hope it will be helpful to you. You are very welcome to send me (Nikodem Kuznik) your comments, new URLs and so on."

Network Support

Network Drivers and Diagnostics from Scyld Computing Corporation.
This is Don Becker's company, listed one way or another several times in this listing of useful links, which should tell you something. Since Don Becker wrote most of the original network device drivers used in the early linux kernels and continues this work today, Scyld hosts a number of linux device drivers and diagnostic tools.
Dan Kegel's Fast Ethernet Page.
A perennially useful site with lots of hardware links.
Charles Spurgeon's Ethernet Page.
Charles Spurgeon has converted a lot of the information on this site (which I've used for years now to learn about ethernet) into an O'Reilly book, but the site is still wildly useful to people seeking a quick overview of Ethernet but not blessed with access to IEEE documents or eager to buy an entire book to answer a single question or two. (As an editorial aside, we should all complain to our congress persons about this -- IEEE documents should all be OPEN STANDARDS published and made available for free to all citizens. Basically, we paid for them already.)
Myrinet home page.
Myrinet is one of the premier high performance beowulf cluster networks. If you wish to engineer a cluster that can run fine grained parallel code with a need for high bandwidth and low latency, you can get it with Myrinet -- for a price. Native drivers for most popular parallel libraries bypass the expensive TCP/IP stack used by most ethernet networking and are necessary to get the best performance.
Dolphin Interconnect Solutions, home of the Scalable Coherent Interface (SCI) adapter.
SCI is a (perhaps even "the") major competitor of Myrinet and also markets a high end low latency high bandwidth node interconnect for use in serious clusters designed to solve fine grained problems. SCI and Myrinet have very distinct architectures (and both require custom parallel library drivers to avoid the TCP/IP stack in e.g. MPI) and should be carefully compared for cost/benefit in any given planned application.
Introduction to the Administration of an Internet Based Local Network by Charles L. Hedrick.
Truly a classic, this work (and its companion, next) were my original "online guides" in TCP/IP networking (and networking in general). I seem to have one of the few copies that have been preserved and offered up on the web. I used these about sixteen years ago to learn about TCP/IP networking myself. You'd think they'd be dated, sixteen years later, but of course TCP/IP itself hasn't changed much, ethernet hasn't fundamentally changed -- amazingly, these are still some of the best simple introductions available anywhere on or off the net. Thank you, Charles L. Hedrick, wherever you might be!
Introduction to the Internet Protocols by Charles L. Hedrick.
The companion work to the local area networking guide above, this work is just incredible. In addition to beautiful and tediously typed pre-web pre-portable-graphics tty renditions of TCP, IP and ethernet packets showing most of the relevant details of their headers, this work reviews ports, RFC's, and more.
netperf
A network performance benchmarking tool. I was ready to yank this link as netperf hadn't substantively changed in years and it looked like it was abandoned, but a patch update was released in February of 2003 (just a couple of months ago as I type this) so it must be still under care.
NetPIPE
Another network performance benchmarking tool. NetPIPE was very, very useful under the previous, basically frozen version 2.4, but has been taken over and is now under active development and is becoming even more useful in version 3.x and counting. I highly recommend it to anyone engineering a cluster as it integrates with most of the parallel libraries you might be interested in using as well as with most of the network hardware you might be interested in testing and comparing. The Ames Scalable Computing Laboratory at Iowa State University is home to a number of other useful projects and tools as well.

General Linux Sites

The Linux Documentation Project
This is THE key site for ANYBODY interested in learning new linux tools or skills. This is effectively a library shelf full of open documentation for just about all the important open source tools and components and architectures and methodologies used in linux, in a clear and consistent format. An awesome resource, and one of the things that makes linux almost transcendently well-documented, ONCE you know where and how to look. The whole thing can generally be installed (as a snapshot) on any linux host for local access, but since it is a dynamic and changing resource, it is better to visit the website itself if you are connected.
Master Linux Kernel Repository.
These days, most people will likely get their kernels via a distribution, but in cluster computing people still not infrequently need to build their own, or meddle with drivers. This is where you get hot, fresh sources including the latest snapshots of the current development series. Not for wimps -- there is no better way to deeply, abidingly, break the hell out of your system (usually not permanently, fortunately) than by tweaking kernel source without any clue as to what you are doing. I say this as one who has broken the hell out of my system on numerous occasions trying to tweak or "fix" the kernel without really knowing what I was doing.
ibiblio linux archives.
It used to be called Sunsite, then the Metalab Linux Repository -- now ibiblio is a fairly ambitious project that (as far as I can tell) seeks to become a universal, indexed archive of all human knowledge -- or at least the fraction of it that lives in that growing area that is free by fiat of the authors, which will eventually be the only part that really matters. One of many interesting things about this site is that it was originally set up and continues to be run by a poet. A poet that happens to love linux and systems stuff and so forth, but a poet. A man after me own heart, that is, as I write a bit myself.
This HOWTO is obviously for people interested in using SMP systems in clusters. Actually, SMP is now so well-supported and stable in most linux kernels and distributions that it is almost "transparently easy", but there can still be some SMP issues that arise in exotic applications (which can include clusters) and for some device drivers. This was not true in the early days of 2.0.x; I spent almost as much time mucking about with kernel problems then as I did doing physics!

Miscellaneous Stuff

CERT Home Page
CERT is one of the primary internet security entities, responsible (among other things) for issuing warnings and recommendations as security exploits are discovered. If you are a systems manager, you should be on their Advisory mailing list already. If not, join it now.
SuperMongo Web Page.
Supermongo is a very nice graphics program. Site licenses are very inexpensive, and it comes in source form (so it is "open source" but not "free"). Obviously I personally like it better than the obvious GPL alternatives or I wouldn't link it here, right? ;-) I also have hopes that Robert Lupton and Patricia Monger (a pair of astrophysicists) will one day GPL it anyway. Feel free to gently suggest it, should you visit their website and decide to try it.
SPEC (CPU benchmark) Repository.
SPEC is one of the primary high end benchmarks for systems. The good news is that they are one fairly understandable metric (set) for systems performance. The bad news is that they are neither free nor open. Does this make them Evil? You decide.
lmbench
This is Larry McVoy's microbenchmark suite, now extended and maintained by Carl Staelin. I am pleased to say that I was one of the (probably many) people who urged Larry to GPL lmbench, and he did. So this suite IS both free and open, and is in fact extensively used by people doing serious systems engineering, Linus Torvalds being one of many excellent examples. It inspired me to write my own GPL microbenchmarking program, which continues to waddle towards a state where it will (eventually, I hope) be "easy" to add almost any code fragment and run a fairly accurate benchmark on it. Larry's company, bitmover, also provides a few other open clustering tools as well as a proprietary, very high end revision management tool.

To conclude, if you've actually scrolled down this far, I hope you find some of the links above useful. On the other hand, if you have a link that you know of that IS useful and not on the list, feel free to send it to me via the mailto link below.