Table of contents:
- What exactly is considered a "heterogeneous" cluster?
- Does LAM/MPI work on heterogeneous clusters?
- Do different versions of LAM/MPI constitute heterogeneous clusters?
- How do I install LAM on a heterogeneous cluster?
- How do I `lamboot` across a heterogeneous cluster?
- How do I execute the right binary on each node for each architecture in a heterogeneous system?
- Can I mix 32 and 64 bit executables in a single parallel MPI job?
In order for a cluster to be considered "homogeneous," all nodes must share the same:
- Architecture (e.g., CPU family)
- Operating system (to include the same OS version)
- Key component libraries, such as `libc` or `glibc` on Linux and the free BSD operating systems (as above, including the same library versions)
The first requirement — same architecture — has a bit of leeway. For example, two Pentium III machines with different amounts of RAM or a different CPU speed would still be considered homogeneous. In general, homogeneity is determined by whether the software compiled on one machine can run natively on another. In the case of the same CPU but different amounts of RAM or a different CPU speed, this is most likely true. This is not necessarily true between a Pentium II and a Pentium III, for example.
Some examples may help clarify:
- 32 Pentium III machines, each running a stock Red Hat 7.1 installation updated with all the most recent patches from Red Hat.
- 16 Pentium III nodes running Red Hat 7.1, 16 Pentium III nodes running Red Hat 7.0. Yes, even a minor difference in operating system constitutes being "different enough" to be heterogeneous.
- 16 Pentium III nodes running Red Hat 7.1, 16 Pentium III nodes running Mandrake 8.0. This one is questionable, since Mandrake professes to be compatible with Red Hat. So to be safe, call it heterogeneous.
- 16 Pentium III nodes running Red Hat 7.1, 16 Pentium III nodes running SuSE 7.2. This is most likely heterogeneous since the Linux distributions are different; it is possible that the Linux kernel versions are different, different versions of the GNU compilers are installed, and/or different versions of `glibc` are used, etc.
- 16 Pentium III nodes running Red Hat 7.1, 16 Pentium III nodes running OpenBSD 2.9. These are clearly two different operating systems.
- 16 Pentium II nodes and 16 Pentium III nodes all running Red Hat 7.1. You could play some tricks and treat this as a homogeneous cluster, but it is probably safer (and more efficient) to treat this as a heterogeneous cluster.
- 16 SunBlade 1000 nodes running Solaris 8, 16 SunBlade nodes running Solaris 9. The operating system difference makes this heterogeneous.
- 16 SunBlade 1000 nodes running Solaris 8, 16 Pentium III nodes running Red Hat 7.1. The architecture difference makes this heterogeneous.
LAM/MPI will work between just about any flavor of POSIX (with a few restrictions). That is, you can have two completely different machines (e.g., a Sun machine and an Intel-based machine), and LAM will run on both of them. More importantly, you can run a single parallel job that spans both of them.
An important restriction is that LAM does not currently support systems whose basic datatypes have different sizes. For example, if an integer is 64 bits on one machine and 32 bits on another, LAM's behavior is undefined. LAM also requires that floating point formats be the same: endianness can differ, but the same general format must be used by all participating machines. For example, older Alpha machines do not adhere to the IEEE floating point standard by default; such machines can be used in parallel jobs with other similar machines, but using them in a heterogeneous job would require enabling IEEE floating point compliance so that all nodes in the parallel job understand the same floating point formats.
Indeed, what is the Right Thing for an MPI to do in these kinds of situations, anyway? There really is no good answer — having MPI truncate when 64 bit integers are sent to 32 bit integers is not desirable, nor is having the MPI translate from one floating point format to another (for similar loss of precision reasons).
However, different versions of LAM will not work together. In order to successfully `lamboot` and `mpirun`, you must use the same version of LAM/MPI on all nodes, regardless of their operating system, architecture, etc.
There are two common ways to install LAM on a cluster:
- Install LAM on one node, and make the directory tree that LAM was installed to available to all nodes via a networked filesystem (such as NFS)
- Physically install LAM on each node in the cluster
Both of these methods are possible for heterogeneous clusters as well. Physically installing LAM on each node in the cluster is the safest, least complicated way to do this. However, it is potentially the most labor intensive, and most difficult to maintain over time.
In most cases, there will be multiple nodes of each kind in a heterogeneous cluster. As such, it may be useful to consider a heterogeneous cluster to be a group of homogeneous sub-clusters. So although local policies and requirements may vary, the LAM Team recommends installing LAM on a networked filesystem once for each homogeneous sub-cluster.
NOTE: There are some scalability issues with using networked filesystems on large clusters. As such, it may not be sufficient or desirable to use the common filesystem model at your site, depending on the size of your cluster and your choice of networked filesystem. YMMV.
For example, consider a cluster of 16 Pentium II nodes running Red Hat 7.0 and a second group of 16 Pentium III nodes running Red Hat 7.1. Both the architecture difference and the operating system difference make the combined cluster heterogeneous.
In the common filesystem model, LAM will need to be installed twice for the heterogeneous cluster described above: once for the PII/RH7.0 machines, and once for the PIII/RH7.1 machines. Each machine in the cluster will need to mount the appropriate LAM installation, and/or user paths will need to be set appropriately on each node in the cluster to point to the appropriate LAM installation.
- All nodes being `lamboot`ed must be using the same version of LAM (this is actually always a requirement; it is just a clarification that "heterogeneous" does not mean "different versions of LAM/MPI").
- Each user’s `$PATH` must be set up properly to find the Right version of LAM/MPI on each node. That is, if multiple installations of LAM are available on each node, the user’s `$PATH` must be set to find the appropriate installation for that node. For example, LAM might be installed on a networked filesystem in a separate tree under `/home/lam` for each architecture (and OS version).
If `/home/lam` is NFS mounted on all nodes in the cluster, each user’s `$PATH` must be set to use whichever tree is appropriate for the kind of node that they are logged in to. This is typically set in the user’s dot files (e.g., `$HOME/.profile`, `$HOME/.cshrc`, etc.), or in a system-wide default dot file (these vary between different operating systems).
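As a sketch of such a dot-file setup (the three tree names under `/home/lam` are hypothetical; adjust them to your site's actual layout), a Bourne-shell `$HOME/.profile` might select the right installation like this:

```shell
# Hypothetical layout: three per-architecture LAM trees under an
# NFS-mounted /home/lam (names are assumptions, not LAM defaults):
#   /home/lam/rh70  /home/lam/rh71  /home/lam/solaris8

# In $HOME/.profile: pick the tree matching the node we logged in to.
case "`uname -s`" in
    SunOS) LAMHOME=/home/lam/solaris8 ;;
    Linux) LAMHOME=/home/lam/rh71 ;;     # or rh70, per distribution
    *)     LAMHOME=/home/lam/other ;;
esac
PATH="$LAMHOME/bin:$PATH"
export PATH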
- If the right binaries are in the current working directory, and the current working directory is available on all nodes, `mpiexec` can execute them directly. For example:
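One plausible invocation follows; the program names are taken from the discussion below, and the use of `:` to separate the per-architecture segments follows the MPI-2 `mpiexec` convention, so treat the exact syntax as an assumption and check your LAM version's `mpiexec` man page:

```shell
# Launch the Linux binary on Linux nodes and the Solaris binary on
# Solaris nodes in the current LAM universe:
mpiexec -arch linux my_mpi_program.linux : \
        -arch solaris my_mpi_program.solaris
```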
LAM will look for Linux architecture nodes in the current universe and launch the executable `my_mpi_program.linux`. Similarly, LAM will launch the executable `my_mpi_program.solaris` on all Solaris nodes in the universe. The string after the `-arch` switch specifies a text string to match from the output of the GNU `config.guess` script (i.e., the output from `laminfo` in the architecture line).
- If the `$PATH` variable is set correctly for each node that LAM uses (i.e., separate directories exist containing MPI binaries for each architecture, and the correct directory for each architecture is inserted into the `$PATH` on each node), `mpirun C foo` will automatically find the `foo` for the right architecture.
- However, most users do not set their `$PATH` variable in this fashion. If `mpiexec` is not suitable, you will more than likely need to use an application schema ("app schema") file for this case. In the app schema, it is usually easiest to specify the absolute pathname of the program for each node. For example, using the following app schema file:
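A minimal app schema along these lines might look as follows; the node ranges and installation paths are hypothetical, chosen to match the Red Hat/SuSE scenario discussed here:

```
# appschema: absolute path of the correct binary for each node range
n0-n15  /home/lam/rh71/bin/foo     # Red Hat nodes
n16-n31 /home/lam/suse72/bin/foo   # SuSE nodes
```

The job would then be launched with `mpirun appschema`.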
Remember, it may be necessary to have different versions of the MPI binary for each OS version as well as each machine architecture. For example, you may need to have separate versions for Solaris 2.5 and 2.6. This is also true when running between different Linux distributions, as in the example above, where Red Hat and SuSE are considered different operating systems and therefore have their own copies of `foo`.
- Most 64 bit operating systems have the capacity to generate 32 bit executables (e.g., with `gcc -m32`). By doing so, one can make the cluster "homogeneous" (at least in terms of bit size). Once all the executables (including relevant libraries) are 32 bit, one can run MPI jobs as if it were a homogeneous cluster. Note that the LAM/MPI libraries and executables must also be built as 32 bit libraries/executables.
- The differences in datatype sizes between 32 and 64 bit machines are likely to create problems. Consider the scenario where a 64 bit process sends a message containing `MPI_LONG` data to a 32 bit process. What is the size of the datatype? On the 64 bit machine, each `MPI_LONG` is likely to be 64 bits, but on the 32 bit machine, it is likely to be 32 bits. So what should the 32 bit process do when it receives the data?
There is, unfortunately, no good answer to this. Obvious choices include raising an error or truncating the data, neither of which is attractive. Debugging such applications is non-trivial, so mixing bit sizes in this way is not the preferred solution.