Functional Description
MPI-Utils is a meta package that provides mpi-start and a yaim MPI module in order to ease the installation of MPI support on the nodes.
Installation
MPI-Utils is a meta package that depends on MPI-Start and the yaim MPI module for configuration of MPI support on the CE and WN. Administrators must install an MPI implementation and configure it at the site. Most Linux distributions provide ready-to-use packages for the Open MPI and MPICH implementations.
MPI-Utils can be installed from the EMI repositories, and should be as easy as:
yum install glite-mpi
or for EMI-2 and onwards:
yum install emi-mpi
On the WN, an MPI implementation must also be installed. Open MPI is recommended (the devel package allows users to compile their applications):
yum install openmpi openmpi-devel
devel packages and compilers
The devel packages of the MPI implementations do not include the compilers as a dependency! You should install them as well if you want to support the compilation of MPI applications (e.g. gcc, gcc-gfortran, gcc-c++).
Configuration
Configuration is necessary on both the CE and WNs in order to support and advertise MPI correctly. This is performed by the yaim MPI module which should be run on both types of nodes.
WN Configuration
The yaim plugin on the WN prepares the environment for the correct execution of mpi-start. Each of the MPI flavours supported by the site must be enabled by setting the variable MPI_<FLAVOUR>_ENABLE to "yes". For example, to enable Open MPI, add the following:
MPI_OPENMPI_ENABLE="yes"
Optionally, if you are using a non-OS-provided MPI implementation, you can define the location and version with MPI_<FLAVOUR>_VERSION and MPI_<FLAVOUR>_PATH. Do not use these variables if you are using the OS-provided MPI implementations. For example, for Open MPI version 1.3 installed at /opt/openmpi-1.3:
MPI_OPENMPI_VERSION="1.3"
MPI_OPENMPI_PATH="/opt/openmpi-1.3/"
MPI flavours that use a particular mpiexec for starting jobs (e.g. OSC mpiexec for PBS/Torque systems) may also specify the path to that binary in MPI_<FLAVOUR>_MPIEXEC. Do not set this variable unless you are using a different mpiexec from the one provided by the MPI implementation.
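For example, a site using OSC mpiexec with MPICH on PBS/Torque might add the following to site-info.def (the path shown is illustrative; adjust it to your installation):

```shell
# Use OSC mpiexec instead of the mpiexec shipped with MPICH
MPI_MPICH_MPIEXEC="/usr/bin/mpiexec"
```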
Additionally, you may specify a default MPI flavour to use when none is selected for execution, with the MPI_DEFAULT_FLAVOUR variable. If no default flavour is specified, the first one defined in your site-info.def is taken as the default.
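For example, to make Open MPI the flavour used when a job does not request one explicitly:

```shell
MPI_DEFAULT_FLAVOUR="OPENMPI"
```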
If you provide a shared filesystem for the execution of applications, but it is not the path where jobs are started, then set the variable MPI_SHARED_HOME to "yes" and the variable MPI_SHARED_HOME_PATH to the location of the shared filesystem. Do not use these variables if the application starts its execution in a shared directory (e.g. a shared home); that situation is detected automatically.
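A sketch for a site whose jobs start in a local scratch directory but have a cluster-wide filesystem available (the mount point below is hypothetical; use your site's actual path):

```shell
# Jobs do not start in a shared directory, but a shared filesystem exists
MPI_SHARED_HOME="yes"
# Hypothetical mount point of the shared filesystem; adjust to your site
MPI_SHARED_HOME_PATH="/shared/mpi"
```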
If you use ssh host based authentication, set the variable MPI_SSH_HOST_BASED_AUTH to "yes".
SSH configuration
The yaim plugin DOES NOT configure passwordless ssh between the Worker Nodes. It must be configured manually by the site admin. The MPI_SSH_HOST_BASED_AUTH variable just sets some environment variables for the execution of the jobs.
Lastly, if you use a non-default location for mpi-start, set its location with the MPI_MPI_START variable.
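For example, for an mpi-start installation under /opt/mpi-start (location illustrative, matching the I2G_MPI_START value shown in the Testing section):

```shell
MPI_MPI_START="/opt/mpi-start/bin/mpi-start"
```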
The complete list of configuration variables for the WN is shown in the next table:
Variable | Mandatory | Description
---------|-----------|------------
MPI_<FLAVOUR>_ENABLE | YES | Set to "yes" if you want to enable the <flavour>
MPI_<FLAVOUR>_VERSION | NO | Set to the supported version of the <flavour>; usually detected automatically
MPI_<FLAVOUR>_PATH | NO | Set to the path of the supported version of the <flavour>; usually detected automatically by the yaim WN plugin
MPI_<FLAVOUR>_MPIEXEC | NO | If you are using OSC mpiexec (PBS/Torque sites only), set this to the location of the mpiexec program, e.g. "/usr/bin/mpiexec"
MPI_DEFAULT_FLAVOUR | NO | Set to the default flavour for your site; if undefined, the first defined flavour is used
MPI_SHARED_HOME | NO | Set to "yes" if you have a shared home area between WNs
MPI_SHARED_HOME_PATH | NO | Location of the shared area for execution of MPI applications
MPI_SSH_HOST_BASED_AUTH | NO | Set to "yes" if you have SSH host-based authentication between WNs
MPI_MPI_START | NO | Location of mpi-start if not installed in the standard location (/usr/bin/mpi-start)
The profile for a worker node is MPI_WN. Use it along with any other profiles you may need for your WN.
/opt/glite/yaim/bin/yaim -c -s site-info.def -n MPI_WN -n <other_WN_profiles>
CE Configuration
As with the WN, individual flavours of MPI are enabled by setting the associated MPI_<FLAVOUR>_ENABLE variable to "yes". The version of the MPI implementation must also be specified with the variable MPI_<FLAVOUR>_VERSION, e.g. for configuring Open MPI version 1.3:
MPI_OPENMPI_ENABLE="yes"
MPI_OPENMPI_VERSION="1.3"
Possible flavours are:
- OPENMPI for Open MPI
- MPICH for MPICH-1
- MPICH2 for MPICH-2
- LAM for LAM-MPI
The use of shared homes should also be advertised by setting MPI_SHARED_HOME to "yes".
If you are using PBS/Torque, you can set the variable MPI_SUBMIT_FILTER to "yes" in order to enable the submission of parallel jobs in your system.
The submit filter assumes that your Worker Nodes are correctly configured to publish the ncpus variable, with the number of available slots, in their status. If that is not the case at your site, you may edit line 71 of the file /var/torque/submit_filter to fit your pbsnodes output. An example using the np value is commented out in the file.
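To check this, run pbsnodes -a on the CE and look for an ncpus= field in the status line. The output looks roughly like the following sketch (node name and values are illustrative; most status fields omitted):

```shell
$ pbsnodes -a
node01.example.org
     state = free
     np = 8
     ntype = cluster
     status = rectime=1327054788,ncpus=8,loadave=0.25,...
```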
The complete list of configuration variables for the CE is shown in the next table:
Variable | Mandatory | Description
---------|-----------|------------
MPI_<FLAVOUR>_ENABLE | YES | Set to "yes" if you want to enable the <flavour>
MPI_<FLAVOUR>_VERSION | YES | Set to the supported version of the <flavour>
MPI_START_VERSION | NO | Set to the available mpi-start version. If not set, the yaim plugin will try to determine the version by checking whether mpi-start is installed.
MPI_SHARED_HOME | NO | Set to "yes" if you have a shared home area between WNs
MPI_SUBMIT_FILTER | NO | Set to "yes" to configure the submit filter for the Torque batch system that enables the submission of parallel jobs. The configuration assumes the Torque path is /var/torque, or the TORQUE_VAR_DIR variable if defined.
The profile for configuring the CE is MPI_CE.
/opt/glite/yaim/bin/yaim -c -s site-info.def -n MPI_CE -n <other_ce_profiles>
MPI_CE and other yaim profiles
The MPI_CE profile should be the first in the yaim configuration, otherwise the Glue variables will not be properly defined. This restriction may be removed in future versions.
mpi-start version
The yaim plugin will publish the mpi-start version in the tags if mpi-start is installed on the CE. If it is not installed, you should define MPI_START_VERSION with the version available on the WNs.
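For example, if the WNs have mpi-start 1.0.4 installed but the CE does not (the version string is illustrative; use the version actually deployed on your WNs):

```shell
MPI_START_VERSION="1.0.4"
```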
Batch system
Batch system and MPI
The batch system may need extra configuration for the submission of MPI jobs. In PBS, you may use the automatic creation of the submit filter with the MPI_SUBMIT_FILTER variable. In the case of SGE you need to configure a parallel environment. Check the documentation of your batch system for any further details.
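For SGE, a parallel environment can be created with qconf -ap <pe_name> and attached to a queue. A minimal sketch of such a PE definition is shown below; the name and values are illustrative, so adapt the slot count and allocation rule to your site policy:

```shell
# Example SGE parallel environment definition (as shown by: qconf -sp mpi)
pe_name            mpi
slots              999
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
```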
Submit filter for PBS/Torque
glite-yaim-mpi <= 1.1.10
When MPI_SUBMIT_FILTER is used to automatically create the submit filter in Torque/PBS, the filter assumes that the pbsnodes -a output has the "ncpus=" field correctly set in the status line. If not, please change the submit filter as shown in this diff:
--- submit_filter	2012-01-20 11:19:48.000000000 +0100
+++ submit_filter.new	2012-01-20 11:19:21.000000000 +0100
@@ -68,8 +68,8 @@
     if (m/^\s*state\s*=\s*(\w+)/) {
         $state = ($1 eq "offline") ? 0 : 1;
     # This may be changed to fit your nodes description
-    # } elsif (m/^\s*np\s*=\s*(\d+)/) {
-    } elsif (m/^\s*status\s*=\s*.*ncpus=(\d+),/) {
+    } elsif (m/^\s*np\s*=\s*(\d+)/) {
+    # } elsif (m/^\s*status\s*=\s*.*ncpus=(\d+),/) {
         my $ncpus = $1;
         if ($state) {
             if (defined($machines{$ncpus})) {
Reconfiguration
Any changes to the submit filter will be overwritten if yaim is re-run.
glite-yaim-mpi >= 1.1.11
The default behaviour of the submit filter has changed in this version to use the "np=xx" parameter of the pbsnodes command output. Check the patch shown in the previous section for the changes applied.
Example configuration
Here is an example configuration (with both CE and WN variables!):
#----------------------------------
# MPI-related configuration:
#----------------------------------
# Several MPI implementations (or "flavours") are available.
# If you do NOT want a flavour to be installed/configured, set its variable
# to "no". Else, set it to "yes" (default). If you want to use an
# already installed version of an implementation, set its "_PATH" and
# "_VERSION" variables to match your setup (examples below).
#
# NOTE 1: the CE_RUNTIMEENV will be automatically updated in the file
# functions/config_mpi, so that the CE advertises the MPI implementations
# you choose here - you do NOT have to change it manually in this file.
# It will become something like this:
#
# CE_RUNTIMEENV="$CE_RUNTIMEENV
#                MPI_MPICH
#                MPI_MPICH2
#                MPI_OPENMPI
#                MPI_LAM"
#
# NOTE 2: it is currently NOT possible to configure multiple concurrent
# versions of the same implementations (e.g. MPICH-1.2.3 and MPICH-1.2.7)
# using YAIM. Customize the "/opt/glite/yaim/functions/config_mpi" file
# to do so.

MPI_MPICH_ENABLE="yes"
MPI_MPICH_VERSION="1.2.7p1"

MPI_MPICH2_ENABLE="yes"
MPI_MPICH2_VERSION="1.0.4"

MPI_OPENMPI_ENABLE="yes"
MPI_OPENMPI_VERSION="1.1"

MPI_LAM_ENABLE="yes"
MPI_LAM_VERSION="7.1.2"

# set Open MPI as default flavour
MPI_DEFAULT_FLAVOUR=OPENMPI

#---
# Example for using an already installed version of MPI.
# Setting "_PATH" and "_VERSION" variables will prevent YAIM
# from using the default OS locations.
# Just fill in the path to its current installation (e.g. "/usr")
# and which version it is (e.g. "6.5.9").
# DO NOT USE UNLESS A NON-DEFAULT LOCATION IS USED
#---
# MPI_MPICH_PATH="/opt/mpich-1.2.7p1/"
# MPI_MPICH2_PATH="/opt/mpich2-1.0.4/"

# If you do NOT provide a shared home, set $MPI_SHARED_HOME to "no" (default).
#
# MPI_SHARED_HOME="yes"

#
# If you do NOT have SSH Hostbased Authentication between your WNs, set the below
# variable to "no" (default). Else, set it to "yes".
#
# MPI_SSH_HOST_BASED_AUTH="yes"

# If you use Torque as batch system, you may want to let the yaim plugin
# configure a submit filter for you. Uncomment the following line to do so.
# MPI_SUBMIT_FILTER="yes"

#
# If you provide an 'mpiexec' for MPICH or MPICH2, please state the full path to
# that file here (http://www.osc.edu/~pw/mpiexec/index.php). Else, leave empty.
#
# MPI_MPICH_MPIEXEC="/usr/bin/mpiexec"
Testing
You can do some basic tests by logging in on a WN as a pool user and running the following:
[dte056@cagnode48 dte056]$ env|grep MPI_
You should see something like this:
MPI_MPICC_OPTS=-m32
MPI_SSH_HOST_BASED_AUTH=yes
MPI_OPENMPI_PATH=/opt/openmpi/1.1
MPI_LAM_VERSION=7.1.2
MPI_MPICXX_OPTS=-m32
MPI_LAM_PATH=/usr
MPI_OPENMPI_VERSION=1.1
MPI_MPIF77_OPTS=-m32
MPI_MPICH_VERSION=1.2.7
MPI_MPIEXEC_PATH=/opt/mpiexec-0.80
MPI_MPICH2_PATH=/opt/mpich2-1.0.4
MPI_MPICH2_VERSION=1.0.4
I2G_MPI_START=/opt/mpi-start/bin/mpi-start
MPI_MPICH_PATH=/opt/mpich-1.2.7p1
You can also try submitting a job to your site; please read the MPI-Start user documentation.
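As a starting point, an MPI job description for gLite typically requests a number of CPUs and requires the MPI-START and flavour tags published by the CE. A minimal JDL sketch follows; the file names, CPU count and flavour are illustrative, and the wrapper script is described in the MPI-Start user documentation:

```
JobType       = "Normal";
CpuNumber     = 4;
Executable    = "mpi-start-wrapper.sh";
Arguments     = "mpi-test OPENMPI";
InputSandbox  = {"mpi-start-wrapper.sh", "mpi-test.c"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
Requirements  = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
             && Member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
```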