= MPI-Utils =
= Functional Description =
MPI-Utils is a meta-package that provides [[../ | mpi-start]] and a yaim MPI module to ease the installation of MPI support on the nodes.

= Installation =
MPI-Utils is a meta-package that depends on MPI-Start and on the yaim MPI module for configuring MPI support on the CE and WN. Administrators must install an MPI implementation and configure it at the site. Most Linux distributions provide ready-to-use packages for the Open MPI and MPICH implementations.

MPI-Utils can be installed from the EMI repositories, and should be as easy as:

{{{
yum install glite-mpi
}}}

or for EMI-2 and onwards:

{{{
yum install emi-mpi
}}}

On the WN, an MPI implementation must also be installed. Open MPI is recommended (the devel package allows users to compile their applications):

{{{
yum install openmpi openmpi-devel
}}}

{{{#!wiki caution
'''devel packages and compilers'''

The devel packages of the MPI implementations do not pull in a compiler as a dependency! If you want to support the compilation of MPI applications, install the compilers as well (e.g. gcc, gcc-gfortran, gcc-c++).
}}}
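
For example, on a Scientific Linux / RHEL-style WN the usual compiler set can be pulled in with yum (package names may differ on other distributions):

{{{
yum install gcc gcc-c++ gcc-gfortran
}}}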

= Configuration =

Configuration is necessary on both the CE and WNs in order to support and advertise MPI correctly. This is performed by the yaim MPI module which should be run on both types of nodes.

== WN Configuration ==

The yaim plugin on the WN prepares the environment for the correct execution of mpi-start. Each MPI flavour supported by the site must be enabled by setting the variable `MPI_<FLAVOUR>_ENABLE` to `"yes"`. For example, to enable Open MPI, add the following:

{{{
MPI_OPENMPI_ENABLE="yes"
}}}
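
Several flavours may be enabled at the same time. For instance, a site supporting both Open MPI and MPICH2 (an illustrative combination; any of the supported flavours works) would set:

{{{
MPI_OPENMPI_ENABLE="yes"
MPI_MPICH2_ENABLE="yes"
}}}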

Optionally, if you are using an MPI implementation not provided by the OS, you can define its location and version with `MPI_<FLAVOUR>_VERSION` and `MPI_<FLAVOUR>_PATH`. '''Do not use these variables if you are using the OS-provided MPI implementations'''. For example, for Open MPI version 1.3, installed at /opt/openmpi-1.3:

{{{
MPI_OPENMPI_VERSION="1.3"
MPI_OPENMPI_PATH="/opt/openmpi-1.3/"
}}}

MPI flavours that use a particular mpiexec for starting jobs (e.g. OSC mpiexec on PBS/Torque systems) may also have the path to that binary set in `MPI_<FLAVOUR>_MPIEXEC`. '''Do not use this variable unless you are using a different mpiexec from the one provided by the MPI implementation.'''
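
For example, a PBS/Torque site using OSC mpiexec with MPICH would set:

{{{
MPI_MPICH_MPIEXEC="/usr/bin/mpiexec"
}}}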

Additionally, you may specify a default MPI flavour, used when none is selected for execution, with the `MPI_DEFAULT_FLAVOUR` variable. If no default flavour is specified, the first one defined in your site-info.def is taken as the default.
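
For example, to make Open MPI the default flavour:

{{{
MPI_DEFAULT_FLAVOUR=OPENMPI
}}}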

If you provide a shared filesystem for the execution of applications, but it is not the path where the jobs are started, then set the variable `MPI_SHARED_HOME` to `"yes"` and the variable `MPI_SHARED_HOME_PATH` to the location of the shared filesystem. '''Do not use these variables if the application starts its execution in a shared directory (e.g. a shared home); that situation should be detected automatically'''.
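
For instance, a site exporting a shared scratch area at the (hypothetical) path `/mnt/mpi-shared` would set:

{{{
MPI_SHARED_HOME="yes"
MPI_SHARED_HOME_PATH="/mnt/mpi-shared"
}}}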

If you use ssh host based authentication, set the variable `MPI_SSH_HOST_BASED_AUTH` to `"yes"`.

{{{#!wiki caution
'''SSH configuration'''

The yaim plugin '''DOES NOT''' configure passwordless ssh between the Worker Nodes. It must be configured manually by the site admin. The `MPI_SSH_HOST_BASED_AUTH` variable just sets some environment variables for the execution of the jobs.
}}}

Lastly, if you use a non-default location for mpi-start, set its location with the `MPI_MPI_START` variable.

The complete list of configuration variables for the WN is shown in the following table:
||'''Variable''' || '''Mandatory''' || '''Description''' ||
||`MPI_<FLAVOUR>_ENABLE` || YES || Set to `"yes"` to enable the <flavour> ||
||`MPI_<FLAVOUR>_VERSION` || NO || Set to the supported version of the <flavour>; usually detected automatically ||
||`MPI_<FLAVOUR>_PATH` || NO || Set to the path of the supported version of the <flavour>; usually detected automatically by the yaim WN plugin ||
||`MPI_<FLAVOUR>_MPIEXEC` || NO || If you are using OSC mpiexec (only at PBS/Torque sites), set this to the location of the mpiexec program, e.g. `"/usr/bin/mpiexec"` ||
||`MPI_DEFAULT_FLAVOUR` || NO || Set to the default flavour for your site; if undefined, the first defined flavour is used ||
||`MPI_SHARED_HOME` || NO || Set to `"yes"` if you have a shared home area between WNs ||
||`MPI_SHARED_HOME_PATH` || NO || Location of the shared area for execution of MPI applications ||
||`MPI_SSH_HOST_BASED_AUTH` || NO || Set to `"yes"` if you have SSH host-based authentication between WNs ||
||`MPI_MPI_START` || NO || Location of mpi-start if not installed in the standard location (`/usr/bin/mpi-start`) ||

The yaim profile for a worker node is MPI_WN. Use it along with any other profiles you may need for your WN:
{{{
/opt/glite/yaim/bin/yaim -c -s site-info.def -n MPI_WN -n <other_WN_profiles>
}}}
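
For instance, on a Torque worker node the MPI profile is combined with the usual WN profiles:

{{{
/opt/glite/yaim/bin/yaim -c -s site-info.def -n MPI_WN -n glite-WN -n TORQUE_client
}}}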

== CE Configuration ==

As on the WN, individual flavours of MPI are enabled by setting the associated `MPI_<FLAVOUR>_ENABLE` variable to `"yes"`. The version of the MPI implementation must also be specified with the variable `MPI_<FLAVOUR>_VERSION`, e.g. for configuring Open MPI version 1.3:
{{{#!highlight sh
MPI_OPENMPI_ENABLE="yes"
MPI_OPENMPI_VERSION="1.3"
}}}

Possible flavours are:
 * OPENMPI for Open MPI
 * MPICH for MPICH-1
 * MPICH2 for MPICH-2
 * LAM for LAM-MPI

The use of a shared home should also be announced by setting `MPI_SHARED_HOME` to `"yes"`.

If you are using PBS/Torque, you can set the variable `MPI_SUBMIT_FILTER` to `"yes"` in order to enable the submission of parallel jobs in your system.

{{{#!wiki caution
The submit filter assumes that your Worker Nodes are correctly configured to publish the `ncpus` variable, with the number of available slots, in their status. If that is not the case, you may edit the file `/var/torque/submit_filter` at line 71 to fit your pbsnodes output. An example using the `np` value is commented out in the file.
}}}

The complete list of configuration variables for the CE is shown in the following table:
||'''Variable''' || '''Mandatory''' || '''Description''' ||
||`MPI_<FLAVOUR>_ENABLE` || YES || Set to `"yes"` to enable the <flavour> ||
||`MPI_<FLAVOUR>_VERSION` || YES || Set to the supported version of the <flavour> ||
||`MPI_START_VERSION` || NO || Set to the available mpi-start version. If not set, the yaim plugin will try to determine the version by checking whether mpi-start is installed ||
||`MPI_SHARED_HOME` || NO || Set to `"yes"` if you have a shared home area between WNs ||
||`MPI_SUBMIT_FILTER` || NO || Set to `"yes"` to configure the submit filter for the Torque batch system that enables the submission of parallel jobs. The configuration assumes that the Torque path is `/var/torque`, or the `TORQUE_VAR_DIR` variable if defined ||

The profile for configuring the CE is MPI_CE.
{{{
/opt/glite/yaim/bin/yaim -c -s site-info.def -n MPI_CE -n <other_ce_profiles>
}}}

{{{#!wiki caution
'''MPI_CE and other yaim profiles'''

The `MPI_CE` profile should be the first one in the yaim configuration; otherwise the Glue variables will not be properly defined. This restriction may be removed in future versions.
}}}
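
For example, on a CREAM CE the MPI profile goes first (the `creamCE` profile name applies to gLite CREAM installations; adjust to your setup):

{{{
/opt/glite/yaim/bin/yaim -c -s site-info.def -n MPI_CE -n creamCE
}}}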

{{{#!wiki caution
'''mpi-start version'''

The yaim plugin will publish the mpi-start version in the CE tags if mpi-start is installed at the CE. If it is not installed, you should define `MPI_START_VERSION` with the version available on the WNs.
}}}
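
For example, if the WNs ship mpi-start but the CE does not, set the version explicitly (the version number here is illustrative):

{{{
MPI_START_VERSION="1.0.4"
}}}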

=== Batch system ===

{{{#!wiki caution
'''Batch system and MPI'''

The batch system may need extra configuration for the submission of MPI jobs. In PBS, you may use the automatic creation of the submit filter via the `MPI_SUBMIT_FILTER` variable. In the case of SGE, you need to configure a parallel environment.
Check the documentation of your batch system for further details.
}}}
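
As an illustration, a minimal SGE parallel environment for MPI might look like the following sketch (the name `mpi` and the slot count are site-specific assumptions; create it with `qconf -ap mpi` and attach it to the relevant queues):

{{{
pe_name            mpi
slots              999
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
}}}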

==== Submit filter for PBS/Torque ====

===== glite-yaim-mpi <= 1.1.10 =====

When `MPI_SUBMIT_FILTER` is used to automatically create the submit filter for Torque/PBS, the filter assumes that the `pbsnodes -a` output has the "ncpus=" field correctly set in the status line. If not, please change the submit filter as shown in this diff:

{{{
--- submit_filter 2012-01-20 11:19:48.000000000 +0100
+++ submit_filter.new 2012-01-20 11:19:21.000000000 +0100
@@ -68,8 +68,8 @@
         if (m/^\s*state\s*=\s*(\w+)/) {
             $state = ($1 eq "offline") ? 0 : 1;
         # This may be changed to fit your nodes description
-        # } elsif (m/^\s*np\s*=\s*(\d+)/) {
-        } elsif (m/^\s*status\s*=\s*.*ncpus=(\d+),/) {
+        } elsif (m/^\s*np\s*=\s*(\d+)/) {
+        # } elsif (m/^\s*status\s*=\s*.*ncpus=(\d+),/) {
             my $ncpus = $1;
             if ($state) {
                 if (defined($machines{$ncpus})) {
}}}

{{{#!wiki caution
'''Reconfiguration'''

Any changes to the submit filter will be overwritten if yaim is re-run.
}}}


===== glite-yaim-mpi >= 1.1.11 =====

The default behaviour of the submit filter changed in this version: it now uses the "np=xx" parameter of the pbsnodes command output. See the patch shown in the previous section for the changes applied.
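
To check which of the two fields your nodes actually report, you can inspect the pbsnodes output directly, e.g.:

{{{
pbsnodes -a | grep -E 'np = |ncpus='
}}}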


== Example configuration ==

Here is an example configuration (including both CE and WN variables):
{{{#!highlight sh
#----------------------------------
# MPI-related configuration:
#----------------------------------
# Several MPI implementations (or "flavours") are available.
# If you do NOT want a flavour to be installed/configured, set its variable
# to "no". Else, set it to "yes" (default). If you want to use an
# already installed version of an implementation, set its "_PATH" and
# "_VERSION" variables to match your setup (examples below).
#
# NOTE 1: the CE_RUNTIMEENV will be automatically updated in the file
# functions/config_mpi, so that the CE advertises the MPI implementations
# you choose here - you do NOT have to change it manually in this file.
# It will become something like this:
#
#   CE_RUNTIMEENV="$CE_RUNTIMEENV
#              MPI_MPICH
#              MPI_MPICH2
#              MPI_OPENMPI
#              MPI_LAM"
#
# NOTE 2: it is currently NOT possible to configure multiple concurrent
# versions of the same implementations (e.g. MPICH-1.2.3 and MPICH-1.2.7)
# using YAIM. Customize "/opt/glite/yaim/functions/config_mpi" file
# to do so.

MPI_MPICH_ENABLE="yes"
MPI_MPICH_VERSION="1.2.7p1"

MPI_MPICH2_ENABLE="yes"
MPI_MPICH2_VERSION="1.0.4"

MPI_OPENMPI_ENABLE="yes"
MPI_OPENMPI_VERSION="1.1"

MPI_LAM_ENABLE="yes"
MPI_LAM_VERSION="7.1.2"

# set Open MPI as default flavour
MPI_DEFAULT_FLAVOUR=OPENMPI

#---
# Example for using an already installed version of MPI.
# Setting "_PATH" and "_VERSION" variables will prevent YAIM
# from using the default OS locations
# Just fill in the path to its current installation (e.g. "/usr")
# and which version it is (e.g. "6.5.9").
# DO NOT USE UNLESS A NON DEFAULT LOCATION IS USED
#---
# MPI_MPICH_PATH="/opt/mpich-1.2.7p1/"
# MPI_MPICH2_PATH="/opt/mpich2-1.0.4/"

# If you do NOT provide a shared home, set $MPI_SHARED_HOME to "no" (default).
#
# MPI_SHARED_HOME="yes"

#
# If you do NOT have SSH Hostbased Authentication between your WNs, set the below
# variable to "no" (default). Else, set it to "yes".
#
# MPI_SSH_HOST_BASED_AUTH="yes"

# If you use Torque as batch system, you may want to let the yaim plugin
# configure a submit filter for you. Uncomment the following line to do so
# MPI_SUBMIT_FILTER="yes"

#
# If you provide an 'mpiexec' for MPICH or MPICH2, please state the full path to
# that file here (http://www.osc.edu/~pw/mpiexec/index.php). Else, leave empty.
#
#MPI_MPICH_MPIEXEC="/usr/bin/mpiexec"
}}}


== Testing ==
You can do some basic tests by logging in on a WN as a pool user and running the following:

{{{
[dte056@cagnode48 dte056]$ env | grep MPI_
}}}

You should see something like this:

{{{
MPI_MPICC_OPTS=-m32
MPI_SSH_HOST_BASED_AUTH=yes
MPI_OPENMPI_PATH=/opt/openmpi/1.1
MPI_LAM_VERSION=7.1.2
MPI_MPICXX_OPTS=-m32
MPI_LAM_PATH=/usr
MPI_OPENMPI_VERSION=1.1
MPI_MPIF77_OPTS=-m32
MPI_MPICH_VERSION=1.2.7
MPI_MPIEXEC_PATH=/opt/mpiexec-0.80
MPI_MPICH2_PATH=/opt/mpich2-1.0.4
MPI_MPICH2_VERSION=1.0.4
I2G_MPI_START=/opt/mpi-start/bin/mpi-start
MPI_MPICH_PATH=/opt/mpich-1.2.7p1
}}}

You can also try submitting a job to your site; see the [[../UserDocumentation | MPI-Start user documentation]].
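
As a further check, you can run a trivial payload through mpi-start itself; this is a minimal sketch that assumes Open MPI is enabled, and it should be run inside a parallel batch job so that mpi-start can obtain the machine list from the scheduler:

{{{
# run /bin/hostname on the allocated slots via mpi-start
I2G_MPI_APPLICATION=/bin/hostname I2G_MPI_TYPE=openmpi mpi-start
}}}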

