User Documentation

Description

MPI-Start is an abstraction layer that offers a uniform interface to start MPI jobs with different MPI implementations.

Installation

Normally users do not need to install MPI-Start. However, if they want to use it at a site without an existing installation, the recommendation is to create a tarball installation that can be transferred in the input sandbox of the job.

In order to create a tarball installation, get the source code and do the following:

$ make tarball

This will create a mpi-start-X.Y.Z.tar.gz (with X.Y.Z being the version of MPI-Start) that contains everything needed for the execution of jobs. In your job script, unpack the tarball and set the I2G_MPI_START environment variable to $PWD/bin/mpi-start.
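
As a sketch, assuming the tarball travels in the input sandbox next to the job script and unpacks its bin/ directory into the current working directory, the beginning of such a job script could look like this:

#!/bin/sh
# Hypothetical fragment of a job script: unpack the MPI-Start tarball shipped
# in the input sandbox and point I2G_MPI_START at the bundled startup script.
# Replace X.Y.Z with the actual MPI-Start version.
tar xzf mpi-start-X.Y.Z.tar.gz
export I2G_MPI_START=$PWD/bin/mpi-start
# ... set the usual I2G_MPI_* variables and run $I2G_MPI_START as in the examples below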

Usage

MPI-Start can be controlled via environment variables or command line switches. Most configuration-dependent parameters are detected automatically by MPI-Start and do not need to be specified by the user. The following command line is enough to run the application with the site defaults:

$ mpi-start application [application arguments ...]

Command Line Options

-h
    show help message and exit
-V
    show mpi-start version
-t mpi_type
    use mpi_type as MPI implementation
-v
    be verbose
-vv
    include debug information
-vvv
    include full trace
-pre hook
    use hook as pre-hook file
-post hook
    use hook as post-hook file
-pcmd cmd
    use cmd as pre-command
-npnode n
    start n processes per node
-pnode
    start 1 process per node
-np n
    start exactly n processes
-i file
    use file as standard input file
-o file
    use file as standard output file
-e file
    use file as standard error file
-x VAR[=VALUE]
    define variable VAR with optional VALUE for the application's environment (will not be seen by MPI-Start!)
-d VAR=VALUE
    define variable VAR with VALUE
--
    optional separator for application and arguments; anything after it is treated as the application to run and its arguments

For example, the following command line would start /bin/hostname 3 times per available node using Open MPI:

$ mpi-start -t openmpi -npnode 3 -- /bin/hostname
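
A more complete invocation might combine several of the options listed above; the application name my_app and the file names are placeholders, not anything provided by MPI-Start:

$ mpi-start -t openmpi -np 8 -i input.dat -o std.out -e std.err -vv -- ./my_app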

Environment Variables

Prior to version 1.0.0, MPI-Start was controlled exclusively through environment variables. This is still possible, although command line arguments override any environment variables that are defined. The complete list of variables is:

I2G_MPI_APPLICATION
    The application binary to execute.
I2G_MPI_APPLICATION_ARGS
    The command line parameters for the application.
I2G_MPI_TYPE
    The name of the MPI implementation to use.
I2G_MPI_VERSION
    Specifies the version of the MPI implementation specified by I2G_MPI_TYPE. If not specified, the default version will be used.
I2G_MPI_PRE_RUN_HOOK
    This variable can be set to a script which must define the pre_run_hook function. This function will be called after the MPI support has been established and before the internal pre-run hooks. This hook can be used to prepare input data or compile the program.
I2G_MPI_POST_RUN_HOOK
    This variable can be set to a script which must define the post_run_hook function. This function will be called after the mpirun has finished.
I2G_MPI_START_VERBOSE
    Set to 1 to turn on additional output.
I2G_MPI_START_DEBUG
    Set to 1 to enable debugging output.
I2G_MPI_START_TRACE
    Set to 1 to trace every operation that is performed by mpi-start.
I2G_MPI_APPLICATION_STDIN
    Standard input file to use.
I2G_MPI_APPLICATION_STDOUT
    Standard output file to use.
I2G_MPI_APPLICATION_STDERR
    Standard error file to use.
I2G_MPI_SINGLE_PROCESS
    Set it to 1 to start only one process per node.
I2G_MPI_PER_NODE
    Number of processes to start per node.
I2G_MPI_NP
    Total number of processes to start.
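
For illustration, the following fragment drives an equivalent run purely through environment variables; my_mpi_app and its arguments are placeholders, and I2G_MPI_START is assumed to point to the mpi-start executable, as in the examples further below:

#!/bin/sh
# Sketch of an environment-variable driven run (no command line switches).
export I2G_MPI_APPLICATION=./my_mpi_app
export I2G_MPI_APPLICATION_ARGS="arg1 arg2"
export I2G_MPI_TYPE=openmpi
export I2G_MPI_NP=8
export I2G_MPI_START_VERBOSE=1

$I2G_MPI_START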

Scheduler and Execution Environment Support

MPI-Start supports different combinations of batch schedulers and execution environments through plugins. The scheduler is detected automatically from the environment, and the execution environment can be selected with the I2G_MPI_TYPE variable or the -t command line option.

Supported Schedulers

The default MPI-Start installation includes the following plugins:

sge
    supports Grid Engine
pbs
    supports PBS/Torque
lsf
    supports LSF
condor
    supports Condor. This plugin lacks the possibility to select how many processes per node should be run.
slurm
    supports Slurm. As with condor, the plugin currently lacks the processes per node support.

Execution Environments

MPI-Start supports different MPI implementations through its execution environment framework. Plugins are provided for Open MPI, MPICH2, MPICH, LAM-MPI and PACX-MPI.
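
For example, switching between implementations only changes the value passed to -t (or set in I2G_MPI_TYPE). The name openmpi is the one used in the examples on this page; the lowercase name mpich2 is assumed to follow the same convention, and ./my_mpi_app is again a placeholder:

$ mpi-start -t openmpi -- ./my_mpi_app
$ mpi-start -t mpich2  -- ./my_mpi_app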

Hooks

The hooks framework makes it possible to customize the behavior of MPI-Start. Users can provide their own hooks to perform any pre-run (e.g. compilation of binaries, data fetching) or post-run (e.g. storage of application results, clean-up) actions needed for the execution of their application. The Hooks Framework page describes the framework in detail and explains how to create user hooks.
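
A minimal sketch of such a hooks file is shown below; the function names pre_run_hook and post_run_hook are the ones MPI-Start expects (see the environment variables above), while the bodies are purely illustrative:

# hooks.sh - referenced through I2G_MPI_PRE_RUN_HOOK and I2G_MPI_POST_RUN_HOOK
pre_run_hook () {
    echo "pre_run_hook: fetch input data or compile the application here"
    return 0
}

post_run_hook () {
    echo "post_run_hook: store results or clean up here"
    return 0
}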

System configuration

MPI-Start can be configured to use the best options for each site. Check the SiteConfiguration page for more information.

Examples

Simple Job

#!/bin/sh
# IMPORTANT : This example script executes a
#             non-MPI program with Open MPI
#
export I2G_MPI_APPLICATION=/bin/hostname
export I2G_MPI_TYPE=openmpi

$I2G_MPI_START

Job with user specified hooks

#!/bin/sh
#
# MPI_START_SHARED_FS can be used to figure out if the current working
# directory is located on a shared file system or not (1=yes, 0=no).
#
# The "mpi_start_foreach_host" function takes as parameter the name of
# another function that will be called for each host in the machinefile,
# with the host name as its first parameter.
# - For each host the callback function will be called exactly once,
#   independently of how often the host appears in the machinefile.
# - The callback function will also be called for the local host.

# create the pre-run hook
cat > pre_run_hook.sh << EOF
pre_run_hook () {
    echo "pre run hook called "
    # - download data
    # - compile program

    if [ "x\$MPI_START_SHARED_FS" = "x0" ] ; then
        echo "If we need a shared file system we can return -1 to abort"
        # return -1
    fi

    return 0
}
EOF

# create the post-run hook
cat > post_run_hook.sh << EOF
# the first parameter is the name of a host in the machinefile
my_copy () {
    CMD="scp . \$1:\$PWD/mydata.1"
    echo \$CMD
    #\$CMD
    # upload data
}

post_run_hook () {
    echo "post_run_hook called"
    if [ "x\$MPI_START_SHARED_FS" = "x0" ] ; then
        echo "gather output from remote hosts"
        mpi_start_foreach_host my_copy
    fi
    return 0
}
EOF

export I2G_MPI_APPLICATION=mpi_sleep
export I2G_MPI_APPLICATION_ARGS=0
export I2G_MPI_TYPE=openmpi
export I2G_MPI_PRE_RUN_HOOK=./pre_run_hook.sh
export I2G_MPI_POST_RUN_HOOK=./post_run_hook.sh

$I2G_MPI_START

Using MPI-Start with grid middleware

gLite

gLite uses the WMS software to submit jobs to the different available resources. The WMS takes a job description in the JDL language and performs the selection and the actual submission of the job to the resources on behalf of the user. The following sections describe how to submit a job using the WMS.

Basic Job Submission

Jobs are described with the JDL language. Most relevant attributes for parallel job submission are:

  • CPUNumber: number of processes to allocate.

  • Requirements: requirements of the job; used to force the selection of sites with MPI-Start support.

The following example shows a job that uses 6 processes and is executed with Open MPI. The Requirements attribute makes the WMS select sites that publish support for both MPI-Start and Open MPI.

JobType       = "Normal";
CPUNumber     = 6;
Executable    = "starter.sh";
Arguments     = "OPENMPI hello_bin hello arguments";
InputSandbox  = {"starter.sh", "hello_bin"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
Requirements  = member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
                && member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);

The Executable attribute is a script that invokes MPI-Start with the correct options for the execution of the user's application. We propose a generic wrapper that can be used for any application and MPI flavour and that receives, through the Arguments attribute:

  • Name of MPI-Start execution environment (I2G_MPI_FLAVOUR variable), in the example: OPENMPI
  • Name of user binary, in the example: hello_bin
  • Arguments for the user binary, in the example: hello arguments

This is the content of the wrapper:

#!/bin/bash
# Pull in the arguments.
MPI_FLAVOR=$1

MPI_FLAVOR_LOWER=`echo $MPI_FLAVOR | tr '[:upper:]' '[:lower:]'`
export I2G_MPI_TYPE=$MPI_FLAVOR_LOWER

shift
export I2G_MPI_APPLICATION=$1

shift
export I2G_MPI_APPLICATION_ARGS=$*

# Touch the executable, and make sure it's executable.
touch $I2G_MPI_APPLICATION
chmod +x $I2G_MPI_APPLICATION

# Invoke mpi-start.
$I2G_MPI_START

The user needs to include this wrapper (starter.sh) in the InputSandbox of the JDL and set it as the Executable of the job. Submission is performed as for any other gLite job:

$ glite-wms-job-submit -a hello-mpi.sh

Connecting to the service https://gridwms01.ifca.es:7443/glite_wms_wmproxy_server


====================== glite-wms-job-submit Success ======================

The job has been successfully submitted to the WMProxy
Your job identifier is:

https://gridwms01.ifca.es:9000/8jG3MUNRm-ol7BqhFP5Crg

==========================================================================

Once the job is finished, the output can be retrieved:

$ glite-wms-job-output https://gridwms01.ifca.es:9000/8jG3MUNRm-ol7BqhFP5Crg

Connecting to the service https://gridwms01.ifca.es:7443/glite_wms_wmproxy_server

================================================================================

                        JOB GET OUTPUT OUTCOME

Output sandbox files for the job:
https://gridwms01.ifca.es:9000/8jG3MUNRm-ol7BqhFP5Crg
have been successfully retrieved and stored in the directory:
/gpfs/csic_projects/grid/tmp/jobOutput/enol_8jG3MUNRm-ol7BqhFP5Crg

================================================================================


$ cat /gpfs/csic_projects/grid/tmp/jobOutput/enol_8jG3MUNRm-ol7BqhFP5Crg/std.*
Hello world from gcsic054wn. Process 3 of 6
Hello world from gcsic054wn. Process 1 of 6
Hello world from gcsic054wn. Process 2 of 6
Hello world from gcsic054wn. Process 0 of 6
Hello world from gcsic055wn. Process 4 of 6
Hello world from gcsic055wn. Process 5 of 6

Modifying MPI-Start behavior

MPI-Start behavior can be customized by setting different environment variables (see the usage section above for a complete list). If using the generic wrapper, one easy way of customizing the MPI-Start execution is to use the Environment attribute of the JDL. The following JDL adds debugging to the previous example by setting the I2G_MPI_START_VERBOSE and I2G_MPI_START_DEBUG variables to 1:

JobType       = "Normal";
CPUNumber     = 6;
Executable    = "starter.sh";
Arguments     = "OPENMPI hello_bin hello arguments";
InputSandbox  = {"starter.sh", "hello_bin"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
Requirements  = member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
                && member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
Environment   = {"I2G_MPI_START_VERBOSE=1", "I2G_MPI_START_DEBUG=1"};

Hooks (see the Hooks Framework page) can also be used through this mechanism. If the user has a file with the MPI-Start hooks called hooks.sh, the following JDL adds it to the execution (note that the file is also added to the InputSandbox):

JobType       = "Normal";
CPUNumber     = 6;
Executable    = "starter.sh";
Arguments     = "OPENMPI hello_bin hello arguments";
InputSandbox  = {"starter.sh", "hello_bin", "hooks.sh"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
Requirements  = member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
                && member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
Environment   = {"I2G_MPI_PRE_RUN_HOOK=hooks.sh", "I2G_MPI_POST_RUN_HOOK=hooks.sh"};
