User Documentation
Description
MPI-Start is an abstraction layer that offers a single interface to start MPI jobs with different MPI implementations.
Installation
Normally users do not need to install MPI-Start. However, to use it at a site without an existing installation, the recommendation is to create a tarball installation that can be transferred in the input sandbox of the job.
In order to create a tarball installation, get the source code and do the following:
$ make tarball
This will create a mpi-start-X.Y.Z.tar.gz (with X.Y.Z being the version of MPI-Start) that contains everything needed for the execution of jobs. In your job script, unpack the tarball and set the I2G_MPI_START environment variable to $PWD/bin/mpi-start.
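In a job script this can look like the following sketch (the tarball name is a placeholder for the version you actually built):

```shell
#!/bin/sh
# Unpack the MPI-Start tarball shipped in the job's input sandbox.
# The tarball name is a placeholder; substitute the version you built.
TARBALL=mpi-start-1.0.0.tar.gz
if [ -f "$TARBALL" ] ; then
    tar xzf "$TARBALL"
fi
# Point I2G_MPI_START at the unpacked installation.
export I2G_MPI_START=$PWD/bin/mpi-start
```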
Usage
All necessary information for the mpi-start program is provided via environment variables. A user has to set the variables specified below before executing mpi-start.
Environment Variables
- I2G_MPI_APPLICATION
- The application binary to execute.
- I2G_MPI_APPLICATION_ARGS
- The command line parameters for the application
- I2G_MPI_TYPE
- The name of the MPI implementation to use. So far defined values : - openmpi
- pacx-mpi (Check RunningPacxMpi for details on how to run this kind of jobs) - mpich - mpich2 - lam
- I2G_MPI_VERSION
- Specifies the version of the MPI implementation specified by I2G_MPI_TYPE. If not specified the default version will be used.
- I2G_MPI_PRE_RUN_HOOK
- This variable can be set to a script which must define the pre_run_hook function. This function will be called after the MPI support has been established and before the internal pre-run hooks. This hook can be used to prepare input data or compile the program.
- I2G_MPI_POST_RUN_HOOK
- This variable can be set to a script which must define the post_run_hook function. This function will be called after the mpirun has finished.
EGEE Environment
mpi-start supports the EGEE environment variable schema for specifying the local MPI installations. For the latest version of the EGEE environment specification refer to:
Debugging
For debugging purposes, the I2G_MPI_START_DEBUG variable can be set to 1 to enable debugging output. The I2G_MPI_START_VERBOSE variable can be set to 1 to turn on additional output.
The variable I2G_MPI_START_TRACE can be set to 1 to trace every operation performed by mpi-start (the trace goes to stderr).
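For example, a job script could enable all three flags before invoking MPI-Start:

```shell
#!/bin/sh
# Turn on verbose, debug and trace output for the next MPI-Start run.
export I2G_MPI_START_VERBOSE=1
export I2G_MPI_START_DEBUG=1
export I2G_MPI_START_TRACE=1   # trace output goes to stderr
```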
Hooks
The Hooks framework opens the possibility of customizing the behavior of MPI-Start. Users can provide their own hooks to perform any pre (e.g. compilation of binaries, data fetching) or post (e.g. storage of application results, clean-up) actions needed for the execution of their application. The Hooks Framework page describes in detail the framework and how to create user hooks.
Other Environment Variables
This section lists other environment variables that affect MPI-Start execution but are less frequently used:
- I2G_MPI_APPLICATION_STDIN
- Standard input file to use.
- I2G_MPI_APPLICATION_STDOUT
- Standard output file to use.
- I2G_MPI_FILE_DIST
- File distribution method to use (see Hooks Framework).
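For instance, standard input and output of the application can be redirected to files (the file names below are only illustrative):

```shell
#!/bin/sh
# Redirect the application's standard input and output through MPI-Start.
# input.dat and output.dat are example file names.
export I2G_MPI_APPLICATION_STDIN=input.dat
export I2G_MPI_APPLICATION_STDOUT=output.dat
```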
Examples
Simple Job
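A minimal job script only needs to name the application and the MPI implementation; everything else is left to MPI-Start. This sketch uses the mpi_sleep example binary that also appears in the hooks example below:

```shell
#!/bin/sh
# Minimal MPI-Start job: run mpi_sleep with argument 0 using Open MPI.
export I2G_MPI_APPLICATION=mpi_sleep
export I2G_MPI_APPLICATION_ARGS=0
export I2G_MPI_TYPE=openmpi

# I2G_MPI_START is set by the site (or points to a tarball installation).
$I2G_MPI_START
```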
Job with user specified hooks
#!/bin/sh
#
# MPI_START_SHARED_FS can be used to figure out if the current working
# directory is located on a shared file system or not (1=yes, 0=no).
#
# The "mpi_start_foreach_host" function takes as parameter the name of
# another function that will be called for each host in the machinefile,
# with the host name as first parameter.
# - For each host the callback function will be called exactly once,
#   independently of how often the host appears in the machinefile.
# - The callback function will also be called for the local host.

# create the pre-run hook
cat > pre_run_hook.sh << EOF
pre_run_hook () {
    echo "pre run hook called"
    # - download data
    # - compile program

    if [ "x\$MPI_START_SHARED_FS" = "x0" ] ; then
        echo "If we need a shared file system we can return -1 to abort"
        # return -1
    fi

    return 0
}
EOF

# create the post-run hook
cat > post_run_hook.sh << EOF
# the first parameter is the name of a host in the machinefile
my_copy () {
    CMD="scp . \$1:\$PWD/mydata.1"
    echo \$CMD
    #\$CMD
    # upload data
}

post_run_hook () {
    echo "post_run_hook called"
    if [ "x\$MPI_START_SHARED_FS" = "x0" ] ; then
        echo "gather output from remote hosts"
        mpi_start_foreach_host my_copy
    fi
    return 0
}
EOF

export I2G_MPI_APPLICATION=mpi_sleep
export I2G_MPI_APPLICATION_ARGS=0
export I2G_MPI_TYPE=openmpi
export I2G_MPI_PRE_RUN_HOOK=./pre_run_hook.sh
export I2G_MPI_POST_RUN_HOOK=./post_run_hook.sh

$I2G_MPI_START
Using MPI-Start with grid middleware
gLite
gLite uses the WMS software for submitting jobs to the different available resources. The WMS gets a job description in the JDL language and performs the selection of resources and the actual submission of the job on behalf of the user. The following sections describe how to submit a job using the WMS.
Basic Job Submission
Jobs are described with the JDL language. The most relevant attributes for parallel job submission are:
CPUNumber: number of processes to allocate.
Requirements: requirements of the job; allows forcing the selection of sites with MPI-Start support.
The following example shows a job that will use 6 processes and is executed with Open MPI. The Requirements attribute makes the WMS select sites that publish support for MPI-Start and Open MPI.
JobType = Normal;
CPUNumber = 6;
Executable = "starter.sh";
Arguments = "OPENMPI hello_bin hello arguments";
InputSandbox = {"starter.sh", "hello_bin"};
OutputSandbox = {"std.out", "std.err"};
StdOutput = "std.out";
StdError = "std.err";
Requirements =
    member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) &&
    member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
The Executable attribute is a script that will invoke MPI-Start with the correct options for the execution of the user's application. We propose a generic wrapper that can be used for any application and MPI flavour and that receives in the Arguments attribute:
- Name of the MPI-Start execution environment (used to set the I2G_MPI_TYPE variable), in the example: OPENMPI
- Name of user binary, in the example: hello_bin
- Arguments for the user binary, in the example: hello arguments
This is the content of the wrapper:
#!/bin/bash
# Pull in the arguments.
MPI_FLAVOR=$1

MPI_FLAVOR_LOWER=`echo $MPI_FLAVOR | tr '[:upper:]' '[:lower:]'`
export I2G_MPI_TYPE=$MPI_FLAVOR_LOWER

shift
export I2G_MPI_APPLICATION=$1

shift
export I2G_MPI_APPLICATION_ARGS=$*

# Touch the executable, and make sure it's executable.
touch $I2G_MPI_APPLICATION
chmod +x $I2G_MPI_APPLICATION

# Invoke mpi-start.
$I2G_MPI_START
Users need to include this wrapper in the InputSandbox of the JDL (starter.sh) and set it as the Executable of the job. Submission is performed as for any other gLite job:
$ glite-wms-job-submit -a hello-mpi.sh
Connecting to the service https://gridwms01.ifca.es:7443/glite_wms_wmproxy_server

====================== glite-wms-job-submit Success ======================
The job has been successfully submitted to the WMProxy
Your job identifier is:
https://gridwms01.ifca.es:9000/8jG3MUNRm-ol7BqhFP5Crg
==========================================================================
Once the job is finished, the output can be retrieved:
$ glite-wms-job-output https://gridwms01.ifca.es:9000/8jG3MUNRm-ol7BqhFP5Crg
Connecting to the service https://gridwms01.ifca.es:7443/glite_wms_wmproxy_server

================================================================================
                        JOB GET OUTPUT OUTCOME

Output sandbox files for the job:
https://gridwms01.ifca.es:9000/8jG3MUNRm-ol7BqhFP5Crg
have been successfully retrieved and stored in the directory:
/gpfs/csic_projects/grid/tmp/jobOutput/enol_8jG3MUNRm-ol7BqhFP5Crg
================================================================================

$ cat /gpfs/csic_projects/grid/tmp/jobOutput/enol_8jG3MUNRm-ol7BqhFP5Crg/std.*
Hello world from gcsic054wn. Process 3 of 6
Hello world from gcsic054wn. Process 1 of 6
Hello world from gcsic054wn. Process 2 of 6
Hello world from gcsic054wn. Process 0 of 6
Hello world from gcsic055wn. Process 4 of 6
Hello world from gcsic055wn. Process 5 of 6
Modifying MPI-Start behavior
MPI-Start behavior can be customized by setting different environment variables (see the usage section for a complete list). If using the generic wrapper, an easy way of customizing the MPI-Start execution is the Environment attribute of the JDL. The following JDL adds debugging to the previous example by setting the I2G_MPI_START_VERBOSE and I2G_MPI_START_DEBUG variables to 1:
JobType = Normal;
CPUNumber = 6;
Executable = "starter.sh";
Arguments = "OPENMPI hello_bin hello arguments";
InputSandbox = {"starter.sh", "hello_bin"};
OutputSandbox = {"std.out", "std.err"};
StdOutput = "std.out";
StdError = "std.err";
Requirements =
    member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) &&
    member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
Environment = {"I2G_MPI_START_VERBOSE=1", "I2G_MPI_START_DEBUG=1"};
Use of hooks (see Hooks Framework) is also possible using this mechanism. If the user has a file with the MPI-Start hooks called hooks.sh, the following JDL would add it to the execution (notice that the file is also added to the InputSandbox):
JobType = Normal;
CPUNumber = 6;
Executable = "starter.sh";
Arguments = "OPENMPI hello_bin hello arguments";
InputSandbox = {"starter.sh", "hello_bin", "hooks.sh"};
OutputSandbox = {"std.out", "std.err"};
StdOutput = "std.out";
StdError = "std.err";
Requirements =
    member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) &&
    member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
Environment = {"I2G_MPI_PRE_RUN_HOOK=hooks.sh", "I2G_MPI_POST_RUN_HOOK=hooks.sh"};