
Hooks Framework

The mpi-start Hooks Framework allows extending mpi-start features without changing the core functionality. Several hooks are included in the default distribution of mpi-start for dealing with file distribution and some MPI extensions. Site admins can check the local hooks description; users are probably most interested in developing their own hooks.

File distribution hooks

File distribution hooks are responsible for providing a common set of files on all the hosts involved in an execution, before the application is started. Two steps are taken for file distribution:

The file distribution method can be forced by setting the I2G_MPI_FILE_DIST variable.
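For illustration, a hedged example of forcing the method from the job environment before mpi-start is invoked (the value ssh is only an example; the method names that are actually valid depend on the plugins installed at the site):

{{{
# Force a file distribution method instead of relying on automatic selection.
# "ssh" is only an example value; check which distribution plugins the site provides.
export I2G_MPI_FILE_DIST=ssh
}}}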

File Distribution Method Plugins

A file distribution plugin must contain the following functions:

These distribution methods are included in mpi-start:

Extensions hooks

Extension hooks are local site hooks that come in the default mpi-start distribution. The following hooks are available:


OpenMP

The OpenMP hook is enabled by setting the MPI_USE_OMP variable to 1. When enabled, it sets the OMP_NUM_THREADS environment variable to the number of processors available per MPI process.
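A minimal sketch of enabling the hook from the job environment:

{{{
# Enable the OpenMP hook; mpi-start will then export OMP_NUM_THREADS
# with the number of processors available per MPI process.
export MPI_USE_OMP=1
}}}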

MPItrace

The MPItrace hook is enabled by setting the I2G_USE_MPITRACE variable to 1. It runs the application under the mpitrace utility, which is assumed to be installed at MPITRACE_INSTALLATION. Once the execution is finished, the hook gathers the traces and creates the output files on the first host.
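A hedged sketch of the variables involved (the installation path is hypothetical and is normally defined by the site):

{{{
# Location of the mpitrace installation (hypothetical path, usually set by the site)
export MPITRACE_INSTALLATION=/opt/mpitrace
# Run the application with the mpitrace utility
export I2G_USE_MPITRACE=1
}}}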

MARMOT

MARMOT is a tool for analysing and checking MPI programs. This hook enables the tool when the variable I2G_USE_MARMOT is set to 1. It also copies the analysis output to the first host.

Compiler flags

Sites that have several compilers and support several architectures may not define proper compiler flags (MPI_MPIxx_OPTS) for every possible combination. This hook tries to avoid compilation errors by checking the current compiler options and falling back to the compiler's default architecture if no binary is produced.
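For reference, a hedged illustration of the kind of site-level flags this hook guards against (the value is purely illustrative):

{{{
# Illustrative site-wide compiler options (made-up value);
# if compiling with them produces no binary, the hook falls back
# to the compiler's default architecture.
export MPI_MPICC_OPTS="-m64"
}}}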

These hooks can be completely removed by deleting the affinity.hook, openmp.hook, mpitrace.hook, marmot.hook, or compiler.hook files in the mpi-start configuration directory.

Local site hooks

Site admins can define their own hooks by creating .hook files in the configuration directory of MPI-Start (by default /opt/i2g/etc/mpi-start). The file must contain one of the following functions:
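As a hedged sketch (assuming a site hook may define the same pre_run_hook / post_run_hook functions used by the user hooks below; the file name is hypothetical), a minimal site hook could look like this:

{{{
#!/bin/sh
# /opt/i2g/etc/mpi-start/mysite.hook -- hypothetical site hook file
pre_run_hook () {
    # announce the site hook before the MPI application starts
    echo "site hook: preparing run on `hostname`"
    return 0
}
}}}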

Developing User Hooks

Users can also customize the MPI-Start behavior by defining their own hooks and pointing the I2G_MPI_PRE_RUN_HOOK or I2G_MPI_POST_RUN_HOOK variables at them.

Both pre and post hooks can be in the same file.
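For example (the file name myhooks.sh is hypothetical), the hook file is passed to mpi-start through the environment:

{{{
# Point mpi-start at a user-supplied file containing pre_run_hook and/or post_run_hook
export I2G_MPI_PRE_RUN_HOOK=$PWD/myhooks.sh
export I2G_MPI_POST_RUN_HOOK=$PWD/myhooks.sh
}}}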

The next sections show some example hooks.

Compilation

The pre-run hook can be used to generate the binaries of the application that will be run by MPI-Start. The following sample shows a hook that compiles an application using mpicc. It assumes that the source file has the same name as the application binary, with a .c extension. More complex compilation commands (configure, make, etc.) can also be used. This code is only executed on the first host; the results of the compilation are made available to all hosts by the file distribution mechanisms.

{{{
#!/bin/sh

# This function will be called before the execution of the MPI application
pre_run_hook () {

  # Compile the program.
  echo "Compiling ${I2G_MPI_APPLICATION}"
  mpicc $MPI_MPICC_OPTS -o ${I2G_MPI_APPLICATION} ${I2G_MPI_APPLICATION}.c
  if [ ! $? -eq 0 ]; then
    echo "Error compiling program. Exiting..."
    return 1
  fi
  echo "Successfully compiled ${I2G_MPI_APPLICATION}"
  return 0
}
}}}

Input Preprocessing

Some applications require input preprocessing before the application is executed. For example, GROMACS has a grompp tool that prepares the input for the actual mdrun application. In the following example the grompp tool prepares the input for GROMACS:

{{{
#!/bin/sh

pre_run_hook()
{
   echo "pre_run_hook called"

   # Here come the pre-mpirun actions of gromacs
   export PATH=$PATH:$VO_COMPCHEM_SW_DIR/gromacs-3.3/bin
   grompp -v -f full -o full -c after_pr -p speptide -np 4

   return 0
}
}}}

Output Gathering

Applications that write output files on each of the hosts involved in the execution may need to gather those files on the first host so they can be transferred back to the user once the execution is finished. The following example copies all the mydata.* files to the first host. It uses the mpi_start_foreach_host function of MPI-Start, which calls the function given as its first argument once for each host, passing the host name as parameter.

{{{
#!/bin/sh

# called once per host; the first parameter is the name of the host
my_copy () {
    # copy the mydata.* files from the remote host back to the first host
    CMD="scp $1:$PWD/mydata.* ."
    echo $CMD
    $CMD
}

post_run_hook () {
    echo "post_run_hook called"
    if [ "x$MPI_START_SHARED_FS" = "x0" ] ; then
        echo "gather output from remote hosts"
        mpi_start_foreach_host my_copy
    fi
    return 0
}
}}}