
Hooks Framework

The mpi-start Hooks Framework allows extending mpi-start features without changing the core functionality. Several hooks dealing with file distribution and some MPI extensions are included in the default distribution of mpi-start. Site admins can check the local hooks description; users will probably be more interested in developing their own hooks.

File distribution hooks

File distribution hooks are responsible for making a common set of files available on all hosts involved in an execution before the application starts. File distribution is performed in three steps:

The file distribution method can be fixed by using the I2G_MPI_FILE_DIST variable.
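For example, a wrapper script can force a method before invoking mpi-start. A minimal sketch, assuming the site provides an ssh/scp based plugin (the method name "ssh" and the application name are assumptions; check which plugins your installation ships):

```shell
# Force a specific file distribution method instead of letting
# mpi-start pick one (method name "ssh" is an assumption):
export I2G_MPI_FILE_DIST=ssh

# mpi-start would then be invoked as usual, e.g.:
# mpi-start -t openmpi ./myapp
echo "file distribution method: ${I2G_MPI_FILE_DIST}"
```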

File Distribution Method Plugins

A file distribution plugin must contain the following functions:

These distribution methods are included in mpi-start:

Extensions hooks

Extension hooks are local site hooks that ship with the default mpi-start distribution. The following hooks are available. Each of them can be completely removed by deleting the corresponding file (affinity.hook, openmp.hook, mpitrace.hook, marmot.hook or compiler.hook) from the mpi-start configuration directory.


Affinity hook

The Affinity hook is enabled by setting the MPI_USE_AFFINITY variable to 1. When enabled (and the execution environment supports it), it defines the appropriate options for setting the processor affinity under the selected MPI implementation.


OpenMP hook

The OpenMP hook is enabled by setting the MPI_USE_OMP variable to 1. When enabled, it defines the OMP_NUM_THREADS environment variable to the number of processors available per MPI process.
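In rough terms, the hook's effect can be sketched as follows. This is a simplified illustration, not the actual implementation, and the MPI_START_NPHOST variable name is an assumption for the per-process processor count:

```shell
#!/bin/sh
# Simplified sketch of what the OpenMP hook does (illustration only;
# the real hook ships with mpi-start). MPI_START_NPHOST is an assumed
# variable name for the processors available per MPI process.
set_omp_threads () {
    OMP_NUM_THREADS=${MPI_START_NPHOST:-1}   # fall back to 1 thread
    export OMP_NUM_THREADS
}

# demonstration with a made-up slot count:
MPI_START_NPHOST=4
set_omp_threads
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"
```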


MPItrace hook

The MPItrace hook is enabled by setting the I2G_USE_MPITRACE variable to 1. It adds the mpitrace utility to the execution, assuming it is installed at MPITRACE_INSTALLATION. Once the execution is finished, it gathers the traces and creates the output files on the first host.


Marmot hook

Marmot is a tool for analysing and checking MPI programs. This hook enables the use of the tool if the I2G_USE_MARMOT variable is set to 1. It also copies the analysis output to the first host.


Compiler hook

This hook sets the environment variables MPI_MPI<COMPILER>, where COMPILER is one of CC, F90, F77 or CXX, for the C, Fortran 90, Fortran 77 and C++ compilers respectively. These variables should point to valid compilers for the current MPI implementation. The hook also fixes the compiler flags (MPI_MPIxx_OPTS) to avoid problems with flags that are invalid for the current processor architecture. The hook can be disabled by setting the environment variable MPI_COMPILER_HOOK to 0.

Local Site Hooks

Site admins can define their own hooks by:

The .hook files are executed in alphabetical order, and mpi-start.hooks.local is executed after every other hook in the system has run and the shared file system detection has been performed. Each hook file contains the following functions:

If any of these functions is not available, the hook will be ignored.
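As an illustration, a local site hook file might look like the sketch below. This is a hedged example: the file name is illustrative, and the pre_run_hook/post_run_hook function names are assumed from the user hook interface described later; check your mpi-start version for the exact set of required functions.

```shell
#!/bin/sh
# Hedged sketch of a local site hook file (e.g. saved as 99-mysite.hook
# in the mpi-start configuration directory -- file name is illustrative).
# The function names below are an assumption based on the user hook
# interface; verify them against your mpi-start installation.

pre_run_hook () {
    echo "site pre_run_hook called"
    return 0
}

post_run_hook () {
    echo "site post_run_hook called"
    return 0
}

# demonstration only -- mpi-start itself sources the file and calls the
# functions, so a real hook file would stop at the definitions above:
pre_run_hook
post_run_hook
```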

Developing User Hooks

Users can also customize the mpi-start behaviour by defining their own hooks, either with the -pre and -post command line switches or by setting the I2G_MPI_PRE_RUN_HOOK and I2G_MPI_POST_RUN_HOOK environment variables.

Both pre- and post-run hooks can be defined in the same file. The next sections contain some hook examples.
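A minimal file defining both functions could look like this (the echoed messages are placeholders for real preparation and collection work):

```shell
#!/bin/sh
# Both functions in one file; pass the same file to -pre and -post
# (or set I2G_MPI_PRE_RUN_HOOK and I2G_MPI_POST_RUN_HOOK to it).
pre_run_hook () {
    echo "pre_run_hook: preparing input"
    return 0
}

post_run_hook () {
    echo "post_run_hook: collecting output"
    return 0
}

# demonstration only -- mpi-start itself sources the file and calls
# these functions around the MPI execution:
pre_run_hook
post_run_hook
```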


Compilation

The pre-run hook can be used to build the binaries of the application that mpi-start will run. The following sample shows a hook that compiles an application using the C MPI compiler, as defined by the compiler hook in the MPI_MPICC variable. It assumes that the source file is named like the application binary, but with a .c extension. More complex compilation commands (configure, make, etc.) can be used as well. This code is executed only on the first host; the results of the compilation are made available on all hosts by the file distribution mechanisms.

   #!/bin/sh

   # This function is called before the execution of the MPI application
   pre_run_hook () {
     # Compile the program (MPI_MPICC is set by the compiler hook)
     echo "Compiling ${I2G_MPI_APPLICATION}"
     ${MPI_MPICC} -o ${I2G_MPI_APPLICATION} ${I2G_MPI_APPLICATION}.c
     if [ ! $? -eq 0 ]; then
       echo "Error compiling program. Exiting..."
       return 1
     fi
     echo "Successfully compiled ${I2G_MPI_APPLICATION}"
     return 0
   }

Input Preprocessing

Some applications require some input preprocessing before the application gets executed. For example, gromacs has a grompp tool that prepares the input for the actual mdrun application. In the following example the grompp tool prepares the input for gromacs:

   #!/bin/sh

   pre_run_hook()
   {
      echo "pre_run_hook called"

      # Here come the pre-mpirun actions of gromacs
      export PATH=$PATH:$VO_COMPCHEM_SW_DIR/gromacs-3.3/bin
      grompp -v -f full -o full -c after_pr -p speptide -np $MPI_START_NP

      return 0
   }

Note the use of the MPI_START_NP variable to get the number of processes. See the developer section for a list of internal mpi-start variables.

Output Gathering

Applications that write output files on each of the hosts involved in the execution may need to gather all those files on the first host so they can be transferred back to the user once the execution is finished. The following example copies all mydata.* files to the first host. It uses the mpi_start_foreach_host function of mpi-start, which calls the function passed as first argument once for each host, with the host name as parameter.

   # the first parameter is the name of a host involved in the execution
   my_copy () {
       CMD="scp $1:$PWD/mydata.* ."
       echo $CMD
       $CMD
   }

   post_run_hook () {
       echo "post_run_hook called"
       if [ "x$MPI_START_SHARED_FS" = "x0" ] ; then
           echo "gather output from remote hosts"
           mpi_start_foreach_host my_copy
       fi
       return 0
   }
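The iteration mechanism can be pictured with the sketch below. This is an assumption for illustration only: the real mpi_start_foreach_host implementation is internal to mpi-start, and the MPI_START_HOSTS variable name used here is hypothetical.

```shell
#!/bin/sh
# Rough sketch of the foreach-host mechanism: the function passed as
# first argument is called once per host with the host name as parameter.
# MPI_START_HOSTS is a hypothetical name for the host list.
mpi_start_foreach_host () {
    for host in $MPI_START_HOSTS; do
        "$1" "$host"
    done
}

print_host () {
    echo "host: $1"
}

MPI_START_HOSTS="node1 node2"
mpi_start_foreach_host print_host
```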

Hooks Variable Summary

This section contains a summary of the variables that can modify the existing hook behaviour. They can be set using the -d command line switch.
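For instance, several of these variables can be combined on one command line (the MPI type and application name below are placeholders; an actual run requires an mpi-start installation):

```shell
# Illustrative invocation passing hook variables with -d
# (MPI type and application are placeholders):
CMD="mpi-start -t openmpi -d MPI_USE_OMP=1 -d I2G_USE_MPITRACE=1 ./myapp"
echo "$CMD"
# running $CMD requires an mpi-start installation, so it is only printed here
```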




File Distribution

   MPI_START_SHARED_FS: If undefined, mpi-start will try to detect a shared file system in the execution directory. If defined and equal to 1, mpi-start will assume that the execution directory is shared between all hosts and will not try to copy files. Any other value will make mpi-start assume that the execution directory is not shared.

   I2G_MPI_FILE_DIST: Forces the use of a specific distribution method.

   MPI_SHARED_HOME_PATH: Path to a shared directory. If the shared home option is set to "yes", mpi-start will use this path for copying the files and executing the application.

   Cleanup option: If set to "yes", mpi-start will not try to clean up files after job execution.

Hooks

   MPI_USE_AFFINITY: If set to 1, enables the processor affinity hook.

   MPI_USE_OMP: If set to 1, enables the OpenMP hook.

   I2G_USE_MPITRACE: If set to 1, enables the MPItrace hook.

   I2G_USE_MARMOT: If set to 1, enables the Marmot hook.

   MPI_COMPILER_HOOK: If set to 0, disables the compiler hook.

eciencia: Middleware/MpiStart/UserDocumentation/HooksFramework (last edited 2012-02-22 10:24:34 by enol)