= IFCA Datacenter usage guidelines =
{{{#!wiki note
If you find any information that is out-dated, incorrect or incomplete, do not hesitate to [[#Support|open a ticket]].
}}}
The '''GRIDUI''' (Grid User Interface) cluster is the interactive gateway to the [[http://grid.ifca.es|Advanced Computing and e-Science]] resources at IFCA. The cluster is formed by several identical Scientific Linux 5 hosts that can be reached through a single entry point. Connections to the internal machines are managed by a director node that tries to ensure proper balancing across the nodes available at a given moment. Nevertheless, direct access to a particular node can be obtained.

Please note that this cluster is ''not intended for the execution of CPU intensive tasks''; for this purpose use any of the available computing resources. Every spawned process is limited to a maximum CPU time of 2 hours.

Login on these machines is provided via [[http://en.wikipedia.org/wiki/Secure_Shell|Secure Shell]]. The RSA key fingerprint of the hosts is `46:85:91:c1:eb:61:55:34:25:2c:d6:0a:08:22:1f:77`. Outgoing SSH connections are not allowed by default from this cluster. Inactive SSH sessions will be closed after 12 hours.

{{{#!wiki caution
'''Direct node access'''

Note that even though it is possible to bypass the director by accessing a node directly, this is ''neither recommended nor advisable''. Fair usage of the resources and proper balancing of the connections cannot be guaranteed if any user abuses this feature. Please note also that all the GRIDUI machines have exactly the same hardware and software.
}}}

{{{#!wiki warning
'''Scientific Linux 4 cluster'''

Note that since 3rd November 2010 no Scientific Linux CERN 4 machine is available.
}}}
<tr align=center bgcolor=#A0A0A0><td>Hostname</td><td>Port</td><td>Gives access to</td><td>Distribution and Architecture</td></tr>
<tr><td><strong>gridui.ifca.es</strong></td><td rowspan="2">22, 22000</td><td rowspan="2">Balanced GRIDUI Cluster</td><td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td></tr>
<tr><td><strong>griduisl5.ifca.es</strong></td></tr>
<tr><td><strong>gridui.ifca.es</strong></td><td rowspan="2">22001</td><td rowspan="2">gridui01.ifca.es</td><td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td></tr>
<tr><td><strong>griduisl5.ifca.es</strong></td></tr>
<tr><td><strong>gridui.ifca.es</strong></td><td rowspan="2">22002</td><td rowspan="2">gridui02.ifca.es</td><td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td></tr>
<tr><td><strong>griduisl5.ifca.es</strong></td></tr>
<tr><td><strong>gridui.ifca.es</strong></td><td rowspan="2">22003</td><td rowspan="2">gridui03.ifca.es</td><td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td></tr>
<tr><td><strong>griduisl5.ifca.es</strong></td></tr>
<tr><td><strong>gridui.ifca.es</strong></td><td rowspan="2">22004</td><td rowspan="2">gridui04.ifca.es</td><td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td></tr>
<tr><td><strong>griduisl5.ifca.es</strong></td></tr>
<tr><td><strong>gridui.ifca.es</strong></td><td rowspan="2">22005</td><td rowspan="2">gridui05.ifca.es</td><td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td></tr>
<tr><td><strong>griduisl5.ifca.es</strong></td></tr>
<tr><td><strong>gridui.ifca.es</strong></td><td rowspan="2">22006</td><td rowspan="2">gridui06.ifca.es</td><td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td></tr>
<tr><td><strong>griduisl5.ifca.es</strong></td></tr>
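As a usage sketch, the ports in the table above are used like this (`myuser` is a placeholder for your actual account name; port 22 behaves like 22000):

{{{#!highlight console numbers=disable
$ ssh -p 22000 myuser@gridui.ifca.es    # balanced cluster entry point
$ ssh -p 22001 myuser@gridui.ifca.es    # directly to gridui01 (not recommended)
}}}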
Authentication is centralized via secured LDAP. Any change made to a user account on one node takes immediate effect in the whole cluster. There is also a secured web interface, available at https://cerbero.ifca.es/, that allows users to change their details. If you need to reset your account password, please contact the system administrators. The same username gives access to the ticketing system at http://support.ifca.es/.
As the cluster hosts several Grid User Interfaces, it allows users to access the EGEE-III, int.eu.grid, [[http://www.euforia-project.eu/EUFORIA/|EUFORIA]], [[https://web.lip.pt/wiki-IBERGRID/|IBERGRID]] and [[http://grid.csic.es|GRID-CSIC]] infrastructures. To set up the correct environment variables, please `source` any of the environment scripts located under `/gpfs/csic_projects/grid/etc/env/`. For example, to use the I2G infrastructure:

{{{#!highlight console numbers=disable
$ source /gpfs/csic_projects/grid/etc/env/i2g-env.sh
}}}

The available environments are:
More information on setting up the Grid UI is available locally at the [[DUS: Setting up the User Interface account|corresponding DUS page]].
The PBS cluster was '''decommissioned''' in December 2009. Please use the [[#SGE_Cluster|SGE Cluster]] instead.
The SGE Cluster is based on `Scientific Linux CERN SLC release 5.5` machines, running on x86_64. The exact number of available resources is shown on the [[http://monitor.ifca.es/ganglia/?c=SGE%20Worker%20Nodes&m=&r=hour&s=descending&hc=4|monitoring page]].

Local submission is allowed for certain projects. As stated below, there are some shared areas that can be accessed from the computing nodes. The underlying batch system is [[http://wikis.sun.com/display/GridEngine|Sun Grid Engine]] 6.2u5. Please note that the syntax for job submission (`qsub`) and monitoring (`qstat`) is similar to the one you might be accustomed to from PBS, but there are important differences. Refer to the following sources for information:

 * Using Grid Engine [[http://wikis.sun.com/display/GridEngine/Using+Sun+Grid+Engine|official documentation]].
 * Some useful [[http://arc.liv.ac.uk/SGE/howto/|HOWTOS]].
 * SGE [[http://arc.liv.ac.uk/SGE/howto/basic_usage.html|basic usage]].

For the sake of clarity, the following examples only show the options relevant to each case. However, take into account that some options are mandatory (for example, the project specification).

=== Job submission, projects and queues ===

Users should submit their jobs directly to their project by using `qsub -P <project>`:

{{{#!highlight console numbers=disable
$ qsub -P <project> <jobfile>
}}}

{{{#!wiki warning
'''Job submission without project'''

'''Job submission without specifying a project is not allowed'''. Although you should already know which project to submit to, you can check the list of projects you are allowed to use by issuing `/gpfs/csic_projects/utils/bin/rep-sge-prjs.py`. Contact your supervisor if you are unsure about your project.
}}}

A scratch area is defined for every job in the environment variable `$TMPDIR`. This area is cleared after the job has exited.

{{{#!wiki caution
'''Do not specify any queues'''

Although it is possible to specify a queue in your job submission, it is not recommended to do so. Access to certain queues is possible only if the user and/or project has special privileges, so if you make a hard request for a given queue, your job will most likely not be scheduled properly (it could even starve).
}}}

=== Specifying resources ===

To get your submitted jobs executed fast, you should tune the resources that your job requests. The more accurately you describe your job's bounds, the faster it will run (the default values are quite high in order to prevent jobs from being killed by the batch system, so they penalize job execution considerably). Some of these limits are defined in the `$SGE_ROOT/default/common/sge_request` file. If your application is always expected to use the same values, you can override that file by creating a `$HOME/.sge_request` file. For further details, please check the `sge_request` manual page.
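The per-user defaults file mentioned above takes the same option syntax as the `qsub` command line, one or more options per line. A minimal sketch of a `$HOME/.sge_request` file might look like this (the values shown are illustrative, not site recommendations):

{{{#!highlight console numbers=disable
# $HOME/.sge_request -- defaults applied to every qsub invocation
# Request a 24h wall clock limit instead of the site default
-l h_rt=24:00:00
# Realistic memory bounds for this user's typical jobs
-l h_rss=4G,mem_free=3G
}}}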
A wall clock time limit of 72 hours is enforced by default on all jobs submitted to the cluster. Should you require a higher value, set it yourself by requesting a new `h_rt` value in the form `hours:minutes:seconds`. Please note that requesting a high value may negatively impact the scheduling and execution of your job, so try to be as accurate as possible when setting this value. For example, a job requiring 22 hours, plus a couple of extra "safety" hours, should be sent as follows:

{{{#!highlight console numbers=disable
$ qsub -l h_rt=24:00:00 <jobfile>
}}}
When requesting memory for a job you must take into account that per-job memory is limited in the default queues to a [[http://en.wikipedia.org/wiki/Resident_set_size|Resident Set Size]] (`h_rss`) of 5 GB. If you need more memory, you should request the special resource `highmem`. Notice that your group may not be able to request that flag by default; if you need to do so, please [[http://grid.ifca.es/wiki/Cluster/Usage#Support|open a ticket]] requesting it. Also notice that these nodes might be overloaded by other users requesting the same flag, so use it wisely.

It is '''highly recommended''' that you tune your memory requirements to realistic values. Special emphasis is made on the following resources:

 * `h_rss`
 * `mem_free`

===== h_rss =====

This limit refers to the '''hard resident set size limit'''. The batch system will make sure a given job does not consume more memory than the value assigned to this variable. This means that '''any job above the requested `h_rss` limit will be killed (SIGKILL) by the batch system'''. It is recommended to request this resource as a top limit for your application: if you expect your job to consume no more than a peak value of 3 GB, you should request those 3 GB as its resident set size limit. This request '''shall not produce a penalty''' on the scheduling of your jobs.

===== mem_free =====

This refers to the free RAM necessary for the job to run. The batch system will allow jobs to run only if sufficient memory (as requested by `mem_free`) is available for them on a given node. It will subtract that amount of memory from the available resources once the job is running. This ensures that a node with 16 GB of memory will not run jobs totaling more than 16 GB. The default value is 1.8 GB per slot. Please note that breaking the `mem_free` limit will not automatically kill your job; its aim is just to try to ensure that the memory you requested is available to your job.

Also note that this value is not intended to reflect the memory peaks of your job. This request will impact the scheduling of your jobs, so it is highly recommended to tune it to fit your actual application memory usage.

===== Memory usage above 5 GB =====

For serial jobs requiring more than 5 GB of memory, submission requesting the `highmem` flag is necessary. Using this flag, the `h_rss` limit will be unset, but the requirement tuning described above still applies. If your group is allowed to request it, and your job needs 20 GB of memory, you can request it as follows:

{{{#!highlight console numbers=disable
$ qsub -l highmem,mem_free=20G <jobfile>
}}}

 * A job that might reach 30 GB and needs 20 GB will be submitted as:
 . {{{#!highlight console numbers=disable
$ qsub -l highmem,h_rss=30G,mem_free=20G <jobfile>
}}}

For jobs needing more than these 5 GB using MPI, please refer to the [[#Parallel_jobs|Parallel job submission]] section.

===== Examples =====
 . {{{#!highlight console numbers=disable
$ qsub -l mem_free=4G <jobfile>
}}}
 . {{{#!highlight console numbers=disable
$ qsub -l h_rss=4G,mem_free=3G <jobfile>
}}}
 . {{{#!highlight console numbers=disable
$ qsub -l h_rss=4G,mem_free=4G <jobfile>
}}}

==== Infiniband ====

If you are executing MPI parallel jobs you may benefit from the Infiniband interconnect available on the nodes. In order to do so, you must request the special resource `infiniband`:

{{{#!highlight console numbers=disable
$ qsub -l infiniband <jobfile>
}}}
##The scratch area for the jobs submitted to the cluster is located under `/tmp/` and is pointed to by the `$TMPDIR` variable.
##
##By default, jobs request a 2GB scratch area. Should you need more space, please use `scratch_space` in your resource requirements:
##
## $ qsub -l scratch_space=20G
## $ qhost -F scratch_space
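Putting the previous sections together, a complete job script might look like the following sketch. The project name, paths and application are hypothetical placeholders; the `#$` lines are SGE directives embedded in the script, equivalent to the corresponding `qsub` command-line options:

{{{#!highlight bash numbers=disable
#!/bin/bash
#$ -P myproject                  # mandatory: submit under your project (placeholder name)
#$ -l h_rt=24:00:00              # wall clock limit
#$ -l h_rss=4G,mem_free=3G       # realistic memory bounds

# Stage the input from the shared area into the node-local scratch ($TMPDIR)
cp "$HOME/data/input.dat" "$TMPDIR/"
cd "$TMPDIR"

# Run the (hypothetical) application entirely on local disk
./my_app input.dat > output.dat

# Copy the results back to a shared area; $TMPDIR is wiped after the job exits
cp output.dat "$HOME/results/"
}}}

Submit it with `qsub jobscript.sh`.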
=== Parallel jobs ===

Parallel jobs must be submitted to a parallel environment (PE), specifying the number of slots required. Depending on the PE used, SGE will allocate the slots in a different way.

{{{#!highlight console numbers=disable
$ qsub -pe mpi 8 <jobfile>
}}}

Please note that parallel jobs will be routed to the ''parallel'' queue (see the [[#Memory_management|previous section]]). Also note that access to that queue is restricted to groups that have requested it beforehand. The following parallel environments are available:

{{{#!rhtml
<table border="1" cellpadding="2" cellspacing="0" align="center">
<tr align=center bgcolor=#A0A0A0>
<td>PE Name</td>
<td>Node distribution</td>
</tr>
<tr>
<td>smp</td>
<td>All slots in just 1 node.</td>
</tr>
<tr>
<td>mpi</td>
<td>All slots spread across available nodes.</td>
</tr>
<tr>
<td>8mpi</td>
<td>All slots spread across available nodes, 8 slots on each node. The number of slots requested must be a multiple of 8.</td>
</tr>
</table>
}}}

=== Interactive jobs ===

Interactive, short-lived and high-priority jobs can be sent if your project has permission to do so (see the SUBMIT manual page, `man submit`). This kind of job can only request a '''maximum of 1h of WALL clock time'''; see the [[#Wall_Clock_time|previous section]] for details about limiting the wall clock time of a job. X11 forwarding is possible when using the `qlogin` command. Using X11 forwarding requires a valid DISPLAY; use `ssh -X` or `ssh -Y` to enable X11 forwarding in your SSH session when logging in to the UI.

{{{#!highlight console numbers=disable
$ qlogin -P <project> -l h_rt=1:00:00
}}}

=== Pseudo-interactive jobs ===

A special resource called `immediate` is available for some users who need fast scheduling for their short-lived batch jobs. This kind of job can only request a '''maximum of 1h of WALL clock time'''.

{{{#!highlight console numbers=disable
$ qsub -l immediate <jobfile>
}}}

Please note that you might not have access to these resources.
=== Resource quotas ===
{{{#!highlight console numbers=disable
$ qconf -srqs
}}}

In order to know the current usage of the quotas defined above, the command `qquota` must be used:

{{{#!highlight console numbers=disable
$ qquota -P <project>
}}}

=== Advanced reservation ===
 * Start datetime, and end datetime (or duration) of the reservation.
 * Duration of your job(s) (i.e. `h_rt` for the individual jobs).
 * Computational resources needed (`mem_free`, number of slots).

Once the request has been made, the system administrators will give you the ID(s) of the AR created. You can submit your jobs whenever you want by issuing:

{{{#!highlight console numbers=disable
$ qsub -ar <reservation_id> <other_job_options>
}}}

You can submit your job(s) before the AR starts and also once it has started. However, you should mind the duration of the reservation and of your jobs: if your job execution exceeds either the `h_rt` it has requested or the duration of the AR, it will be killed by the batch system.
Since the requested and reserved resources cannot be used for other jobs, those requested resources will be used for accounting purposes as if they were resources used by normal jobs (even in the case that the AR is unused). '''Please request only the resources that you need'''.

If you want to query the existing advance reservations, you can use the `qrstat` command. To query a specific advance reservation, you can issue:

{{{#!highlight console numbers=disable
$ qrstat -ar <reservation_id>
}}}
The `$HOME` directories (`/home/$USER`) are shared between the UIs and the computing nodes. There is also a ''projects'' shared area (located at `/gpfs/csic_projects/`), accessible from both the UIs and the computing nodes. If your group does not have this area, please open an [[http://support.ifca.es|incidence ticket]].

=== Usage ===

The shared directories '''are not intended for scratch'''; use the temporary areas of the local filesystems instead. In other words, instruct every job you send to copy its input from the shared directory to the local scratch (`$TMPDIR`), execute all operations there, then copy the output back to some shared area where you will be able to retrieve it comfortably from the UI. As mentioned above, the contents of `$TMPDIR` are removed after job execution.

=== Disk quotas ===

Disk quotas are enabled on both the user and projects filesystems. A message with this information should be shown upon login. If you need more quota on your user space (not in the project shared area), please contact the system administrators explaining your reasons. If you wish to check your quota at a later time, you can use the commands `mmlsquota gpfs_csic` (for user quotas) and `mmlsquota -g $(id -g) gpfs_projects` (for group quotas). A script reporting both quotas is located at `/gpfs/csic_projects/utils/bin/rep-user-quotas.py`. A sample output of the latter could be:
INFORMATION ABOUT YOUR CURRENT DISK USAGE
For a basic interpretation of this output, note that the "Used" column tells you how much disk space you are using, whereas "Soft" denotes the limit that "Used" amount should not exceed. The "Hard" column is the limit that "Used" plus "Doubt" should not cross. Healthy disk space management requires that you periodically delete unused files in your `$HOME` directory, keeping its usage below the limits at all times. If you exceed a limit, a grace period will be shown in the "Grace" column; if you do not correct the situation within the grace period, you will be banned from writing to the disk. For further information you can read the [[http://www.nersc.gov/vendor_docs/ibm/gpfs/am3admst119.html|mmlsquota command manual page]].
Line 290: | Line 360: |
Some extra packages as [[http://python.org|Python 2.6]] and [[http://software.intel.com/en-us/articles/non-commercial-software-development/|Intel Non-Commercial Compilers]] can be found on <code>/gpfs/csic_projects/utils/</code>. |
Some extra packages as [[http://python.org|Python 2.6]] and [[http://software.intel.com/en-us/articles/non-commercial-software-development/|Intel Non-Commercial Compilers]] can be found on `/gpfs/csic_projects/utils/`. |
Line 296: | Line 365: |
Before opening a new incidence, please check the [[Cluster/FAQ|Frequently Asked Questions page]] | |
Line 300: | Line 370: |
CategoryDatacenter | CategoryUserSupport CategoryLocalCluster |
IFCA Datacenter usage guidelines
If you find any information that is outdated, incorrect or incomplete, do not hesitate to Open a ticket.
1. Introduction
The GRIDUI (Grid User Interface) cluster is the interactive gateway to the Advanced Computing and e-Science resources at IFCA. The cluster consists of several identical Scientific Linux 5 hosts that can be reached through a single entry point. Connections to the internal machines are managed by a director node that balances the load across the nodes available at any given moment. Nevertheless, direct access to a particular node can be obtained.
Please note that this cluster is not intended for the execution of CPU-intensive tasks; for that purpose, use any of the available computing resources. Every spawned process is limited to a maximum CPU time of 2 hours.
Login on these machines is provided via Secure Shell. Their RSA key fingerprint is 46:85:91:c1:eb:61:55:34:25:2c:d6:0a:08:22:1f:77.
Outgoing SSH connections are not allowed by default from this cluster. Inactive SSH sessions will be closed after 12h.
Direct node access
Note that even though it is possible to bypass the director by accessing a node directly, this is neither recommended nor advisable. Fair usage of the resources and proper balancing of connections cannot be guaranteed if users abuse this feature.
Please note also that all the GRIDUI machines have exactly the same hardware and software.
Scientific Linux 4 cluster
Note that as of 3 November 2010, no Scientific Linux CERN 4 version is available.
Hostname | Port | Gives access to | Distribution and Architecture |
gridui.ifca.es / griduisl5.ifca.es | 22, 22000 | Balanced GRIDUI cluster | Scientific Linux CERN SLC release 5.5 (Boron), x86_64 |
gridui.ifca.es / griduisl5.ifca.es | 22001 | gridui01.ifca.es | Scientific Linux CERN SLC release 5.5 (Boron), x86_64 |
gridui.ifca.es / griduisl5.ifca.es | 22002 | gridui02.ifca.es | Scientific Linux CERN SLC release 5.5 (Boron), x86_64 |
gridui.ifca.es / griduisl5.ifca.es | 22003 | gridui03.ifca.es | Scientific Linux CERN SLC release 5.5 (Boron), x86_64 |
gridui.ifca.es / griduisl5.ifca.es | 22004 | gridui04.ifca.es | Scientific Linux CERN SLC release 5.5 (Boron), x86_64 |
gridui.ifca.es / griduisl5.ifca.es | 22005 | gridui05.ifca.es | Scientific Linux CERN SLC release 5.5 (Boron), x86_64 |
gridui.ifca.es / griduisl5.ifca.es | 22006 | gridui06.ifca.es | Scientific Linux CERN SLC release 5.5 (Boron), x86_64 |
2. Authentication
Authentication is centralized via secured LDAP. All changes made to a user account on one node take immediate effect in the whole cluster. There is also a secure web interface at https://cerbero.ifca.es/ that allows users to change their details. If you need to reset your account password, please contact the system administrators.
This username also gives you access to the ticketing system at http://support.ifca.es/
3. Grid resources
As the cluster is based on several Grid User Interfaces, it allows users to access the EGEE-III, int.eu.grid, EUFORIA, IBERGRID and GRID-CSIC infrastructures. To set up the correct environment variables, source one of the environment scripts located under /gpfs/csic_projects/grid/etc/env/. For example, to use the I2G infrastructure:
$ source /gpfs/csic_projects/grid/etc/env/i2g-env.sh
The available environments are:
Filename | Allows access to |
euforia-env.{csh,sh} | EUFORIA |
ibergrid-env.{csh,sh} | IBERGRID and EGI |
ngi-env.{csh,sh} | EGI and IBERGRID NGI |
More information on setting up the Grid UI is available locally at the corresponding DUS page.
4. PBS Cluster
PBS cluster was decommissioned in December 2009. Please use the new SGE Cluster instead.
5. SGE Cluster
The SGE Cluster is based on Scientific Linux CERN SLC release 5.5 machines running on x86_64. The exact number of available resources is shown on the monitoring page.
Local submission is allowed for certain projects. As stated below, there are some shared areas that can be accessed from the computing nodes. The underlying batch system is Sun Grid Engine 6.2u5. Please note that the syntax for job submission (qsub) and monitoring (qstat) is similar to the one you might be accustomed to from PBS, but there are important differences. Refer to the following sources for information:
Using Grid Engine official documentation.
Some useful HOWTOS.
SGE basic usage.
For the sake of clarity, the following examples show only the options relevant to each case. However, take into account that some options are mandatory (for example, the project specification).
5.1. Job submission, projects and queues
Users should submit their jobs directly to their project by using qsub -P <project>.
$ qsub -P <project> <jobfile>
Job submission without project
Job submission without specifying a project is not allowed. Although you should already know which project to submit to, you can check the list of projects you are allowed to use by issuing /gpfs/csic_projects/utils/bin/rep-sge-prjs.py. If you are unsure about your project, contact your supervisor.
A scratch area is defined for every job as the environment variable $TMPDIR. This area is cleared after the job has exited.
Do not specify any queues
Although it is possible to specify a queue in your job submission, it is not recommended. Access to certain queues is possible only if the user and/or project has special privileges, so if you make a hard request for a given queue, your job will likely not be scheduled properly (it could even starve).
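Putting the submission options together, a minimal job file might look as follows. This is an illustrative sketch: the project name "myproject" and the script body are assumptions, to be replaced with your own values.

```shell
# Illustrative sketch: write a minimal SGE job file. The project name
# "myproject" is a placeholder -- use one of your allowed projects.
cat > myjob.sh <<'EOF'
#!/bin/bash
#$ -P myproject           # mandatory: submit under your project
#$ -l h_rt=02:00:00       # wall clock time request
#$ -cwd                   # run from the submission directory
echo "Running on $(hostname)"
EOF
# Submit it with: qsub myjob.sh
```

The `#$` lines embed qsub options in the script itself, so `qsub myjob.sh` needs no extra flags.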
5.2. Specifying resources
To get your submitted jobs executed quickly, you should tune the resources that your job requests. The more accurately you specify your job's bounds, the faster your job will run (the default values are quite high in order to prevent jobs from being killed by the batch system, and thus they heavily penalize job execution).
Some of these limits are defined in the $SGE_ROOT/default/common/sge_request file. If your application is always expected to use the same values, you can override that file by creating a $HOME/.sge_request file. For further details, please check the sge_request manual page.
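As a sketch, a `$HOME/.sge_request` file setting personal defaults could look like this. The values shown are illustrative assumptions, not recommended site settings:

```
# $HOME/.sge_request -- per-user default submission options (illustrative)
-P myproject
-l h_rt=24:00:00
-l h_rss=4G,mem_free=3G
```

Options given on the qsub command line still override these defaults.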
5.2.1. Wall Clock time
A default wall clock time of 72 hours is enforced on all jobs submitted to the cluster. Should you require a different value, set it yourself by requesting a new h_rt value in the form hours:minutes:seconds. Please note that requesting a high value may negatively impact your job's scheduling and execution, so try to be as accurate as possible when setting it. For example, a job requiring 22 hours, plus a couple of extra "safety" hours, should be sent as follows:
$ qsub -l h_rt=24:00:00 <jobfile>
5.2.2. Memory management
When requesting memory for a job you must take into account that per-job memory in the default queues is limited to a Resident Set Size (h_rss) of 5 GB. If you need more memory, you should request the special resource highmem. Please note that your group may not be able to request that flag by default; if you need to do so, please open a ticket requesting it. Also note that these nodes might be overloaded by other users requesting the same flag, so use it wisely.
It is highly recommended that you tune your memory requirements to some realistic values. Special emphasis is made in the following resources:
h_rss
mem_free
5.2.2.1. h_rss
This limit refers to the hard resident set size limit. The batch system will make sure a given job does not consume more memory than the value assigned to that variable. This means that any job above the requested h_rss limit will be killed (SIGKILL) by the batch system. It is recommended to request this resource as a top limit for your application. If you expect your job to consume no more than a peak value of 3GB you should request those 3GB as its resident set size limit. This request shall not produce a penalty on the scheduling of your jobs.
5.2.2.2. mem_free
This refers to the free RAM necessary for the job to run. The batch system will allow jobs to run only if sufficient memory (as requested by mem_free) is available for them on a given node. It will subtract that amount of memory from the available resources once the job is running. This ensures that a node with 16 GB of memory will not run jobs totaling more than 16 GB. The default value is 1.8 GB per slot. Please note that breaking the mem_free limit will not automatically kill your job; its aim is just to try to ensure that your job has the memory you requested available. Also note that this value is not intended to be used to reflect the memory peaks of your job. This request will impact the scheduling of your jobs, so it is highly recommended to tune it to fit your application's actual memory usage.
5.2.2.3. Memory usage above 5G
For serial jobs requiring more than 5 GB of memory, you must request the highmem flag at submission. With this flag, the h_rss limit will be unset, but the requirement tuning described above still applies. If your group is allowed to request it, and your job needs 20 GB of memory, you can request it as follows:
$ qsub -l highmem,mem_free=20G <jobfile>
- A job that might reach 30 GB and needs 20GB will be submitted as:
$ qsub -l highmem,h_rss=30G,mem_free=20G <jobfile>
For MPI jobs needing more than these 5 GB, please refer to the Parallel jobs section.
5.2.2.4. Examples
- A job that needs to have 4 GB of memory assigned to it:
$ qsub -l mem_free=4G <jobfile>
- A job that might peak at 4 GB, but in its execution normally needs 3 GB:
$ qsub -l h_rss=4G,mem_free=3G <jobfile>
- A job that might reach 4 GB, and also needs 4 GB:
$ qsub -l h_rss=4G,mem_free=4G <jobfile>
5.2.3. Infiniband
If you are executing MPI parallel jobs you may benefit from the Infiniband interconnection available on the nodes. In order to do so, you must request the special resource infiniband:
$ qsub -l infiniband <jobfile>
5.3. Parallel jobs
Parallel jobs must be submitted to a parallel environment (pe), specifying the number of slots required. Depending on the pe used, SGE will allocate the slots in a different way.
$ qsub -pe mpi 8 <jobfile>
Please note that parallel jobs will be routed to the parallel queue (see the previous section). Also note that access to that queue is restricted to groups having requested it beforehand.
The following parallel environments are available:
PE Name | Node distribution |
smp | All slots in just 1 node |
mpi | All slots spread across available nodes. |
8mpi | All slots spread across available nodes, 8 slots on each node. The number of slots requested must be multiple of 8. |
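As an illustrative sketch, a job file for the `mpi` environment typically launches the application with the slot count SGE grants, exposed in the `$NSLOTS` environment variable. The project name and the `my_mpi_app` binary below are placeholders:

```shell
# Illustrative sketch of an MPI job file; "myproject" and "my_mpi_app"
# are placeholders. mpirun picks up the SGE-granted slot count via $NSLOTS.
cat > mpi_job.sh <<'EOF'
#!/bin/bash
#$ -P myproject
#$ -pe mpi 8               # request 8 slots in the mpi environment
#$ -l infiniband           # optional: use the Infiniband interconnect
mpirun -np "$NSLOTS" ./my_mpi_app
EOF
# Submit it with: qsub mpi_job.sh
```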
5.4. Interactive jobs
Interactive, short-lived and high-priority jobs can be sent if your project has permission to do so (see the submit manual page, man submit). These jobs can only request a maximum of 1 hour of wall clock time; see the previous section for details about limiting the wall clock time of a job.
X11 forwarding is possible when using the qlogin command. Using X11 forwarding requires a valid DISPLAY; use ssh -X or ssh -Y to enable X11 forwarding in your SSH session when logging in to the UI.
$ qlogin -P <project> -l h_rt=1:00:00
5.5. Pseudo-Interactive jobs
A special resource, called immediate, is available for some users who need fast scheduling for their short-lived batch jobs. These jobs can only request a maximum of 1 hour of wall clock time.
$ qsub -l immediate <jobfile>
Please note that you might not have access to these resources.
5.6. Resource quotas
Some limits may be enforced by the administrators on a user/group/project basis. To check the current resource quotas, issue the following command:
$ qconf -srqs
In order to know the current usage of the quotas defined above, the command qquota must be used:
$ qquota -P <project>
5.7. Advanced reservation
Some users and/or projects might request a reservation of a set of resources in advance. This is called an "Advanced Reservation" (AR). If your project needs such a reservation, you should make a petition using the support helpdesk, specifying the following:
- Start datetime and end datetime (or duration) of the reservation.
- Duration of your job(s) (i.e. h_rt for the individual jobs).
- Computational resources needed (mem_free, number of slots).
Once the request has been made, the system administrators will give you the ID(s) of the AR created. You can submit your jobs whenever you want by issuing:
$ qsub -ar <reservation_id> <other_job_options>
You can submit your job(s) before the AR starts and also once it has started. However, you should take care of the duration of the reservation and your job's duration. If your job execution exceeds either the h_rt it requested or the duration of the AR, it will be killed by the batch system.
You should also take into account that your reservation might not be created at the date and time you requested if there are no resources available. In this case, it will be created whenever possible. To avoid this, please request your reservations well in advance.
Since the requested and reserved resources cannot be used for other jobs, those requested resources will be used for accounting purposes as if they were resources used by normal jobs (even in the case that the AR is unused). Please request only the resources that you need.
If you want to query the existing advance reservations, you can use the qrstat command. To query a specific advance reservation, issue:
$ qrstat -ar <reservation_id>
6. Shared areas
The $HOME directories (/home/$USER) are shared between the UIs and the computing nodes. There is also a projects shared area (located at /gpfs/csic_projects/), accessible from both the UI and the computing nodes. If your group does not have this area, please open an incident ticket.
6.1. Usage
The shared directories are not intended for scratch; use the temporary areas of the local filesystems instead. In other words, instruct every job you send to copy the input from the shared directory to the local scratch ($TMPDIR), execute all operations there, then copy the output back to some shared area where you will be able to retrieve it comfortably from the UI.
As mentioned above, the contents of $TMPDIR are removed after job execution.
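The stage-in/compute/stage-out pattern described above could be sketched in a job file like this. The project name, input and output paths, and the `process` binary are illustrative assumptions:

```shell
# Sketch of the recommended scratch usage; names and paths are illustrative.
cat > staged_job.sh <<'EOF'
#!/bin/bash
#$ -P myproject
cp "$HOME/data/input.dat" "$TMPDIR/"    # stage in to node-local scratch
cd "$TMPDIR"
./process input.dat > output.dat        # do all I/O on local disk
cp output.dat "$HOME/results/"          # stage out: $TMPDIR is wiped on exit
EOF
```

Keeping all intermediate I/O in $TMPDIR avoids loading the shared GPFS filesystems with scratch traffic.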
6.2. Disk quotas
Disk quotas are enabled on both user and projects filesystems. A message with this information should be shown upon login. If you need more quota on your user space (not in the project shared area), please contact the system administrators explaining your reasons.
If you wish to check your quota at a later time, you can use the commands mmlsquota gpfs_csic (for user quotas) and mmlsquota -g `id -g` gpfs_projects (for group quotas). A script reporting both quotas is located at /gpfs/csic_projects/utils/bin/rep-user-quotas.py. A sample output of the latter could be:
**********************************************************************
INFORMATION ABOUT YOUR CURRENT DISK USAGE

USER             Used      Soft      Hard     Doubt    Grace
Space (GB):      3.41     20.00      0.00      0.06     none
Files (x1000):     64         0         0         0     none

GROUP            Used      Soft      Hard     Doubt    Grace
Space (GB):      0.00   1000.00   1500.00      0.00     none
Files (x1000):      0         0         0         0     none
**********************************************************************
For a basic interpretation of this output, note that the "Used" column will tell you about how much disk space you are using, whereas "Soft" denotes the limit this "Used" amount should not exceed. The "Hard" column is the value of the limit "Used" plus "Doubt" should not cross. A healthy disk space management would require that you periodically delete unused files in your $HOME directory, keeping its usage below the limits at all times. In the event that the user exceeds a limit, a grace period will be shown in the "Grace" column. If the user does not correct the situation within the grace period, she will be banned from writing to the disk.
For further information you can read the mmlsquota command manual page.
7. Extra utils
Some extra packages, such as Python 2.6 and the Intel Non-Commercial Compilers, can be found under /gpfs/csic_projects/utils/.
Please note that these packages are provided as-is, without further support from IFCA staff.
8. Support
Before opening a new incident, please check the Frequently Asked Questions page.
Questions, support and/or feedback should be directed through the Helpdesk.