Diff for "Cluster/Usage"
Differences between revisions 72 and 86 (spanning 14 versions)
Revision 72 as of 2013-11-26 10:12:03
Size: 9102
Editor: orviz
Comment:
Revision 86 as of 2017-02-17 08:58:31
Size: 6969
Editor: aloga
Comment:
Line 8: Line 8:
If you find any information that is out-dated, incorrect or incomplete, do not hesitate to [[#Support|Open a ticket]].
If you find any information that is out-dated, incorrect or incomplete, do not
hesitate to [[#Support|Open a ticket]].
Line 10: Line 11:
Line 11: Line 13:
The '''GridUI''' (Grid User Interface) cluster is the interactive gateway to the [[http://grid.ifca.es|Advanced Computing and e-Science]] resources at IFCA. This cluster is comprised of a pool of Scientific Linux machines reachable through a single entry point. The connections to the internal machines are managed by a director node that tries to ensure that proper balancing is made across the available nodes at a given moment.
Line 13: Line 14:
Please note that this cluster is ''not intended for the execution of CPU intensive tasks'', for this purpose use any of the available computing resources. Every process spawned are limited to a maximum CPU time of 2 hours.
The '''GridUI''' (Grid User Interface) cluster is the interactive gateway to
the [[http://grid.ifca.es|Advanced Computing and e-Science]] resources at
IFCA. This cluster is comprised of a pool of machines reachable through a
single entry point. The connections to the internal machines are managed by
a director node that tries to ensure that proper balancing is made across
the available nodes at a given moment.
Line 15: Line 21:
Login on these machines is provided via [[http://en.wikipedia.org/wiki/Secure_Shell|Secure Shell]]. The RSA key fingerprint of them is `46:85:91:c1:eb:61:55:34:25:2c:d6:0a:08:22:1f:77`.
Please note that this cluster is ''not intended for the execution of CPU
intensive tasks'', for this purpose use any of the available computing
resources. Every process spawned is limited to a maximum CPU time of 2 hours.
Line 17: Line 25:
Outgoing SSH connections are not allowed by default from this cluster. Inactive SSH sessions will be closed after 12h.
{{{#!rhtml
<table border="1" cellpadding="2" cellspacing="0" align="center">
<tr align=center bgcolor=#A0A0A0><td>Hostname</td><td>Port</td><td>Gives access to</td><td>Distribution and Architecture</td></tr>
<tr>
    <td><strong>gridui.ifca.es</strong></td>
    <td rowspan="2">22, 22000</td>
    <td rowspan="2">Balanced GRIDUI Cluster</td>
    <td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td>
</tr>
<tr>
    <td><strong>griduisl5.ifca.es</strong></td>
</tr>
<tr>
    <td><strong>gridui.ifca.es</strong></td>
    <td rowspan="2">22001</td>
    <td rowspan="2">gridui01.ifca.es</td>
    <td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td>
</tr>
<tr>
    <td><strong>griduisl5.ifca.es</strong></td>
</tr>
<tr>
    <td><strong>gridui.ifca.es</strong></td>
    <td rowspan="2">22002</td>
    <td rowspan="2">gridui02.ifca.es</td>
    <td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td>
</tr>
<tr>
    <td><strong>griduisl5.ifca.es</strong></td>
</tr>
<tr>
    <td><strong>gridui.ifca.es</strong></td>
    <td rowspan="2">22003</td>
    <td rowspan="2">gridui03.ifca.es</td>
    <td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td>
</tr>
<tr>
    <td><strong>griduisl5.ifca.es</strong></td>
</tr>
<tr>
    <td><strong>gridui.ifca.es</strong></td>
    <td rowspan="2">22004</td>
    <td rowspan="2">gridui04.ifca.es</td>
    <td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td>
</tr>
<tr>
    <td><strong>griduisl5.ifca.es</strong></td>
</tr>
<tr>
    <td><strong>gridui.ifca.es</strong></td>
    <td rowspan="2">22005</td>
    <td rowspan="2">gridui05.ifca.es</td>
    <td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td>
</tr>
<tr>
    <td><strong>griduisl5.ifca.es</strong></td>
</tr>
<tr>
    <td><strong>gridui.ifca.es</strong></td>
    <td rowspan="2">22006</td>
    <td rowspan="2">gridui06.ifca.es</td>
    <td rowspan="2" align="center">Scientific Linux CERN SLC release 5.5 (Boron), x86_64</td>
</tr>
<tr>
    <td><strong>griduisl5.ifca.es</strong></td>
</tr>
</table>
}}}
== Authentication ==
Authentication is centralized via secured LDAP. All the changes made to a user account in one node take immediate effect in the whole cluster. There is also a secured web interface, that allows a user to change his/her details, available at https://cerbero.ifca.es/. If you need to reset your account password, please contact the system administrators.
Login on these machines is provided via
[[http://en.wikipedia.org/wiki/Secure_Shell|Secure Shell]]. Outgoing SSH
connections are not allowed by default from this cluster. Inactive SSH
sessions may be closed after 12h. It is highly recommended that you set up
[[Cluster/Usage/SSHKeyManagement| SSH Keys]] for authentication, instead of
using your username and password.
Line 89: Line 32:
With this username you should be able to access also the ticketing system at http://support.ifca.es/
  || '''Hostname''' || '''Operating System''' || '''SSH server key fingerprint''' ||
  || `gridui.ifca.es`, `griduisl6.ifca.es` || Scientific Linux 6.X || `29:80:9b:28:e7:8a:00:fe:6c:60:ef:e6:a6:71:33:bd` ||
Line 91: Line 35:
== Grid resources ==
As the several virtual organizations supported use its own set of resources, it is commonly required to set up the correct environment variables in order to use the grid tools. Environment scripts are located under `/nfs4/usr/etc/env/`:
== Authentication and user accounts ==

See [[Cluster/SSO]].

== Access to Scientific Linux 5 machines ==

After the
[[https://grid.ifca.es/sl5-user-interfaces-deprecation-plan2.html|Scientific
Linux 5 deprecation]] interactive access to Scientific Linux 5 is still
possible trough the batch system. In order to request a SLC5 machine you must
append the complex `scientificlinux5` to your request:
Line 95: Line 48:
$ source /nfs4/usr/etc/env/ibergrid-env.sh
user@cloudprv-10-0:~ $ qsub -l scientificlinux5=true (...)
Line 97: Line 51:
== PBS Cluster ==
PBS cluster was '''decommissioned''' in December 2009. Please use the new [[#SGE_Cluster|SGE Cluster]] instead.

If you want an interactive session, append the complex to your `qlogin` request:

{{{#!highlight console numbers=disable

user@cloudprv-10-0:~ $ qlogin -l scientificlinux5=true (...)
JSV "/nfs4/opt/gridengine/util/resources/jsv/jsv-IFCA.tcl" has been started
JSV "/nfs4/opt/gridengine/util/resources/jsv/jsv-IFCA.tcl" has been stopped
Your job 1822278 ("QLOGIN") has been submitted
waiting for interactive job to be scheduled ...
Your interactive job 1822278 has been successfully scheduled.
Establishing builtin session to host cloudprv-02-9.ifca.es ...
user@cloudprv-02-9:~$ cat /etc/redhat-release
Scientific Linux SL release 5.5 (Boron)
user@cloudprv-02-9:~$
}}}
Line 101: Line 69:
The SGE Cluster is based on `Scientific Linux CERN SLC release 5.5` machines, running on x86_64. The exact number or resources available is shown on the [[http://monitor.ifca.es/ganglia/?c=SGE%20Worker%20Nodes&m=&r=hour&s=descending&hc=4|monitorization page]].
The SGE Cluster is based on `Scientific Linux CERN SLC release 6.2` machines, running on x86_64.
Line 116: Line 85:
The `$HOME` directories (`/home/$USER`) are shared between the UIs and the computing nodes. There is a ''projects'' shared area (located at `/gpfs/csic_projects/`), also accessible from the UI and the computing nodes. If your group does not have this area, please open an [[http://support.ifca.es|Incidence ticket]].
The `$HOME` directories are shared between the UIs and the computing nodes. There is a ''projects'' shared area (located at `/gpfs/csic_projects/`), also accessible from the UI and the computing nodes. If your group does not have this area, please open an [[http://support.ifca.es|Incidence ticket]].
Line 119: Line 89:
Line 124: Line 95:
Line 146: Line 118:
Some extra packages as latest [[http://python.org|Python]] versions and [[http://software.intel.com/en-us/articles/non-commercial-software-development/|Intel Non-Commercial Compilers]] can be found at `/nfs4/opt/`. Here also is the preferred location for some other piece of software commonly used like:
Line 148: Line 119:
 * Matlab's like `octave` language for numerical anaylisis.
Some extra packages can be found at `/nfs4/opt/`. This is the location for
some pieces of software commonly used like:

 * Matlab's like `octave` language for numerical analysis.
Line 155: Line 129:
Line 160: Line 135:
CategoryUserSupport CategoryLocalCluster
CategoryUserSupport

IFCA Datacenter usage guidelines

If you find any information that is out-dated, incorrect or incomplete, do not hesitate to Open a ticket.

1. Introduction

The GridUI (Grid User Interface) cluster is the interactive gateway to the Advanced Computing and e-Science resources at IFCA. This cluster is comprised of a pool of machines reachable through a single entry point. The connections to the internal machines are managed by a director node that tries to ensure that proper balancing is made across the available nodes at a given moment.

Please note that this cluster is not intended for the execution of CPU-intensive tasks; for this purpose, use any of the available computing resources. Every process spawned is limited to a maximum CPU time of 2 hours.

Login on these machines is provided via Secure Shell. Outgoing SSH connections are not allowed by default from this cluster. Inactive SSH sessions may be closed after 12h. It is highly recommended that you set up SSH Keys for authentication, instead of using your username and password.

|| Hostname || Operating System || SSH server key fingerprint ||
|| gridui.ifca.es, griduisl6.ifca.es || Scientific Linux 6.X || 29:80:9b:28:e7:8a:00:fe:6c:60:ef:e6:a6:71:33:bd ||
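
For example (the username below is only a placeholder), setting up key-based access and logging in could look like the following sketch; see the SSH Keys documentation referenced above for the recommended procedure:

$ ssh-keygen -t rsa                     # generate a key pair on your own machine (one-time step)
$ ssh-copy-id myuser@gridui.ifca.es     # install your public key on the cluster
$ ssh myuser@gridui.ifca.es             # log in through the balanced entry point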

2. Authentication and user accounts

See Cluster/SSO.

3. Access to Scientific Linux 5 machines

After the [[https://grid.ifca.es/sl5-user-interfaces-deprecation-plan2.html|Scientific Linux 5 deprecation]], interactive access to Scientific Linux 5 is still possible through the batch system. In order to request an SLC5 machine you must append the complex scientificlinux5 to your request:

user@cloudprv-10-0:~ $ qsub -l scientificlinux5=true (...)

If you want an interactive session, append the complex to your qlogin request:

user@cloudprv-10-0:~ $ qlogin -l scientificlinux5=true (...)
JSV "/nfs4/opt/gridengine/util/resources/jsv/jsv-IFCA.tcl" has been started
JSV "/nfs4/opt/gridengine/util/resources/jsv/jsv-IFCA.tcl" has been stopped
Your job 1822278 ("QLOGIN") has been submitted
waiting for interactive job to be scheduled ...
Your interactive job 1822278 has been successfully scheduled.
Establishing builtin session to host cloudprv-02-9.ifca.es ...
user@cloudprv-02-9:~$ cat /etc/redhat-release
Scientific Linux SL release 5.5 (Boron)
user@cloudprv-02-9:~$  

4. SGE Cluster

The SGE Cluster is based on Scientific Linux CERN SLC release 6.2 machines, running on x86_64.

Local submission is allowed for certain projects. As stated below, there are some shared areas that can be accessed from the computing nodes. The underlying batch system is Son of Grid Engine 8.0.0d. Refer to the following sources for information:

IFCA Gridengine documentation has moved

The specific documentation for IFCA has been moved to a separate section.
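
As a brief, illustrative sketch (the job script name is hypothetical; see the IFCA Gridengine documentation for the supported options and complexes), submitting and monitoring a batch job usually looks like this:

user@cloudprv-10-0:~ $ qsub myjob.sh    # submit a job script to the SGE batch system
user@cloudprv-10-0:~ $ qstat            # list your pending and running jobs
user@cloudprv-10-0:~ $ qdel <job_id>    # remove a job from the queue if needed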

5. Shared areas

The $HOME directories are shared between the UIs and the computing nodes. There is a projects shared area (located at /gpfs/csic_projects/), also accessible from the UI and the computing nodes. If your group does not have this area, please open an Incidence ticket.

5.1. Usage

The shared directories are not intended for scratch; use the temporary areas of the local filesystems instead. In other words, instruct every job you send to copy its input from the shared directory to the local scratch ($TMPDIR), execute all operations there, and then copy the output back to some shared area where you will be able to retrieve it comfortably from the UI.

Note that the contents of $TMPDIR are removed after job execution.
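
A minimal job script following this pattern could look like the sketch below (the project directory, program and file names are placeholders to adapt to your own case):

#!/bin/bash
# Stage the input from the shared projects area into the node-local scratch
cp /gpfs/csic_projects/myproject/input.dat "$TMPDIR"/
cd "$TMPDIR"
# Run the computation against the local copy ($HOME is shared with the nodes)
"$HOME"/bin/my_program input.dat > output.dat
# Copy the results back to a shared area before the job finishes,
# since the contents of $TMPDIR are removed after job execution
cp output.dat /gpfs/csic_projects/myproject/results/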

5.2. Disk quotas

Disk quotas are enabled on both user and projects filesystems. A message with this information should be shown upon login. If you need more quota on your user space (not in the project shared area), please contact the system administrators explaining your reasons.

If you wish to check your quota at a later time, you can use the commands mmlsquota gpfs_csic (for user quotas) and mmlsquota -g `id -g` gpfs_projects (for group quotas). A script reporting both quotas is located at /nfs4/usr/bin/rep-user-quotas.py. A sample output of the latter could be:

**********************************************************************
                    INFORMATION ABOUT YOUR CURRENT DISK USAGE

USER                Used      Soft      Hard     Doubt     Grace
Space (GB):         3.41     20.00      0.00      0.06      none
Files (x1000):        64         0         0         0      none

GROUP               Used      Soft      Hard     Doubt     Grace
Space (GB):         0.00   1000.00   1500.00      0.00      none
Files (x1000):         0         0         0         0      none
**********************************************************************

For a basic interpretation of this output, note that the "Used" column tells you how much disk space you are using, whereas "Soft" denotes the limit this "Used" amount should not exceed. The "Hard" column is the limit that "Used" plus "Doubt" must not cross. Healthy disk space management requires that you periodically delete unused files in your $HOME directory, keeping its usage below the limits at all times. If you exceed a limit, a grace period will be shown in the "Grace" column; if the situation is not corrected within the grace period, you will be banned from writing to the disk.

For further information you can read the mmlsquota command manual page.
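
For example, the checks described above can be run from a UI node as follows (the output will differ for each user and group):

$ mmlsquota gpfs_csic                   # user quota on the home filesystem
$ mmlsquota -g `id -g` gpfs_projects    # group quota on the projects filesystem
$ /nfs4/usr/bin/rep-user-quotas.py      # combined report, as in the sample above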

6. Extra utils/Software

Some extra packages can be found at /nfs4/opt/. This is the location for some pieces of software commonly used like:

  • The Matlab-like octave language for numerical analysis.

  • The gnuplot program for data plotting.

  • The valgrind tools for profiling and debugging.

Please note that these packages are provided as-is, without further support from IFCA staff.
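
As an illustration only (the directory layout under /nfs4/opt/ is not described here, so the package path below is a guess to adapt), you can check what is available and add a tool to your environment:

$ ls /nfs4/opt/                             # list the installed packages
$ export PATH=/nfs4/opt/octave/bin:$PATH    # hypothetical path: adjust to the actual package directory
$ octave --version                          # the tool is now usable in this session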

7. Support

Before opening a new incidence, please check the Frequently Asked Questions page.

Questions, support requests and/or feedback should be directed through the Helpdesk.


CategoryUserSupport
