#acl EditorGroup:read,write,revert Known:read All:read
#pragma section-numbers 2

= IFCA Datacenter usage guidelines =

<<TableOfContents>>

{{{#!wiki note
If you find any information that is out-dated, incorrect or incomplete, do not hesitate to [[#Support|open a ticket]].
}}}

== Introduction ==

The '''GRIDUI''' (Grid User Interface) cluster is the interactive gateway to the [[http://grid.ifca.es|Advanced Computing and e-Science]] resources at IFCA. The cluster is formed by several identical Scientific Linux 5 hosts that can be reached through a single entry point. Connections to the internal machines are managed by a director node that balances the sessions across the nodes available at any given moment. Nevertheless, direct access to a particular node can be obtained.

Please note that this cluster is ''not intended for the execution of CPU-intensive tasks''; for that purpose, use any of the available computing resources. Every spawned process is limited to a maximum CPU time of 2 hours.

Login on these machines is provided via [[http://en.wikipedia.org/wiki/Secure_Shell|Secure Shell]]. The RSA key fingerprint of these hosts is `46:85:91:c1:eb:61:55:34:25:2c:d6:0a:08:22:1f:77`. Outgoing SSH connections are not allowed by default from this cluster. Inactive SSH sessions will be closed after 12 hours.

{{{#!wiki caution
'''Direct node access'''

Even though it is possible to bypass the director by accessing a node directly, this is ''not recommended nor advisable''. Fair usage of the resources and proper balancing of the connections cannot be guaranteed if users abuse this feature. Please note also that all the GRIDUI machines have exactly the same hardware and software, so there is normally no reason to prefer a particular node.
}}}

{{{#!wiki warning
'''Scientific Linux 4 cluster'''

Note that since 3rd November 2010 no Scientific Linux CERN 4 version is available.
}}}
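On your first connection, OpenSSH will ask you to confirm the host key; the fingerprint it displays should match the one given above. A sketch of the usual prompt, with `jdoe` as a placeholder username:

{{{#!highlight console numbers=disable
$ ssh jdoe@gridui.ifca.es
The authenticity of host 'gridui.ifca.es' can't be established.
RSA key fingerprint is 46:85:91:c1:eb:61:55:34:25:2c:d6:0a:08:22:1f:77.
Are you sure you want to continue connecting (yes/no)? yes
}}}

The available entry points and ports are summarized in the table below.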
||'''Hostname''' ||'''Port''' ||'''Gives access to''' ||'''Distribution and Architecture''' ||
||gridui.ifca.es<<BR>>griduisl5.ifca.es ||22, 22000 ||Balanced GRIDUI Cluster ||Scientific Linux CERN SLC release 5.5 (Boron), x86_64 ||
||gridui.ifca.es<<BR>>griduisl5.ifca.es ||22001 ||gridui01.ifca.es ||Scientific Linux CERN SLC release 5.5 (Boron), x86_64 ||
||gridui.ifca.es<<BR>>griduisl5.ifca.es ||22002 ||gridui02.ifca.es ||Scientific Linux CERN SLC release 5.5 (Boron), x86_64 ||
||gridui.ifca.es<<BR>>griduisl5.ifca.es ||22003 ||gridui03.ifca.es ||Scientific Linux CERN SLC release 5.5 (Boron), x86_64 ||
||gridui.ifca.es<<BR>>griduisl5.ifca.es ||22004 ||gridui04.ifca.es ||Scientific Linux CERN SLC release 5.5 (Boron), x86_64 ||
||gridui.ifca.es<<BR>>griduisl5.ifca.es ||22005 ||gridui05.ifca.es ||Scientific Linux CERN SLC release 5.5 (Boron), x86_64 ||
||gridui.ifca.es<<BR>>griduisl5.ifca.es ||22006 ||gridui06.ifca.es ||Scientific Linux CERN SLC release 5.5 (Boron), x86_64 ||
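For instance, to open a balanced session, or to land on one specific node through its dedicated port (again with `jdoe` as a placeholder username, and keeping in mind that direct access is discouraged):

{{{#!highlight console numbers=disable
$ ssh jdoe@gridui.ifca.es              # balanced session on any node
$ ssh -p 22003 jdoe@gridui.ifca.es     # goes straight to gridui03.ifca.es
}}}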
== Authentication ==

Authentication is centralized via secured LDAP. Any change made to a user account on one node takes immediate effect in the whole cluster.

There is also a secured web interface, available at https://cerbero.ifca.es/, that allows users to change their account details. If you need to reset your account password, please contact the system administrators. With this username you should also be able to access the ticketing system at http://support.ifca.es/

== Grid resources ==

Since each of the supported virtual organizations uses its own set of resources, it is commonly required to set up the correct environment variables in order to use the grid tools. Environment scripts are located under `/nfs4/usr/etc/env/`:

{{{#!highlight console numbers=disable
$ source /nfs4/usr/etc/env/ibergrid-env.sh
}}}

== PBS Cluster ==

The PBS cluster was '''decommissioned''' in December 2009. Please use the new [[#SGE_Cluster|SGE Cluster]] instead.

== SGE Cluster ==

The SGE Cluster is based on `Scientific Linux CERN SLC release 5.5` machines, running on x86_64. The exact number of available resources is shown on the [[http://monitor.ifca.es/ganglia/?c=SGE%20Worker%20Nodes&m=&r=hour&s=descending&hc=4|monitoring page]]. Local submission is allowed for certain projects. As stated below, there are some shared areas that can be accessed from the computing nodes.

The underlying batch system is [[https://arc.liv.ac.uk/trac/SGE|Son of Grid Engine]] 8.0.0d. Refer to the following sources for information:

 * Using Grid Engine [[http://wikis.sun.com/display/GridEngine/Using+Sun+Grid+Engine|official documentation]].
 * Some useful [[http://arc.liv.ac.uk/SGE/howto/|HOWTOS]].
 * SGE [[http://grid.ifca.es/wiki/Cluster/SGE/howto/basic_usage.html|basic usage]].

{{{#!wiki important
'''IFCA Grid Engine documentation has moved'''

The IFCA-specific documentation has been moved [[/GridEngine|to a separate section]].
}}}

== Shared areas ==

The `$HOME` directories (`/home/$USER`) are shared between the UIs and the computing nodes. There is also a ''projects'' shared area (located at `/gpfs/csic_projects/`), accessible from both the UIs and the computing nodes. If your group does not have this area, please open an [[http://support.ifca.es|incident ticket]].

=== Usage ===

The shared directories '''are not intended for scratch'''; use the temporary areas of the local filesystems instead. In other words, instruct every job you send to copy its input from the shared directory to the local scratch (`$TMPDIR`), execute all operations there, and then copy the output back to some shared area where you will be able to retrieve it comfortably from the UI. A job script following this pattern is sketched below, after the quota discussion. Note that the contents of `$TMPDIR` are removed after job execution.

=== Disk quotas ===

Disk quotas are enabled on both the user and the projects filesystems. A message with this information should be shown upon login. If you need more quota on your user space (not in the project shared area), please contact the system administrators explaining your reasons.

If you wish to check your quota at a later time, you can use the commands {{{mmlsquota gpfs_csic}}} (for user quotas) and {{{mmlsquota -g `id -g` gpfs_projects}}} (for group quotas). A script reporting both quotas is located at `/nfs4/usr/bin/rep-user-quotas.py`.
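The job script pattern recommended in the [[#Usage|Usage]] subsection above could look as follows. This is a minimal sketch: the project paths, file names and the `my_analysis` program are illustrative, not real IFCA resources.

{{{#!highlight bash numbers=disable
#!/bin/bash
#$ -N stage-example
#$ -cwd

# Stage the input from the shared area to the node-local scratch.
cp /gpfs/csic_projects/myproject/input.dat "$TMPDIR"/
cd "$TMPDIR"

# Run everything inside the scratch directory (my_analysis is hypothetical).
/gpfs/csic_projects/myproject/bin/my_analysis input.dat > output.dat

# Copy the results back before the job finishes: $TMPDIR is wiped afterwards.
cp output.dat /gpfs/csic_projects/myproject/results/
}}}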
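As for the quota commands themselves, a quick check from any UI node could look like this (a usage sketch):

{{{#!highlight console numbers=disable
$ mmlsquota gpfs_csic                  # user quota
$ mmlsquota -g `id -g` gpfs_projects   # group quota
$ /nfs4/usr/bin/rep-user-quotas.py     # report covering both
}}}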
A sample output of `rep-user-quotas.py` could be:

{{{
**********************************************************************
           INFORMATION ABOUT YOUR CURRENT DISK USAGE

 USER            Used      Soft      Hard   Doubt   Grace
 Space (GB):     3.41     20.00      0.00    0.06    none
 Files (x1000):    64         0         0       0    none

 GROUP           Used      Soft      Hard   Doubt   Grace
 Space (GB):     0.00   1000.00   1500.00    0.00    none
 Files (x1000):     0         0         0       0    none
**********************************************************************
}}}

For a basic interpretation of this output, note that the "Used" column tells you how much disk space you are using, whereas "Soft" is the limit that this "Used" amount should not exceed. The "Hard" column is the limit that "Used" plus "Doubt" must not cross. Healthy disk space management requires that you periodically delete unused files in your `$HOME` directory, keeping its usage below the limits at all times. If you exceed a limit, a grace period will be shown in the "Grace" column; if the situation is not corrected within that period, you will be banned from writing to the disk. For further information you can read the [[http://www.nersc.gov/vendor_docs/ibm/gpfs/am3admst119.html|mmlsquota command manual page]].

== Extra utils/Software ==

Some extra packages, such as the latest [[http://python.org|Python]] versions and the [[http://software.intel.com/en-us/articles/non-commercial-software-development/|Intel Non-Commercial Compilers]], can be found at `/nfs4/opt/`. This is also the preferred location for other commonly used software, such as:

 * The MATLAB-like `octave` language for numerical analysis.
 * The `gnuplot` data plotting program.
 * The `valgrind` profiling and debugging tools.

Please note that these packages are provided as-is, without further support from IFCA staff.

== Support ==

Before opening a new ticket, please check the [[Cluster/FAQ|Frequently Asked Questions page]]. Questions, support and/or feedback should be directed through the [[https://support.ifca.es|Helpdesk]].

----
CategoryUserSupport CategoryLocalCluster