Cloud Computing at IFCA
This is a beta service
Please note that we are currently deploying the Cloud infrastructure at IFCA, so work is still in progress. If you find any error, please open a ticket on the helpdesk.
1. Introduction
This is a beta service, since its deployment and development are still ongoing. However, in order to test the functionality, access to the infrastructure can be granted to certain users.
We highly recommend checking this document frequently, since the documentation may change.
The cloud is managed from the GRIDUI Cluster. Ensure that you have the credentials properly installed by issuing the following command and checking that it returns something:
$ echo $NOVA_API_KEY
It should return a string. If it does not, please open a ticket on the helpdesk.
2. Create a machine
To create a machine you have to perform several steps (a compact end-to-end sketch follows this list):
- Decide which of the pre-built images you are going to use.
- Decide which of the available sizes (instance types) is suitable for you.
- Decide which keypair should be used to connect to the machine (and create one if you do not have it yet).
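As a quick orientation, and using the example names that appear in the subsections below (keypair cloudkey, instance type m1.tiny, image ami-00000001), the whole workflow condenses into:
$ euca-describe-images                               # pick an image identifier (ami-XXXXXXXX)
$ euca-add-keypair cloudkey > ~/.cloud/cloudkey.pem  # one-time keypair creation
$ euca-run-instances -k cloudkey -t m1.tiny ami-00000001
$ euca-describe-instances                            # check the state and IP of the new instance
Each step is described in detail below.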
2.1. Image and size selection
2.1.1. Image selection
There are several pre-built images available. To check them, use the euca-describe-images command:
$ euca-describe-images
IMAGE   ami-00000008    None (cloudpipe)                        available       public  machine instance-store
IMAGE   ami-00000007    None (Debian Wheezy (2011-08))          available       public  machine instance-store
IMAGE   ami-00000006    None (lucid-server-uec-amd64.img)       available       public  machine instance-store
IMAGE   ami-00000003    None (Scientific Linux 5.5)             available       public  machine instance-store
IMAGE   ami-00000001    None (Scientific Linux 5.5)             available       public  machine instance-store
Once you have decided which image to use, write down its identifier (ami-XXXXXXXX).
2.1.2. Instance types
You can choose the size of your machine (i.e. how many CPUs and how much memory) from the following instance types:
2.1.2.1. Standard machines
Name      | Memory  | # CPU | Local storage | Swap
m1.tiny   | 512MB   | 1     | 0GB           | 0GB
m1.small  | 2048MB  | 1     | 20GB          | 0GB
m1.medium | 4096MB  | 2     | 40GB          | 0GB
m1.large  | 8192MB  | 4     | 80GB          | 0GB
m1.xlarge | 16384MB | 8     | 160GB         | 0GB
2.1.2.2. High-memory machines
Name  | Memory | # CPU | Local storage | Swap
m2.8g | 8192MB | 1     | 10GB          | 0GB
2.2. Create SSH credentials
For most users this is a one-time step (although you can create as many SSH keypairs as you want). You have to create an SSH keypair that will be injected into the newly created machine, using the following command (it creates a keypair named cloudkey and stores it under ~/.cloud/cloudkey.pem):
$ euca-add-keypair cloudkey > ~/.cloud/cloudkey.pem
Make sure that you keep the file ~/.cloud/cloudkey.pem safe, since it contains the private key needed to access your cloud machines. You can check the keypair name later with the euca-describe-keypairs command.
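As a quick sanity check, you may also want to restrict the key file's permissions (SSH clients refuse private keys that are readable by other users) and list the keypairs registered for your account:
$ chmod 600 ~/.cloud/cloudkey.pem
$ euca-describe-keypairs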
2.3. Launching the instance
To launch the instance, you have to issue euca-run-instances, specifying:
which keypair to use (in the example cloudkey).
which instance type should be used (in the example m1.tiny).
which image should be used (in the example ami-00000001).
$ euca-run-instances -k cloudkey -t m1.tiny ami-00000001
RESERVATION     r-1zdwog0m      ACES    default
INSTANCE        i-00000048      ami-00000001    scheduling      cloudkey (ACES, None)   2011-09-02T12:19:41Z    None    None
You can check its status with euca-describe-instances
$ euca-describe-instances i-00000048
RESERVATION     r-vmfu1xq2      ACES    default
INSTANCE        i-00000048      ami-00000001    172.16.1.8      172.16.1.8      blocked cloudkey (ACES, cloud01)        0       m1.tiny 2011-09-02T12:15:32Z    nova
2.4. Connect to the server
2.5. Authorize SSH connections and ping
If you decide not to use a VPN, but to connect to your machines through the GRIDUI cluster instead, you have to authorize such connections with:
$ euca-authorize -P tcp -p 22 default
$ euca-authorize -P icmp -t -1:-1 default
2.5.1. SSH Connection
You have to use the private identity file that you created before (~/.cloud/cloudkey.pem) and pass it to the SSH client. You can find the IP address to connect to with euca-describe-instances:
$ ssh -i ~/.cloud/cloudkey.pem root@172.16.1.8
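Note that for the Ubuntu images the default account is ubuntu rather than root (as noted later in the cloud storage section), so in that case the connection would look like:
$ ssh -i ~/.cloud/cloudkey.pem ubuntu@172.16.1.8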
2.6. Stopping the server
Instances can be terminated with euca-terminate-instances:
$ euca-terminate-instances i-00000048
3. Creating a Machine with OpenStack
Go to http://portal.cloud.ifca.es to access the OpenStack web interface, which lets you create new machines in the cloud.
3.1. Image and size selection
Select the image that you want to use (from a list of operating systems) and click “Launch”. A popup window will appear where you have to choose the configuration of the machine (requirements, name of the server, ...).
3.2. Create SSH credentials
You must import or create a keypair in order to access that machine. To do so, go to the “Access & Security” tab and click on Create or Import Keypair.
3.3. Connect to the server
In order to access the instance through SSH, you must assign an IP to it. Click on “Access & Security” again and select “Allocate IP to project”. Choose the type of IP that you want to use and click “Allocate IP”. After that, you need to link that IP with your new instance: click on the “Associate IP” button next to your new IP and select the instance that you have just created.
3.3.1. SSH Connection
The last step is to download the keypair that you created or imported and copy it to the machine that you will use to connect to the instance. Change its permissions to 600 and use the following command to connect (replace cloud.image.IP with the IP you allocated and associated above):
$ ssh -i clave.pem root@cloud.image.IP
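For reference, the permission change is simply (assuming the downloaded key was saved as clave.pem, the name used in the command above):
$ chmod 600 clave.pem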
4. Using Cloud Storage
Cloud storage is based on volumes. Volumes are raw block devices that can be created dynamically with a desired size and attached to cloud instances to be used as data disks. When you are done using the data in the volume, you can detach it from the instance and keep it for later reuse of the persisted data.
4.1. Creating a Volume
To create a volume, run the euca-create-volume command. For instance, to create a volume that is 100GB in size:
$ euca-create-volume -s 100 -z nova
VOLUME  vol-00000001    100     creating        2015-11-29
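Before attaching it, you can check that the volume has finished creating (its status should change from creating to available) with the euca-describe-volumes command shown in the next subsection:
$ euca-describe-volumes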
4.2. Using a Volume in an instance
You may attach block volumes to instances using euca-attach-volume. You will need to specify the local block device name (as it will be seen inside the instance) and the instance identifier. Currently the devices used to attach volumes should be /dev/xvdc, /dev/xvdd, ... /dev/xvdz. Attaching volume vol-00000001 to instance i-00000001 on device /dev/xvdc is done with:
$ euca-attach-volume -i i-00000001 -d /dev/xvdc vol-00000001
You can see the volume attached to the instance with the euca-describe-volumes command.
$ euca-describe-volumes
VOLUME  vol-00000001    100     nova    in-use  2015-11-29
ATTACHMENT      vol-0000000c    i-00000051      /dev/xvdc
You can then use the new volume inside your running instance. As an example, the use of the volume as an ext4 filesystem in an Ubuntu instance is described below.
1. Log into the instance and check that the device is visible (either as root, or as the ubuntu user using sudo for the commands):
server-1 $ sudo fdisk -l | grep Disk
Disk /dev/xvda doesn't contain a valid partition table
Disk /dev/xvdb doesn't contain a valid partition table
Disk /dev/xvdc doesn't contain a valid partition table
Disk /dev/xvda: 10.7 GB, 10737418240 bytes
Disk identifier: 0x00000000
Disk /dev/xvdb: 21.5 GB, 21474836480 bytes
Disk identifier: 0x00000000
Disk /dev/xvdc: 107.4 GB, 107374182400 bytes
Disk identifier: 0x00000000
2. Create an ext4 filesystem on the device and mount it on the /srv mount point (-o sync is safer in case the instance crashes):
server-1 $ sudo mkfs.ext4 /dev/xvdc
(...)
server-1 $ sudo mount -t ext4 -o sync /dev/xvdc /srv
3. Check that the volume is visible as a mounted filesystem:
server-1 $ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda             9.9G  622M  8.8G   7% /
none                  996M  144K  995M   1% /dev
none                 1001M     0 1001M   0% /dev/shm
none                 1001M   48K 1001M   1% /var/run
none                 1001M     0 1001M   0% /var/lock
none                 1001M     0 1001M   0% /lib/init/rw
/dev/xvdb              20G  173M   19G   1% /mnt
/dev/xvdc              99G  188M   94G   1% /srv
After you are done with the volume, you can detach it from the instance (you should umount it first inside your instance):
$ euca-detach-volume vol-00000001
You must detach a volume before terminating the instance or deleting the volume. If you fail to detach it, the volume may be left in an inconsistent state and you risk losing data.
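A safe shutdown sequence, using the identifiers from the examples above, is therefore (the umount runs inside the instance, the other commands from GRIDUI):
server-1 $ sudo umount /srv
$ euca-detach-volume vol-00000001
$ euca-terminate-instances i-00000051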
4.3. Reusing an Old Volume
Attach it to the new instance:
$ euca-attach-volume -i i-00000002 -d /dev/xvdc vol-00000001
Because the filesystem has already been created on the volume, you only need to mount it to access the data:
server-1 $ sudo mount -t ext4 -o sync /dev/xvdc /srv
After you are done with the volume, you can detach it from the instance:
$ euca-detach-volume vol-00000001
4.4. Other uses of Volumes
With volumes you can also create snapshots of the data, restore them, delete volumes, etc. More on volumes: http://open.eucalyptus.com/wiki/Euca2oolsStorage
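As a rough sketch (the command names below are the standard euca2ools ones; check euca-describe-snapshots for the snapshot status before relying on it), a snapshot-and-cleanup cycle looks like:
$ euca-create-snapshot vol-00000001
$ euca-describe-snapshots
$ euca-delete-volume vol-00000001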
5. Advanced topics
5.1. Attach to the project's VPN
Each project has a VPN assigned to it. You can attach any computer to it, thus connecting it to your project's internal network. To do so, you have to perform several steps (instructions are for GNU/Linux only):
Copy your ~/.cloud directory (the one holding your cloud credentials) to the machine that you want to attach to your project's VPN.
Install OpenVPN on that machine.
Launch openvpn with the nova-vpn.conf configuration file.
# cd cloud_credentials
# openvpn --config nova-vpn.conf
Please note that there are several paths in the nova-vpn.conf configuration file that are relative to the directory in which it is located. Should you wish to use different/separated paths, please edit nova-vpn.conf and adjust the cert, key and ca parameters.
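Once the tunnel is up, instances in your project should be reachable on their private addresses from the attached machine; for example, assuming the instance from the earlier examples and the ICMP rule authorized above:
$ ping 172.16.1.8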
MacOS users may use Tunnelblick (a GUI interface to OpenVPN) that can use the nova-vpn.conf and certificate files without any changes.
5.1.1. VPN with Ubuntu 10.04
1. Install the network-manager-openvpn package.
2. Add the following to /etc/dbus-1/system.d/nm-openvpn-service.conf, between the policy root and policy default sections:
<policy user="at_console">
  <allow own="org.freedesktop.NetworkManager.vpnc"/>
  <allow send_destination="org.freedesktop.NetworkManager.vpnc"/>
</policy>
3. From the network configuration applet in the GNOME bar, add a new VPN connection by importing nova-vpn.conf.
4. Edit the VPN connection and, in the routing options, enable the option to use this connection only for resources on its network.
5. Restart the computer so that all changes take effect.
Now you can activate/deactivate the VPN from the GNOME bar.
5.1.2. VPN with Windows
1. Install the OpenVPN Connect client from http://openvpn.net.
2. Rename nova-vpn.conf to nova-vpn.ovpn.
3. From Access -> Profiles -> Import from Local File, load the file nova-vpn.ovpn.
4. To connect, press the new nova-vpn button.