Differences between revisions 1 and 16 (spanning 15 versions):

 * Revision 1 as of 2011-09-02 12:32:45 (size 829, editor: aloga)
 * Revision 16 as of 2011-11-29 09:06:16 (size 7511, editor: cabellos, comment: brief on volumes)

## page was renamed from Cluster/Cloud
The cloud is managed from the [[Cluster/Usage|GRIDUI Cluster]]. Ensure that your credentials are properly installed by issuing the following command and checking that it returns something:

{{{
$ echo $NOVA_API_KEY
}}}

It should return a string. If it returns nothing, please [[http://support.ifca.es|open an incident]].
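Installations like this one usually export several `NOVA_*` variables alongside `NOVA_API_KEY`. The following sketch checks a whole list at once; the exact variable set is an assumption (it depends on the credentials file you were given), and `check_cloud_credentials` is a made-up helper name, so adjust the list to match your rc file:

```shell
# Sketch of a credentials sanity check. The set of NOVA_* variables
# below is an assumption -- edit it to match what your rc file exports.
check_cloud_credentials() {
  missing=0
  for var in NOVA_API_KEY NOVA_USERNAME NOVA_PROJECT NOVA_URL; do
    eval "value=\${$var:-}"          # indirect lookup of the variable
    if [ -z "$value" ]; then
      echo "missing: $var"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "credentials look ok"
  fi
  return 0
}
```

Run it after sourcing your credentials file; any line starting with `missing:` means the corresponding variable is not set.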

== Create a machine ==

To create a machine you have to perform several steps:

 * Decide which of the pre-built images you are going to use.
 * Decide which of the available sizes is suitable for you.
 * Decide (and create, if you have not done so yet) which keypair should be used to connect to the machine.

=== Image and size selection ===

==== Image selection ====
There are several pre-built images available. To check them, use the `euca-describe-images` command:
{{{
$ euca-describe-images
IMAGE ami-00000008 None (cloudpipe) available public machine instance-store
IMAGE ami-00000007 None (Debian Wheezy (2011-08)) available public machine instance-store
IMAGE ami-00000006 None (lucid-server-uec-amd64.img) available public machine instance-store
IMAGE ami-00000003 None (Scientific Linux 5.5) available public machine instance-store
IMAGE ami-00000001 None (Scientific Linux 5.5) available public machine instance-store
}}}

Once you have decided which image to use, write down its identifier (ami-XXXXXXXX).
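If you only want the identifiers and names, the listing can be filtered. This is a convenience sketch that assumes the column layout of the sample output above (it may differ between euca2ools versions), and `extract_image_ids` is a made-up helper name:

```shell
# Reduce `euca-describe-images` output to "ami-id<TAB>name".
# Assumes lines shaped like the sample above: the name sits between the
# first "(" and the last ")" before the status columns.
extract_image_ids() {
  awk '$1 == "IMAGE" {
    id = $2
    name = $0
    sub(/^[^(]*\(/, "", name)   # drop everything up to the first "("
    sub(/\)[^)]*$/, "", name)   # drop the last ")" and trailing columns
    printf "%s\t%s\n", id, name
  }'
}
# usage: euca-describe-images | extract_image_ids
```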

==== Instance types ====

You can choose the size of your machine (i.e. how many CPUs and how much memory) from the following instance types:

===== Standard machines =====

|| '''Name''' || '''Memory''' || '''# CPU''' || '''Local storage''' || '''Swap''' ||
|| m1.tiny || 512MB || 1 || 0GB || 0GB||
|| m1.small || 2048MB || 1 || 20GB || 0GB||
|| m1.medium || 4096MB || 2 || 40GB || 0GB||
|| m1.large || 8192MB || 4 || 80GB || 0GB||
|| m1.xlarge || 16384MB || 8 || 160GB || 0GB||

===== High-memory machines =====

|| '''Name''' || '''Memory''' || '''# CPU''' || '''Local storage''' || '''Swap''' ||
|| m2.8g || 8192MB || 1 || 10GB || 0GB||

=== Create SSH credentials ===

For most users this is a one-time step (although you can create as many SSH keypairs as you want). You have to create an SSH keypair so that it can be injected into the newly created machine. The following command creates a keypair named `cloudkey` and stores it under `~/.cloud/cloudkey.pem`:

{{{
$ euca-add-keypair cloudkey > ~/.cloud/cloudkey.pem
}}}

Make sure that you keep the file `~/.cloud/cloudkey.pem` safe, since it contains the private key needed to access your cloud machines. You can check the keypair name later with the `euca-describe-keypairs` command.
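SSH refuses to use a private key file that other users can read, so it is worth restricting the file's permissions right after creating it. A small sketch (`save_keypair` is a hypothetical helper name; it stores whatever the given command prints on stdout):

```shell
# Store a newly created keypair with owner-only permissions (ssh will
# refuse a key file that is readable by others). `save_keypair` is a
# hypothetical helper; the key material is whatever the given command
# prints on stdout.
save_keypair() {
  keyfile=$1; shift
  mkdir -p "$(dirname "$keyfile")"   # make sure ~/.cloud exists
  "$@" > "$keyfile"                  # run e.g. euca-add-keypair cloudkey
  chmod 600 "$keyfile"               # private key: owner read/write only
}
# usage: save_keypair ~/.cloud/cloudkey.pem euca-add-keypair cloudkey
```

If you already created the key with a plain redirect, `chmod 600 ~/.cloud/cloudkey.pem` achieves the same result.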

=== Launching the instance ===

To launch the instance, issue `euca-run-instances`, specifying:
 * which keypair to use (in the example, `cloudkey`).
 * which size should be used (in the example, `m1.tiny`).
 * which image should be used (in the example, `ami-00000001`).

{{{
$ euca-run-instances -k cloudkey -t m1.tiny ami-00000001
RESERVATION r-1zdwog0m ACES default
INSTANCE i-00000048 ami-00000001 scheduling cloudkey (ACES, None) 2011-09-02T12:19:41Z None None
}}}

You can check its status with `euca-describe-instances`:

{{{
$ euca-describe-instances i-00000048
RESERVATION r-vmfu1xq2 ACES default
INSTANCE i-00000048 ami-00000001 172.16.1.8 172.16.1.8 blocked cloudkey (ACES, cloud01) 0 m1.tiny 2011-09-02T12:15:32Z nova
}}}
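Note that the two sample outputs above show the instance state in different positions (`scheduling` in one line, `blocked` in the other), because the IP address columns only appear once addresses are assigned. A parser therefore has to scan the `INSTANCE` line for a state keyword rather than read a fixed column; this is a sketch, and the list of state names is an assumption:

```shell
# Print the state of one instance from `euca-describe-instances` output.
# The state column shifts once IP addresses are assigned, so we scan the
# INSTANCE line for a known state keyword (the state list is an
# assumption; extend it if your deployment reports other states).
instance_state() {
  awk -v id="$1" '
    $1 == "INSTANCE" && $2 == id {
      for (i = 3; i <= NF; i++)
        if ($i ~ /^(scheduling|networking|launching|pending|running|blocked|shutdown|terminated)$/) {
          print $i
          exit
        }
    }'
}
# usage: euca-describe-instances i-00000048 | instance_state i-00000048
```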

=== Connect to the server ===

==== Authorize SSH connections and ping ====

If you decide not to use a VPN, but to connect to your machines through the GRIDUI cluster, you have to authorize such connections with:

{{{
$ euca-authorize -P tcp -p 22 default
$ euca-authorize -P icmp -t -1:-1 default
}}}

==== SSH Connection ====

You have to use the private identity file that you created before (`~/.cloud/cloudkey.pem`) and pass it to the SSH client. To find the IP address to connect to, use `euca-describe-instances`:

{{{
$ ssh -i ~/.cloud/cloudkey.pem root@172.16.1.8
}}}
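Looking up the IP address by hand gets tedious. The following sketch extracts it from the `INSTANCE` line (assuming the column layout of the sample output above, where the first IP address is field 4; `instance_ip` is a made-up helper name):

```shell
# Extract an instance's IP from `euca-describe-instances` output
# (field 4 of the INSTANCE line, per the sample layout above). Only
# meaningful once the instance is past "scheduling" and has an address.
instance_ip() {
  awk -v id="$1" '$1 == "INSTANCE" && $2 == id { print $4; exit }'
}
# usage:
#   ip=$(euca-describe-instances i-00000048 | instance_ip i-00000048)
#   ssh -i ~/.cloud/cloudkey.pem "root@$ip"
```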

=== Stopping the server ===

Instances can be stopped (terminated) with `euca-terminate-instances`:
{{{
$ euca-terminate-instances i-00000048
}}}

== Using Cloud Storage ==

=== Creating a Volume ===

The following command creates a 100 GB volume in the `nova` availability zone:

{{{
$ euca-create-volume -s 100 -z nova
}}}

=== Using a Volume in an instance ===

Attach the volume to a running instance, specifying the device name under which it should appear:

{{{
$ euca-attach-volume -i i-00000001 -d /dev/xvdc vol-00000001
}}}

Inside the instance, the new device shows up in the `fdisk -l` output:

{{{
server-1 $ sudo fdisk -l | grep Disk
Disk /dev/xvda doesn't contain a valid partition table
Disk /dev/xvdb doesn't contain a valid partition table
Disk /dev/xvdc doesn't contain a valid partition table
Disk /dev/xvda: 10.7 GB, 10737418240 bytes
Disk identifier: 0x00000000
Disk /dev/xvdb: 21.5 GB, 21474836480 bytes
Disk identifier: 0x00000000
Disk /dev/xvdc: 107.4 GB, 107374182400 bytes
Disk identifier: 0x00000000
}}}

Before its first use, create a filesystem on the volume and mount it:

{{{
server-1 $ sudo mkfs.ext4 /dev/xvdc
(...)
server-1 $ sudo mount -t ext4 /dev/xvdc /srv
}}}

The mounted volume now shows up in `df -h`:

{{{
server-1 $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda 9.9G 622M 8.8G 7% /
none 996M 144K 995M 1% /dev
none 1001M 0 1001M 0% /dev/shm
none 1001M 48K 1001M 1% /var/run
none 1001M 0 1001M 0% /var/lock
none 1001M 0 1001M 0% /lib/init/rw
/dev/xvdb 20G 173M 19G 1% /mnt
/dev/xvdc 99G 188M 94G 1% /srv
}}}
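The mount above does not survive a reboot of the instance. If the volume stays attached, you could make the mount persistent with an `/etc/fstab` entry like this sketch (assumptions: the device keeps the name `/dev/xvdc` across reboots, and your system supports the `nofail` option, which avoids a boot failure while the volume is detached):

```
# /etc/fstab entry (sketch; device name and nofail support are assumptions)
/dev/xvdc  /srv  ext4  defaults,nofail  0  2
```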

When you are done, unmount the volume inside the instance and then detach it:

{{{
$ euca-detach-volume vol-00000001
}}}

=== Reusing an Old Volume ===

A detached volume keeps its data, so you can attach it to an instance again later. Do not run `mkfs` again (that would erase its contents); just attach and mount it:

{{{
$ euca-attach-volume -i i-00000001 -d /dev/xvdc vol-00000001
}}}

{{{
server-1 $ sudo mount -t ext4 /dev/xvdc /srv
}}}

{{{
$ euca-detach-volume vol-00000001
}}}

== Advanced topics ==

=== Attach to the project's VPN ===

Each project has a VPN assigned to it. You can attach any computer to this VPN, connecting it to your project's internal network. To do so, perform the following steps (instructions for GNU/Linux only):

 1. Copy your `~/.cloud` directory to the machine that you want to attach to your project's VPN.
 1. Install [[https://www.openvpn.net/|OpenVPN]] on that machine.
 1. Launch `openvpn` with the `nova-vpn.conf` configuration file:

{{{
# cd cloud_credentials
# openvpn --config nova-vpn.conf
}}}

Please note that several paths in the `nova-vpn.conf` configuration file are relative to the directory in which it is located. Should you wish to use different or separate paths, edit `nova-vpn.conf` and adjust the `cert`, `key` and `ca` parameters.

MacOS users may use [[http://code.google.com/p/tunnelblick/|Tunnelblick]] (a GUI for OpenVPN), which can use the `nova-vpn.conf` and certificate files without any changes.



= Cloud Computing at IFCA =

'''This is a beta service.''' Please note that we are currently deploying the Cloud infrastructure at IFCA, so work is still in progress. If you find any error, please open a ticket on the helpdesk.

== Introduction ==

This is a beta service, since its deployment and development are ongoing. However, to test the functionality, access to the infrastructure can be granted to certain users.

We highly recommend checking this document frequently, since the documentation may change.

eciencia: Cloud/Usage (last edited 2017-07-04 11:12:30 by aloga)