= Ceph at IFCA =

 * [[http://ceph.com|Ceph]] IFCA storage is based on two DELL storage servers, providing level-2 redundancy and about 50 TB of available capacity, plus monitors and a Ceph volume machine that integrates this mini cluster with OpenStack.
 * Client configuration is handled by Puppet, but a few details have to be kept in mind:
   * Minimum hypervisor version is 16.04
   * Minimum libvirt version is 1.3.2 (be sure the xen driver is enabled)
   * Test that the rbd pools are accessible (rbd ls volumes)
     * If there is a cephx error, be sure /etc/profile.d/ceph-cinder.sh has been sourced (this should be done by Puppet)
     * Be sure cinder.client.keyring is properly set under /etc/ceph (this should be done by Puppet)
   * virsh secret-list should show:
     {{{
     {uuid} ceph client.cinder secret
     }}}
     If not, you should add it manually (restarting libvirt-bin deletes this value; an example secret definition is sketched after this list):
     {{{
     virsh secret-define --file /etc/libvirt/secrets/{uuid}.xml
     virsh secret-set-value --secret {uuid} --base64 $(cat /etc/libvirt/secrets/{uuid}.base64)
     }}}
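     A minimal sketch of the secret definition that virsh secret-define consumes, assuming the standard libvirt "ceph" usage type; {uuid} is the same placeholder as above (typically the rbd_secret_uuid configured for Cinder/Nova), and the file path is only an assumed layout:
     {{{
     <!-- /etc/libvirt/secrets/{uuid}.xml (sketch) -->
     <secret ephemeral='no' private='no'>
       <uuid>{uuid}</uuid>
       <usage type='ceph'>
         <name>client.cinder secret</name>
       </usage>
     </secret>
     }}}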
== Install or Reinstall a Ceph OSD node ==
 * Execute from the manager node:
  {{{
  ceph-deploy install "node_name"
  }}}
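  For example, with a hypothetical node name, followed by a quick cluster status check from the manager node:
  {{{
  ceph-deploy install cephosd01   # cephosd01 is a placeholder hostname
  sudo ceph -s                    # confirm the cluster is reachable and healthy
  }}}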
 * Be sure the ceph user keys are installed properly on the client and that 20_ceph exists in /etc/sudoers.d:
  {{{
  Defaults:ceph !requiretty

  ceph ALL = (root) NOPASSWD:ALL
  }}}
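  A quick check that the sudoers drop-in is valid and applied (a sketch, assuming the file is /etc/sudoers.d/20_ceph):
  {{{
  sudo visudo -cf /etc/sudoers.d/20_ceph   # syntax check of the drop-in file
  sudo -l -U ceph                          # list the privileges granted to the ceph user
  }}}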
 * If the node already exists, stop its OSD daemons and remove the existing OSDs (first mark them out):
   {{{
   sudo ceph osd tree
   }}}
   {{{
   for i in 0 2 4 6 8 10 12 14;do sudo ceph osd out osd.$i;done
   }}}
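   Before removing anything it can help to wait for the data to migrate off those OSDs and, on the OSD node itself, stop the daemons (a sketch, assuming systemd ceph-osd@ units as on Ubuntu 16.04):
   {{{
   sudo ceph -w                                                            # watch recovery/rebalancing progress
   for i in 0 2 4 6 8 10 12 14;do sudo systemctl stop ceph-osd@$i;done     # run on the OSD node
   }}}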
 * Remove them from the CRUSH map:
   {{{
   for i in 0 2 4 6 8 10 12 14;do sudo ceph osd crush remove osd.$i;done
   }}}
 * Delete their auth keys:
   {{{
   for i in 0 2 4 6 8 10 12 14;do sudo ceph auth del osd.$i;done
   }}}
 * Delete the OSDs:
   {{{
   for i in 0 2 4 6 8 10 12 14;do sudo ceph osd rm osd.$i;done
   }}}
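 The four removal loops above can also be run as a single pass (same OSD ids, from the manager node):
   {{{
   for i in 0 2 4 6 8 10 12 14;do
     sudo ceph osd out osd.$i           # stop placing data on the OSD
     sudo ceph osd crush remove osd.$i  # remove it from the CRUSH map
     sudo ceph auth del osd.$i          # delete its auth key
     sudo ceph osd rm osd.$i            # remove the OSD itself
   done
   }}}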
