Ceph at IFCA

* Ceph IFCA storage is based on two DELL storage servers providing level-2 redundancy and about 50 TB of available capacity, plus monitors and a Ceph volume machine integrating this mini cluster with OpenStack.

* Client configuration is managed by Puppet, but a few details have to be kept in mind:

  • Minimum hypervisor OS version is 16.04
  • Minimum libvirt version is 1.3.2 (be sure the xen driver is enabled)
  • Test that the rbd pools are accessible (rbd ls volumes); see the check sketched after this list
    • If there is a cephx error, be sure /etc/profile.d/ceph-cinder.sh has been sourced (this should be done by Puppet)
    • Be sure cinder.client.keyring is properly set under /etc/ceph (this should be done by Puppet)
  • virsh secret-list should show:
    • {uuid} ceph client.cinder secret
      If it is missing, add it manually (restarting libvirt-bin deletes this value):
      {{{
      virsh secret-define --file /etc/libvirt/secrets/{uuid}.xml
      virsh secret-set-value --secret {uuid} --base64 $(cat /etc/libvirt/secrets/{uuid}.base64)
      }}}
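
A minimal sketch for verifying the client setup end to end on a hypervisor. It only reuses the checks listed above, and it assumes that the Puppet-managed /etc/profile.d/ceph-cinder.sh exports the cinder cephx credentials and that {uuid} stands for the libvirt secret UUID from the previous item:

{{{
# Hedged verification sketch: adjust ids, paths and the {uuid} placeholder to the real values.
source /etc/profile.d/ceph-cinder.sh      # presumably exports the cinder cephx credentials
rbd ls volumes                            # should list the RBD images without a cephx error
virsh secret-list                         # should show "{uuid}  ceph client.cinder secret"
virsh secret-get-value --secret {uuid}    # should print the base64 key libvirt uses for client.cinder
}}}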

Install or reinstall a CEPH OSD node

  • Execute from the manager node:
    {{{
    ceph-deploy install "node_name"
    }}}
  • Be sure the ceph user keys are installed properly on the client and that 20_ceph exists in /etc/sudoers.d:
    {{{
    Defaults:ceph !requiretty

    ceph ALL = (root) NOPASSWD:ALL
    }}}
  • If the node already exists, stop and remove its old OSDs, starting by marking them out (ids as shown by ceph osd tree):
    {{{
    sudo ceph osd tree
    for i in 0 2 4 6 8 10 12 14;do sudo ceph osd out osd.$i;done
    }}}
  • Remove them from the CRUSH map:
    {{{
    for i in 0 2 4 6 8 10 12 14;do sudo ceph osd crush remove osd.$i;done
    }}}
  • Delete their auth entries:
    {{{
    for i in 0 2 4 6 8 10 12 14;do sudo ceph auth del osd.$i;done
    }}}
  • Delete the OSDs:
    {{{
    for i in 0 2 4 6 8 10 12 14;do sudo ceph osd rm osd.$i;done
    }}}
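
The four removal loops above can also be run as a single pass per OSD. A consolidated sketch, using the same example id list as above (the ids must match the node's actual OSDs as reported by sudo ceph osd tree):

{{{
# Consolidated removal: one loop per OSD id instead of four separate loops.
# The id list below is only the example set used above; take the real ids from "sudo ceph osd tree".
for i in 0 2 4 6 8 10 12 14; do
    sudo ceph osd out osd.$i            # mark the OSD out so data rebalances away from it
    sudo ceph osd crush remove osd.$i   # drop it from the CRUSH map
    sudo ceph auth del osd.$i           # remove its cephx key
    sudo ceph osd rm osd.$i             # finally delete the OSD from the cluster
done
}}}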
