Ceph at IFCA
* Ceph storage at IFCA is based on two DELL storage servers, providing level-2 redundancy and about 50 TB of available capacity, plus the monitors and a Ceph volume machine that integrate this mini cluster with OpenStack.
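A couple of quick checks of that setup, as a sketch assuming they are run on a node with an admin keyring and that volumes is the RBD pool referenced below:
sudo ceph -s                          # overall cluster health, monitors and raw capacity
sudo ceph osd pool get volumes size   # replica count of the volumes pool (the redundancy level)
sudo ceph df                          # per-pool usage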
* Client configuration is managed by Puppet, but a few details have to be kept in mind:
- Minimum hypervisor version is Ubuntu 16.04
- Minimum libvirt version is 1.3.2 (be sure the xen driver is enabled)
- Test that the rbd pools are accessible (rbd ls volumes)
- If there is a cephx error, make sure /etc/profile.d/ceph-cinder.sh has been sourced (it should be put in place by Puppet)
- Make sure cinder.client.keyring is properly set under /etc/ceph (it should be put in place by Puppet)
- virsh secret-list should show:
{uuid} ceph client.cinder secret
If not, add it manually (restarting libvirt-bin deletes this value); see the sketch after this list if the definition file itself is missing:
virsh secret-define --file /etc/libvirt/secrets/{uuid}.xml
virsh secret-set-value --secret {uuid} --base64 $(cat /etc/libvirt/secrets/{uuid}.base64)
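If the secret definition file is missing, it can be recreated by hand before running the commands above. This is a minimal sketch assuming the standard Ceph/libvirt usage type; {uuid} stays a placeholder and must match the rbd_secret_uuid configured for Cinder/Nova, and the file path is only illustrative:
cat > /etc/libvirt/secrets/{uuid}.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>{uuid}</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-list   # should now show {uuid} with usage "ceph client.cinder secret"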
Install or Reinstall a Ceph OSD Node
- Execute from the manager node:
ceph-deploy install "node_name"
- Make sure the ceph user keys are installed properly on the client and that 20_ceph exists under sudoers.d:
Defaults:ceph !requiretty
ceph ALL = (root) NOPASSWD:ALL
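A quick way to confirm both points from the manager node, as a sketch that assumes node_name is the placeholder used above and that the ceph deploy user has SSH access:
ssh node_name ceph --version                        # the ceph packages installed by ceph-deploy
ssh ceph@node_name sudo -n true && echo "sudo OK"   # fails if 20_ceph is missing or requiretty is still set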
- If the node already exists, stop its OSDs and delete their auth entries. List the OSDs and mark them out:
sudo ceph osd tree
for i in 0 2 4 6 8 10 12 14;do sudo ceph osd out osd.$i;done
- Remove them from the CRUSH map:
for i in 0 2 4 6 8 10 12 14;do sudo ceph osd crush remove osd.$i;done
- Delete their auth keys:
for i in 0 2 4 6 8 10 12 14;do sudo ceph auth del osd.$i;done
- Delete the OSDs:
for i in 0 2 4 6 8 10 12 14;do sudo ceph osd rm osd.$i;done
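The same removal steps combined into a single pass, as a sketch assuming the OSD daemons are managed by systemd (as on Ubuntu 16.04) and that the ids 0 2 4 ... 14 are the ones ceph osd tree reports for this node; the systemctl line has to run on the OSD node itself:
for i in 0 2 4 6 8 10 12 14; do
    sudo systemctl stop ceph-osd@$i      # stop the daemon
    sudo ceph osd out osd.$i             # mark the OSD out so data rebalances
    sudo ceph osd crush remove osd.$i    # drop it from the CRUSH map
    sudo ceph auth del osd.$i            # remove its cephx key
    sudo ceph osd rm osd.$i              # remove the OSD entry
done
sudo ceph osd tree                       # the node should now show no OSDs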