ceph, reloaded

(In English, as this might be of interest for a broader readership.) Well, after my initial failure with ceph, I actually didn’t give up. Seems I deserve the pain-seeker attribute …

Okay, good news first:

[ceph@node01 ~]$ sudo ceph -s
    cluster aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
     health HEALTH_WARN clock skew detected on mon.node06, mon.node12
     monmap e3: 3 mons at {node01=10.99.2.1:6789/0,node06=10.99.2.6:6789/0,node12=10.99.2.12:6789/0}, election epoch 12, quorum 0,1,2 node01,node06,node12
     mdsmap e5: 1/1/1 up {0=node03=up:active}, 1 up:standby
     osdmap e31: 5 osds: 5 up, 5 in
      pgmap v73: 192 pgs, 3 pools, 9470 bytes data, 21 objects
            6066 MB used, 928 GB / 984 GB avail
                 192 active+clean

Read: I do have a ceph cluster up and running; 5 nodes with 197G each gives a whopping 984G of space. Nice. Still wondering where replication etc. comes in …
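
(For what it’s worth, replication seems to be a per-pool setting, so the 984G above is raw capacity; each pool keeps “size” copies of every object, which is where the usable space gets divided. A quick sketch of the knobs, using the data pool as an example since that was one of the default pools back then; adjust the pool name to whatever your setup has:)

[ceph@node01 ~]$ sudo ceph osd dump | grep pool          # lists the pools, including their replication ("size") setting
[ceph@node01 ~]$ sudo ceph osd pool get data size        # replication factor of a single pool
[ceph@node01 ~]$ sudo ceph osd pool set data size 3      # keep three copies of everything in that pool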

Key to this success was: a) understanding how ceph actually works. Blindly copying random quick-start guides gave the impression that ceph would magically work on raw devices. Instead, it just uses common Linux filesystems: ext4 in my case (even though I hate it), xfs usually, btrfs for those who like their steak bloody rare.
b) Trying to actually follow the Quickstart document. Since I now knew that ceph just stores its data on standard filesystems, ceph-deploy not really dealing with LVM wasn’t an issue anymore. ceph-deploy still reports some mkfs.xfs activity on the mounted path, but this obviously can safely be ignored ;) So, if you, like me, need to use LVM, don’t let ceph-deploy prepare your raw device; prepare it beforehand, mount it, and point ceph-deploy at the mounted directory (a rough sketch follows below). Magically, it will work ;)
c) If you encounter errors: google! My setup was impacted by issue 5195, issue 6552 and issue 6854, to name a few. The last one is a nasty one: the documentation tells you to do 1) and then 2), but 2) does not necessarily heal the filesystem changes done by 1), so you end up with really odd errors. My rule of thumb: if you’re stuck somewhere, re-image and start from scratch on your current path!
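
To illustrate b): a rough sketch of the LVM-then-ceph-deploy dance. This is not copied verbatim from my shell history, so take the VG/LV names and the mount path as placeholders:

[root@node10 ~]# lvcreate -L 197G -n ceph vg00           # carve an LV out of the existing volume group
[root@node10 ~]# mkfs.ext4 /dev/vg00/ceph                # or mkfs.xfs / mkfs.btrfs, whatever you fancy
[root@node10 ~]# mkdir -p /var/local/osd0                # directory ceph-deploy will get pointed at
[root@node10 ~]# mount /dev/vg00/ceph /var/local/osd0

And then, from the admin node, hand ceph-deploy the mounted directory instead of a raw device:

[ceph@node01 ~]$ ceph-deploy osd prepare node10:/var/local/osd0
[ceph@node01 ~]$ ceph-deploy osd activate node10:/var/local/osd0

If memory serves, the docs of that era also recommend setting filestore xattr use omap = true in ceph.conf when the OSDs sit on ext4.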

If someone asked me about ceph these days, I might bite their nose off and tell them “that’s how it feels to be on your own with ceph”. (Which also means: really, please, do not ask me in real life! ;)) Next steps will be to use this ceph setup for OpenStack, both for block storage and for object storage; after all, that’s what it was built for, wasn’t it? :)

Mounting and using the file system at least actually works ;)

[root@node10 ~]# ceph-fuse /mnt/
ceph-fuse[20747]: starting ceph client
ceph-fuse[20747]: starting fuse
[root@node10 ~]# date >/mnt/foobar
[root@node10 ~]# cat /mnt/foobar
Tue Dec 17 23:52:35 CET 2013
[root@node10 ~]# ls -la /mnt
total 5
drwxr-xr-x.  1 root root    0 Dec 17 23:52 .
dr-xr-xr-x. 22 root root 4096 Dec 17 11:13 ..
-rw-r--r--.  1 root root   29 Dec 17 23:52 foobar
[root@node10 ~]# df -hP
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lv00  9.7G  1.5G  7.7G  17% /
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sda1             485M   67M  393M  15% /boot
/dev/mapper/vg00-ceph  197G  1.2G  186G   1% /var/lib/ceph/osd/ceph-3
ceph-fuse             985G   56G  929G   6% /mnt
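
And if you’d rather use the kernel client than fuse, something along these lines should work as well (the secretfile path is just an example; the monitor address is node01’s, taken from the monmap above):

[root@node10 ~]# mount -t ceph 10.99.2.1:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret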