Introduction to Ceph and How to Use It with qemu and OpenStack

Notes on common Ceph commands, Ceph snapshot management, using Ceph RBD block devices with qemu, and integrating RBD with OpenStack Cinder and Glance.

Daemon management commands

  • Check cluster status

ceph health [detail] or ceph -w
  • Ceph daemon operations

Start/stop/restart the Ceph daemons (current node)

sudo /etc/init.d/ceph start/stop/restart

Start/stop/restart the Ceph daemons (all nodes)

sudo /etc/init.d/ceph -a start/stop/restart

Ceph daemon types: mon, osd, mds

  • Restart services by daemon type

Syntax:

sudo /etc/init.d/ceph [options] [start|restart] [daemonType|daemonID]

Examples:

Start/stop all Ceph daemons of a given type (current node)

sudo /etc/init.d/ceph start/stop mon/osd/mds

Start/stop all Ceph daemons of a given type (all nodes)

sudo /etc/init.d/ceph -a start/stop mon/osd/mds

Or, using the service wrapper:

sudo service ceph [options] [start|restart] [daemonType|daemonID]
  • Start or stop a single daemon

Syntax:

sudo /etc/init.d/ceph start/stop {daemon-type}.{instance}

sudo /etc/init.d/ceph -a start/stop {daemon-type}.{instance}

sudo service ceph start {daemon-type}.{instance}

Examples:

Current node:

sudo /etc/init.d/ceph start osd.0

All nodes:

sudo /etc/init.d/ceph -a start osd.0
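
The service wrapper shown in the syntax above works the same way; for example, starting the same daemon on the current node:

sudo service ceph start osd.0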

Monitoring commands

  • Monitoring covers: osd, mon, pg, and mds

Commands in interactive mode:

# ceph
ceph> health
ceph> status
ceph> quorum_status
ceph> mon_status
ceph quorum_status --format json-pretty
  • Watch cluster status

ceph -w
  • Check cluster resource usage

ceph df
ceph status
ceph -s
  • Check OSD status

ceph osd stat
ceph osd dump
ceph osd tree
  • Check monitor status

ceph mon stat
ceph mon dump

Check the monitor quorum (election) status:

ceph quorum_status
  • Check MDS status

ceph mds stat
ceph mds dump

Note: metadata servers (MDS) have two state sets: up|down and active|inactive

RBD commands

Preparation

ceph osd pool create xiexianbin 16
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
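
To confirm that the pool exists and the placement-group settings took effect, something like the following can be used:

ceph osd lspools
ceph osd pool get xiexianbin pg_num
ceph osd pool get rbd pgp_num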
  • Create a block device image

rbd create bar --size 10240 --pool xiexianbin
  • List block device images

rbd ls {poolname}
rbd ls xiexianbin
  • Show detailed information for a given image

rbd --image {image-name} -p {pool-name} info
rbd --image bar -p xiexianbin info
  • Resize a block device image

rbd resize --image bar -p xiexianbin --size 20480
  • Remove a block device image

rbd rm {image-name} -p {pool-name}
rbd rm bar -p xiexianbin

Kernel module operations

Preparation:

rbd create image-block --size 10240 --pool xiexianbin
  • List all images in a given pool

rbd ls/list {poolname}
  • Map a block device

sudo rbd map {image-name} --pool {pool-name} --id {user-name} --keyring {keyring_path}
sudo rbd map --pool xiexianbin image-block

Known issue: mapping fails when the image name is image-block-1; the failing command is:

sudo rbd map image-block-1 --pool xiexianbin    # fails
  • Show mapped block devices

rbd showmapped
  • Unmap a block device

sudo rbd unmap /dev/rbd/{poolname}/{imagename}
sudo rbd unmap /dev/rbd1
  • Mount commands

sudo mkfs.ext4 /dev/rbd1
mkdir /home/ceph/xxb/rbd1
sudo mount /dev/rbd1 /home/ceph/xxb/rbd1
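
To detach the device cleanly, reverse the steps: unmount first, then unmap, and verify with showmapped (a sketch reusing the paths from above):

sudo umount /home/ceph/xxb/rbd1
sudo rbd unmap /dev/rbd1
rbd showmapped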

Snapshots

Note: when creating a snapshot of an image, the image should be in a consistent, quiescent state (i.e. no I/O in flight); one way to achieve this is shown in the sketch after the creation examples below.

  • Create a snapshot

rbd --pool {pool-name} snap create --snap {snap-name} {image-name}
rbd snap create {pool-name}/{image-name}@{snap-name}

For example:

rbd --pool xiexianbin snap create --snap bak1 image-block
rbd snap create xiexianbin/image-block@bak2
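
If the image is mounted through the kernel client as in the previous section, one way to satisfy the quiescence note above is to freeze the filesystem around the snapshot. A sketch reusing the earlier mount point and a hypothetical snapshot name bak3:

sudo fsfreeze -f /home/ceph/xxb/rbd1           # block new writes and flush the filesystem
rbd snap create xiexianbin/image-block@bak3    # hypothetical snapshot name
sudo fsfreeze -u /home/ceph/xxb/rbd1           # resume I/O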
  • List snapshots

rbd --pool {pool-name} snap ls {image-name}
rbd snap ls {pool-name}/{image-name}
rbd --pool xiexianbin snap ls/list image-block
rbd snap ls xiexianbin/image-block
  • Roll back to a snapshot

rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name}
rbd snap rollback {pool-name}/{image-name}@{snap-name}
rbd --pool xiexianbin snap rollback --snap bak1 image-block
rbd snap rollback xiexianbin/image-block@bak2
  • Delete a snapshot

rbd --pool {pool-name} snap rm --snap {snap-name} {image-name}
rbd snap rm {pool-name}/{image-name}@{snap-name}
rbd --pool xiexianbin snap rm --snap bak2 image-block
rbd snap rm xiexianbin/image-block@bak1
  • Purge all snapshots of an image

rbd --pool {pool-name} snap purge {image-name}
rbd snap purge {pool-name}/{image-name}
rbd --pool xiexianbin snap purge image-block
rbd snap purge xiexianbin/image-block

qemu

  • Common commands

qemu-img {command} [options] rbd:glance-pool/maipo:id=glance:conf=/etc/ceph/ceph.conf

qemu-img create -f raw rbd:{pool-name}/{image-name} {size}

qemu-img create -f raw rbd:data/foo 10G

qemu-img resize rbd:{pool-name}/{image-name} {size}

qemu-img resize rbd:data/foo 10G

qemu-img info rbd:{pool-name}/{image-name}

qemu-img info rbd:data/foo
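
As in the first syntax line of this section, a cephx user id and a ceph.conf path can be appended to the rbd: spec; for example (assuming the admin user and the default config path):

qemu-img info rbd:xiexianbin/bar:id=admin:conf=/etc/ceph/ceph.conf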
  • Running QEMU with RBD

You can use qemu-img to convert existing virtual machine images to Ceph block device images. For example, if you have a qcow2 image, you could run:

qemu-img convert -f qcow2 -O raw debian_squeeze.qcow2 rbd:data/squeeze

To run a virtual machine booting from that image, you could run:

qemu -m 1024 -drive format=raw,file=rbd:data/squeeze

RBD caching can significantly improve performance. Since QEMU 1.2, QEMU’s cache options control librbd caching:

qemu -m 1024 -drive format=rbd,file=rbd:data/squeeze,cache=writeback
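
librbd caching itself is configured in the [client] section of ceph.conf; a minimal sketch (option names per the Ceph documentation, defaults vary by release):

[client]
# keep writeback caching safe for guests that never send flushes
rbd cache = true
rbd cache writethrough until flush = true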

libvirt

Configure Ceph

http://docs.ceph.com/docs/v0.94.4/rbd/libvirt/

  • Create a pool

ceph osd pool create libvirt-pool 64 64
  • Create a Ceph User

ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'
  • Use QEMU to create an image in your RBD pool

qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G
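
To attach that image to a guest, the libvirt domain XML needs a network disk backed by rbd. A sketch following the referenced guide, where {monitor-host} and the client.libvirt secret UUID are placeholders and cephx authentication is assumed:

<disk type='network' device='disk'>
  <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
    <host name='{monitor-host}' port='6789'/>
  </source>
  <auth username='libvirt'>
    <!-- UUID of a virsh secret holding the client.libvirt key -->
    <secret type='ceph' uuid='{client.libvirt-secret-uuid}'/>
  </auth>
  <target dev='vdb' bus='virtio'/>
</disk>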

Integrating RBD with OpenStack

Create pools

ceph osd pool create volumes 32 32
ceph osd pool create images 16 16
ceph osd pool create backups 32 32
ceph osd pool create vms 32 32

Configure the Ceph clients on the OpenStack nodes

http://docs.ceph.com/docs/v0.94.4/rbd/rbd-openstack/

ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
  • On the glance-api node, install the Python RBD bindings

sudo yum install python-rbd

On the nova-compute, cinder-backup, and cinder-volume nodes, install the ceph package

sudo yum install ceph
  • Create users

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
  • Copy the keyrings to the appropriate nodes

ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
  • Copy the cinder key to the compute nodes

ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
  • Configure the libvirt secret

# uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
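
With the keyrings and the libvirt secret in place, the remaining step in the referenced guide is pointing Glance, Cinder, and Nova at the pools created above. A minimal sketch of the relevant options (section names and defaults vary by OpenStack release; the UUID is the one defined above):

# /etc/glance/glance-api.conf
[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# /etc/cinder/cinder.conf
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_pool = backups

# /etc/nova/nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337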

Common ceph auth commands

  • List the authenticated users and their keys in the cluster

ceph auth list
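
A single user's key and capabilities can be fetched by name, for example:

ceph auth get client.glance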
  • Show a daemon's full runtime configuration

ceph daemon mon.compute-192-168-2-202 config show | more
  • Show the location of the Ceph log file

ceph-conf --name mon.compute-192-168-2-202 --show-config-value log_file
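
The same value can also be read from a running daemon through the admin socket, using the same mon id as above:

ceph daemon mon.compute-192-168-2-202 config get log_file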

Reference:

http://docs.ceph.com/docs/master/rados/operations/operating/#running-ceph-with-sysvinit
