Ceph Command Reference
Notes on common Ceph commands: Ceph snapshot management, using Ceph RBD block devices with QEMU, and attaching RBD to OpenStack Cinder and Glance.
Daemon commands
Check cluster status
ceph health [detail]
ceph -w
Ceph daemon operations
Start/stop/restart Ceph daemons (current node):
sudo /etc/init.d/ceph start|stop|restart
Start/stop/restart Ceph daemons (all nodes):
sudo /etc/init.d/ceph -a start|stop|restart
Ceph daemon types: mon, osd, mds
Restart services by daemon type
Syntax:
sudo /etc/init.d/ceph [options] [start|restart] [daemonType|daemonID]
Examples:
Start/stop every daemon of a given type (current node):
sudo /etc/init.d/ceph start|stop mon|osd|mds
Start/stop every daemon of a given type (all nodes):
sudo /etc/init.d/ceph -a start|stop mon|osd|mds
Alternatively:
sudo service ceph [options] [start|restart] [daemonType|daemonID]
Start or stop a single daemon
Syntax:
sudo /etc/init.d/ceph start|stop {daemon-type}.{instance}
or
sudo /etc/init.d/ceph -a start|stop {daemon-type}.{instance}
or
sudo service ceph start {daemon-type}.{instance}
Examples:
Current node:
sudo /etc/init.d/ceph start osd.0
All nodes:
sudo /etc/init.d/ceph -a start osd.0
Monitoring commands
Monitoring covers: osd, mon, pg, mds
Interactive mode:
# ceph
ceph> health
ceph> status
ceph> quorum_status
ceph> mon_status
ceph quorum_status --format json-pretty
Watch cluster state
ceph -w
Check cluster resource usage
ceph df
ceph status
ceph -s
Check OSD status
ceph osd stat
ceph osd dump
ceph osd tree
ceph osd df
Check MON status
ceph mon stat
ceph mon dump
Check MON election (quorum) status
ceph quorum_status
Check MDS status
ceph mds stat
ceph mds dump
Note: metadata servers have two state pairs: up|down and active|inactive
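Most of the monitoring commands above also accept --format json, which is easier to consume from scripts than the human-readable default. A minimal sketch parsing quorum_status output (the sample document below is trimmed and hypothetical; the field names are the real ones):

```python
import json

# Trimmed, hypothetical sample of `ceph quorum_status --format json` output.
sample = """
{
  "election_epoch": 10,
  "quorum": [0, 1, 2],
  "quorum_names": ["a", "b", "c"],
  "quorum_leader_name": "a"
}
"""

status = json.loads(sample)
print("leader:", status["quorum_leader_name"])   # the mon that won the election
print("quorum size:", len(status["quorum"]))     # how many mons are in quorum
print("members:", ", ".join(status["quorum_names"]))
```

In practice you would feed in real output, e.g. via subprocess.check_output(["ceph", "quorum_status", "--format", "json"]).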
RBD commands
Preparation
ceph osd pool create xiexianbin 16
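The PG counts used in this preparation (16 here, 128 below) are usually sized by a rule of thumb: roughly (number of OSDs × 100) / replica count, rounded up to a power of two. A sketch of that calculation (the function name is mine, not a Ceph tool):

```python
def recommended_pg_num(osd_count, replica_count, target_pgs_per_osd=100):
    """Rule of thumb: (OSDs * target PGs per OSD) / replicas, rounded up to a power of two."""
    raw = osd_count * target_pgs_per_osd / replica_count
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

print(recommended_pg_num(9, 3))  # 9*100/3 = 300 -> next power of two: 512
```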
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
- Create a block device image
rbd create bar --size 10240 --pool xiexianbin
- List block device images
rbd ls {poolname}
rbd ls xiexianbin
- Show details of an image
rbd --image {image-name} -p {pool-name} info
rbd --image bar -p xiexianbin info
- Resize a block device image
rbd resize --image bar -p xiexianbin --size 20480
- Remove a block device image
rbd rm {image-name} -p {pool-name}
rbd rm bar -p xiexianbin
Kernel module operations
Preparation:
rbd create image-block --size 10240 --pool xiexianbin
- List all images in a pool
rbd ls {poolname} (or: rbd list {poolname})
- Map a block device
sudo rbd map {image-name} --pool {pool-name} --id {user-name} --keyring {keyring_path}
sudo rbd map --pool xiexianbin image-block
Known issue: mapping fails when the image name is image-block-1:
sudo rbd map image-block-1 --pool xiexianbin (fails)
- Show mapped block devices
rbd showmapped
- Unmap a block device
sudo rbd unmap /dev/rbd/{poolname}/{imagename}
sudo rbd unmap /dev/rbd1
- Make a filesystem and mount
sudo mkfs.ext4 /dev/rbd1
mkdir /home/ceph/xxb/rbd1
sudo mount /dev/rbd1 /home/ceph/xxb/rbd1
Snapshots
Note: when creating a snapshot, the image should be in a consistent state (no I/O in flight), otherwise the snapshot may be inconsistent.
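Each snapshot operation below comes in two equivalent spellings: explicit --pool/--snap flags, or a single {pool-name}/{image-name}@{snap-name} spec. A tiny helper (function names are mine) that assembles both forms:

```python
def snap_spec(pool, image, snap):
    """Build the {pool}/{image}@{snap} spec accepted by `rbd snap` subcommands."""
    return f"{pool}/{image}@{snap}"

def snap_create_cmds(pool, image, snap):
    """The two equivalent `rbd snap create` spellings, as argv lists."""
    return [
        ["rbd", "--pool", pool, "snap", "create", "--snap", snap, image],
        ["rbd", "snap", "create", snap_spec(pool, image, snap)],
    ]

print(snap_spec("xiexianbin", "image-block", "bak1"))  # xiexianbin/image-block@bak1
```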
Create
rbd --pool {pool-name} snap create --snap {snap-name} {image-name}
rbd snap create {pool-name}/{image-name}@{snap-name}
Examples:
rbd --pool xiexianbin snap create --snap bak1 image-block
rbd snap create xiexianbin/image-block@bak2
List
rbd --pool {pool-name} snap ls {image-name}
rbd snap ls {pool-name}/{image-name}
rbd --pool xiexianbin snap ls image-block
rbd snap ls xiexianbin/image-block
Rollback
rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name}
rbd snap rollback {pool-name}/{image-name}@{snap-name}
rbd --pool xiexianbin snap rollback --snap bak1 image-block
rbd snap rollback xiexianbin/image-block@bak2
Delete
rbd --pool {pool-name} snap rm --snap {snap-name} {image-name}
rbd snap rm {pool-name}/{image-name}@{snap-name}
rbd --pool xiexianbin snap rm --snap bak2 image-block
rbd snap rm xiexianbin/image-block@bak1
Purge all snapshots
rbd --pool {pool-name} snap purge {image-name}
rbd snap purge {pool-name}/{image-name}
rbd --pool xiexianbin snap purge image-block
rbd snap purge xiexianbin/image-block
QEMU
Common commands
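qemu-img addresses RBD images via rbd: URIs, optionally carrying colon-separated key=value options such as id= and conf= (as in the first command below). A small builder sketch (the function name is mine):

```python
def rbd_uri(pool, image, **options):
    """Assemble a qemu `rbd:` URI, e.g. rbd:data/foo or rbd:pool/img:id=glance:conf=..."""
    uri = f"rbd:{pool}/{image}"
    for key, value in options.items():  # kwargs keep insertion order in Python 3.7+
        uri += f":{key}={value}"
    return uri

print(rbd_uri("data", "foo"))  # rbd:data/foo
print(rbd_uri("glance-pool", "maipo", id="glance", conf="/etc/ceph/ceph.conf"))
```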
qemu-img {command} [options] rbd:glance-pool/maipo:id=glance:conf=/etc/ceph/ceph.conf
qemu-img create -f raw rbd:{pool-name}/{image-name} {size}
qemu-img create -f raw rbd:data/foo 10G
qemu-img resize rbd:{pool-name}/{image-name} {size}
qemu-img resize rbd:data/foo 10G
qemu-img info rbd:{pool-name}/{image-name}
qemu-img info rbd:data/foo
Running QEMU with RBD
You can use qemu-img to convert existing virtual machine images to Ceph block device images. For example, if you have a qcow2 image, you could run:
qemu-img convert -f qcow2 -O raw debian_squeeze.qcow2 rbd:data/squeeze
To run a virtual machine booting from that image, you could run:
qemu -m 1024 -drive format=raw,file=rbd:data/squeeze
RBD caching can significantly improve performance. Since QEMU 1.2, QEMU's cache options control librbd caching:
qemu -m 1024 -drive format=rbd,file=rbd:data/squeeze,cache=writeback
libvirt
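The end state of the libvirt configuration below is a domain whose disk points at the RBD image and authenticates through a libvirt secret (created with virsh secret-define, as shown in the OpenStack section later). A hedged sketch of the disk element; monitor host and secret UUID are placeholders:

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
    <host name='{monitor-host}' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='{secret-uuid}'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```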
Configure Ceph
http://docs.ceph.com/docs/v0.94.4/rbd/libvirt/
- Create a pool
ceph osd pool create libvirt-pool 64 64
- Create a Ceph user
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'
- Use QEMU to create an image in the RBD pool
qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G
Attaching RBD to OpenStack
Create pools
ceph osd pool create volumes 32 32
ceph osd pool create images 16 16
ceph osd pool create backups 32 32
ceph osd pool create vms 32 32
Configure the OpenStack Ceph clients
http://docs.ceph.com/docs/v0.94.4/rbd/rbd-openstack/
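For orientation: the steps below end with Glance and Cinder pointing at the pools created above. The relevant service config fragments look roughly like this (a sketch based on the guide above; values are examples, and the secret UUID is the one defined further down):

```ini
# glance-api.conf
[DEFAULT]
default_store = rbd
rbd_store_user = glance
rbd_store_pool = images

# cinder.conf
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```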
ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
- On the glance-api host, install the rbd bindings
sudo yum install python-rbd
On the nova-compute, cinder-backup and cinder-volume hosts, install ceph
sudo yum install ceph
- Create users
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
- Copy the keyrings to the target hosts
ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
- Get the key
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
- Configure the libvirt secret
# uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Miscellaneous
Show a daemon's full runtime configuration
ceph daemon mon.compute-192-168-2-202 config show | more
Show where the Ceph logs are written
ceph-conf --name mon.compute-192-168-2-202 --show-config-value log_file
References:
http://docs.ceph.com/docs/master/rados/operations/operating/#running-ceph-with-sysvinit