
Building an enterprise Ceph cluster on CentOS 7

This article walks through building an enterprise-grade Ceph cluster on CentOS 7 with ceph-deploy, step by step.

Cluster plan

This example deploys the cluster with ceph-deploy.

Hostname     Disks     Public IP / Internal IP              Roles                OS version
ceph-node01  3 x 50G   192.168.3.101/24 / 172.16.1.101/16   mon osd ceph-deploy  CentOS Linux release 7.9.2009
ceph-node02  3 x 50G   192.168.3.102/24 / 172.16.1.102/16   mon osd              CentOS Linux release 7.9.2009
ceph-node03  3 x 50G   192.168.3.103/24 / 172.16.1.103/16   mon osd              CentOS Linux release 7.9.2009
ceph-node04  3 x 50G   192.168.3.104/24 / 172.16.1.104/16   mgr                  CentOS Linux release 7.9.2009
ceph-node05  3 x 50G   192.168.3.105/24 / 172.16.1.105/16   mgr                  CentOS Linux release 7.9.2009
ceph-node06  3 x 50G   192.168.3.106/24 / 172.16.1.106/16   node                 CentOS Linux release 7.9.2009
ceph-node07  3 x 50G   192.168.3.107/24 / 172.16.1.107/16   node                 CentOS Linux release 7.9.2009

Base system preparation

1. Hostname planning:
ip=`ip a |grep -w "global eth0" |awk -F 'inet ' '{print$2}'|awk -F '/' '{print$1}'`
Ipformat=`echo ${ip//./-}`
startwith="ceph"
hostname="$startwith-$Ipformat"
echo $hostname
hostnamectl set-hostname $hostname
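The core of the script above is the bash parameter expansion `${ip//./-}`, which replaces every dot in the address with a dash. A minimal, self-contained sketch of the same derivation, with a hard-coded sample IP standing in for the value parsed from `ip a`:

```shell
#!/bin/bash
# Sample address; in the real script this comes from `ip a`.
ip="172.16.1.101"

# ${ip//./-}: the doubled slash replaces ALL occurrences of "." with "-".
ip_format="${ip//./-}"

prefix="ceph"
new_hostname="${prefix}-${ip_format}"
echo "$new_hostname"   # ceph-172-16-1-101
```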


2. Disable SELinux, NetworkManager and firewalld:

setenforce 0
getenforce
sed -i 's#^SELINUX=.*$#SELINUX=disabled#g' /etc/selinux/config
systemctl stop NetworkManager.service 
systemctl disable NetworkManager.service 
systemctl stop firewalld
systemctl disable firewalld



3. Configure the yum repositories (EPEL, base, and Ceph):
yum install -y  epel-release
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# Ceph repository
rpm -ivh https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum makecache



4. Time synchronization [internal NTP]
   ceph-admin acts as the time server; the other servers sync from it.

  4.1 Install NTP [run on all nodes]:
[ceph@ceph-mon01 ~]$  exit     # drop from the ceph user back to root
[root@ceph-mon01 ~]#  yum install -y ntp
[root@ceph-mon01 ~]#  systemctl start ntpd
[root@ceph-mon01 ~]#  systemctl enable ntpd
[root@ceph-mon01 ~]#  timedatectl set-timezone Asia/Shanghai    # set the timezone to Shanghai


# Time server configuration [for simplicity every node uses the Aliyun NTP servers; in an isolated network, point the nodes at the internal time server instead]:
sed -i  "s/^server 0.*/server ntp1.aliyun.com/g" /etc/ntp.conf
sed -i  "s/^server 1/#&/g" /etc/ntp.conf
sed -i  "s/^server 2/#&/g" /etc/ntp.conf
sed -i  "s/^server 3/#&/g" /etc/ntp.conf
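These substitutions rely on two sed features: the first rewrites the entire matched line, and the later ones use `&` (the whole matched text) to prefix a `#`, i.e. to comment the line out. The effect can be checked safely against a throwaway copy of the stock server lines (sample contents below are illustrative):

```shell
#!/bin/bash
# Build a throwaway file containing the default CentOS ntp.conf server lines.
conf=$(mktemp)
cat > "$conf" <<'EOF'
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
EOF

# Same edits as above: replace server 0, comment out servers 1-3 ("&" = matched text).
sed -i "s/^server 0.*/server ntp1.aliyun.com/g" "$conf"
sed -i "s/^server 1/#&/g" "$conf"
sed -i "s/^server 2/#&/g" "$conf"
sed -i "s/^server 3/#&/g" "$conf"

cat "$conf"
# server ntp1.aliyun.com
# #server 1.centos.pool.ntp.org iburst
# #server 2.centos.pool.ntp.org iburst
# #server 3.centos.pool.ntp.org iburst
rm -f "$conf"
```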

# Restart ntp:
systemctl restart ntpd
ntpq -pn


# Check synchronization status with ntpq -pn:
[root@ceph-admin ~]# ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*120.25.115.20   10.137.53.7      2 u    1   64    1   36.838    5.072   0.622


# Sync from an external time source via a cron job [optional]
# Note: a plain `sudo echo ... >> file` does not run the redirect as root, so append
# with tee; and use ntpdate for the periodic sync (ntpq -pn only queries peer status)
echo '*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com' | sudo tee -a /var/spool/cron/root
sudo systemctl restart crond.service
crontab -l



# 4.2 Raise file-descriptor and process limits [all servers]
cat >> /etc/security/limits.conf <<EOF
* soft     nproc          102400
* hard     nproc          102400
* soft     nofile         102400
* hard     nofile         102400
root soft  nproc          102400
root hard  nproc          102400
root soft  nofile         102400
root hard  nofile         102400
EOF
# limits.conf is applied by pam_limits at the next login (sysctl -p only reloads
# /etc/sysctl.conf and has no effect here); verify in a new session with: ulimit -n


# 5. Configure /etc/hosts:
192.168.3.101 ceph-mon01.example ceph-mon01  ceph-osd01 ceph-deploy 
192.168.3.102 ceph-mon02.example ceph-mon02  ceph-osd02
192.168.3.103 ceph-mon03.example ceph-mon03  ceph-osd03
192.168.3.104 ceph-mgr01.example ceph-mgr01
192.168.3.105 ceph-mgr02.example ceph-mgr02
192.168.3.106 ceph-node01.example ceph-node01
192.168.3.107 ceph-node02.example ceph-node02
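With many nodes, the block above can also be generated from two parallel bash arrays instead of typed by hand (a sketch; the arrays mirror the plan above, and the lines are printed for review before being appended to /etc/hosts):

```shell
#!/bin/bash
# Parallel arrays: public IPs and the host aliases from the cluster plan.
ips=(192.168.3.101 192.168.3.102 192.168.3.103 192.168.3.104 192.168.3.105 192.168.3.106 192.168.3.107)
names=("ceph-mon01.example ceph-mon01 ceph-osd01 ceph-deploy"
       "ceph-mon02.example ceph-mon02 ceph-osd02"
       "ceph-mon03.example ceph-mon03 ceph-osd03"
       "ceph-mgr01.example ceph-mgr01"
       "ceph-mgr02.example ceph-mgr02"
       "ceph-node01.example ceph-node01"
       "ceph-node02.example ceph-node02")

# Emit one hosts line per node; redirect with >> /etc/hosts once satisfied.
for i in "${!ips[@]}"; do
  printf '%s %s\n' "${ips[$i]}" "${names[$i]}"
done
```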
 
# 6. Set the hostnames:
# Set each hostname to match the hosts file; be sure to use the FQDN-style name,
# e.g. ceph-mon01.example, or the install will fail later
# Example for 192.168.3.101; configure the other nodes the same way (not listed one by one)
hostnamectl set-hostname ceph-mon01.example
bash



# 7. Ceph user and sudo configuration:

# 7.1 Create the ceph user [all machines]:
 groupadd -r -g 2001 ceph && useradd -r -m -s /bin/bash -u 2001 -g 2001 ceph
 echo '123456'|passwd --stdin ceph
  

 # 7.2 sudo configuration [all machines]:
  echo 'ceph ALL=(root) NOPASSWD:ALL' |sudo tee /etc/sudoers.d/ceph
  chmod 0440 /etc/sudoers.d/ceph
  cat /etc/sudoers.d/ceph
  
  

# 8. Passwordless SSH [use the internal IPs; configure passwordless login from the
#    deploy node to all cluster hosts; run this on the deploy node as the ceph user]

# Switch user and define the host list
su - ceph
CEPH=(172.16.1.101 172.16.1.102 172.16.1.103 172.16.1.104 172.16.1.105 172.16.1.106 172.16.1.107)

# Distribute the key
ssh-keygen
for ip in ${CEPH[@]}; do
  ssh-copy-id -o StrictHostKeyChecking=no ${ip}
done

# 8.1 Verify [run in the same shell, so the CEPH array is still defined]
for ip in ${CEPH[@]}; do
  ssh ${ip} 'hostname'
done

Initialize the cluster:

# 1. node1 acts as the ceph-deploy node [install on all nodes; run this command as root]:
# If you skip this step, the later initialization installs it for you, but wastes a lot of time
su - root
yum install -y ceph-deploy python-setuptools python2-subprocess3


# 2. Switch to the ceph user for the initialization:
[root@ceph-node01 ~]# su - ceph
[ceph@ceph-mon01 ~]$ mkdir ceph-cluster
[ceph@ceph-mon01 ~]$ cd ceph-cluster/
# [Ubuntu additionally needs python2.7; CentOS does not]


# 3. Initialize the ceph cluster:
# The mon node is addressed by hostname, so check what the hostname is first,
# then install using that hostname
[root@ceph-node02 ~]# uname -n
ceph-node02

[ceph@ceph-node01 ceph-cluster]$ ceph-deploy --help
optional arguments:
  --no-ssh-copykey      do not attempt to copy SSH keys
  --fsid FSID           provide an alternate FSID for ceph.conf generation
  --cluster-network CLUSTER_NETWORK  # internal cluster network
  --public-network PUBLIC_NETWORK    # public network, used by clients, with internet access

# Parameters in this plan:
#   the 172.16.1.0/16 network is the --cluster-network [internal access only]
#   the 192.168.3.0/24 network is the --public-network [internet-facing]



# Deploy the mon
[ceph@ceph-mon01 ceph-cluster]$ grep mon /etc/hosts
192.168.3.101 ceph-mon01.example ceph-mon01  ceph-osd01 ceph-deploy
192.168.3.102 ceph-mon02.example ceph-mon02  ceph-osd02
192.168.3.103 ceph-mon03.example ceph-mon03  ceph-osd03


# Create the cluster with the network settings
[ceph@ceph-node01 ceph-cluster]$  ceph-deploy new --cluster-network 172.16.1.0/16 --public-network 192.168.3.0/24 ceph-mon01


# Output:
[ceph@ceph-node01 ceph-cluster]$ ceph-deploy new --cluster-network 172.16.1.0/16 --public-network 192.168.3.0/24 ceph-mon01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new --cluster-network 172.16.1.0/16 --public-network 192.168.3.0/24 ceph-mon01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7efe0a1d2d70>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7efe0994a878>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-mon01']
[ceph_deploy.cli][INFO  ]  public_network                : 192.168.3.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 172.16.1.0/16
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-mon01][DEBUG ] connected to host: ceph-node01 
[ceph-mon01][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-mon01
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01 
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] find the location of an executable
[ceph-mon01][INFO  ] Running command: sudo /usr/sbin/ip link show
[ceph-mon01][INFO  ] Running command: sudo /usr/sbin/ip addr show
[ceph-mon01][DEBUG ] IP addresses found: [u'192.168.3.102', u'172.16.1.102']
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon01
[ceph_deploy.new][DEBUG ] Monitor ceph-mon01 at 192.168.3.102
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-mon01']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'192.168.3.102']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...  # this step generates the key and conf
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...          # this step generates the key and conf


# Result:
# This step produced three files: the mon keyring, ceph.conf, and the deploy log
[ceph@ceph-mon01 ceph-cluster]$ ll
total 12
-rw-rw-r-- 1 ceph ceph  265 Sep  1 22:41 ceph.conf
-rw-rw-r-- 1 ceph ceph 3174 Sep  1 22:41 ceph-deploy-ceph.log
-rw------- 1 ceph ceph   73 Sep  1 22:41 ceph.mon.keyring



# Inspect the two generated configuration files:
1. ceph.mon.keyring:
[ceph@ceph-node01 ceph-cluster]$ cat ceph.mon.keyring
[mon.]
key = AQAofCxhAAAAABAAP9Lm99bU7k28i/omhv13Jw==    # mon key
caps mon = allow *                                # capabilities


2. ceph.conf: 

[ceph@ceph-mon01 ceph-cluster]$ cat ceph.conf
[global]
fsid = ed9c5f24-611a-44ca-81e8-f3d035e494e8  # cluster ID
public_network = 192.168.3.0/24              # public network
cluster_network = 172.16.1.0/16              # internal network
mon_initial_members = ceph-mon01             # mon hostname(s)
mon_host = 192.168.3.101                     # mon IP(s)
auth_cluster_required = cephx                # authentication settings
auth_service_required = cephx
auth_client_required = cephx
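
For later scripting it can be handy to pull individual fields out of ceph.conf; a small awk-based sketch, run here against an inline copy of the file above (the `get` helper is ad hoc, not part of ceph):

```shell
#!/bin/bash
# Inline copy of the generated ceph.conf (values from the run above).
conf=$(mktemp)
cat > "$conf" <<'EOF'
[global]
fsid = ed9c5f24-611a-44ca-81e8-f3d035e494e8
public_network = 192.168.3.0/24
cluster_network = 172.16.1.0/16
mon_initial_members = ceph-mon01
mon_host = 192.168.3.101
EOF

# Look up a "key = value" pair: match the key in column 1, print the value.
get() { awk -F' *= *' -v k="$1" '$1 == k {print $2}' "$conf"; }

echo "fsid:     $(get fsid)"       # fsid:     ed9c5f24-611a-44ca-81e8-f3d035e494e8
echo "mon_host: $(get mon_host)"   # mon_host: 192.168.3.101
rm -f "$conf"
```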

# All nodes must be reachable over the internal network, or problems will follow;
# verify with ping before running the next step:

[ceph@ceph-mon01 ceph-cluster]$ ping 172.16.1.102
PING 172.16.1.102 (172.16.1.102) 56(84) bytes of data.
64 bytes from 172.16.1.102: icmp_seq=1 ttl=64 time=0.224 ms

Initialize the mon node

# 1. ceph-deploy quick reference:
#---------------------------------------------------------------------------#
ceph-deploy install    # install the ceph packages on remote hosts; --release picks the version
ceph-deploy new        # create a new cluster
ceph-deploy rgw        # manage RGW daemons (radosgw, the object storage gateway)
ceph-deploy mgr        # manage MGR daemons (ceph-mgr, the Ceph Manager daemon)
ceph-deploy mds        # manage MDS daemons (ceph metadata server)
ceph-deploy mon        # manage MON daemons (ceph-mon, the Ceph monitor)
ceph-deploy gatherkeys # fetch the authentication keys used when new MON/OSD/MDS nodes join


ceph-deploy disk zap   # wipe a disk and prepare it for ceph (effectively formats it for ceph)
ceph-deploy disk list  # list the disks of a remote server; `osd list` lists its OSDs
ceph-deploy osd        # manage remote disks, typically to add them to the cluster as OSDs
ceph-deploy repo       # manage the ceph repositories
ceph-deploy config     # push ceph.conf to remote hosts [rarely used]
ceph-deploy uninstall  # remove the ceph packages from a remote host, keeping the data
ceph-deploy purge      # remove the packages and delete all data
ceph-deploy purgedata  # delete all data under /var/lib/ceph
ceph-deploy forgetkeys # delete all local ceph authentication keys (client, admin, monitor, bootstrap, ...)
ceph-deploy pkg        # manage packages on remote hosts
ceph-deploy calamari   # install and configure a calamari web node (a web monitoring platform)



# 2. Deploy the ceph mon nodes:
mon node deployment [install as root]:
Per the plan, 192.168.3.101, 192.168.3.102 and 192.168.3.103 are the mon nodes

# Confirm the mon nodes against the hosts file:
[ceph@ceph-mon01 ceph-cluster]$ grep mon /etc/hosts
192.168.3.101 ceph-mon01.example ceph-mon01  ceph-osd01 ceph-deploy 
192.168.3.102 ceph-mon02.example ceph-mon02  ceph-osd02
192.168.3.103 ceph-mon03.example ceph-mon03  ceph-osd03




# Notes for mon initialization:
# The hosts file defines 3 mon nodes; list exactly those 3 hostnames here
# To rerun after a previous attempt, add --overwrite-conf to redistribute the config file
# --no-adjust-repos: without it, every run reinstalls the packages; with it, hosts that
# already have the packages are skipped

# On Ubuntu, additionally install python2.7 and create a symlink:
# apt install python2.7 -y
# ln -sv /usr/bin/python2.7 /usr/bin/python2



Initialize the mon server:
[ceph@ceph-mon01 ~]$ cd /home/ceph/ceph-cluster
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy mon create-initial

# Output:
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8ae9448b90>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f8ae9435398>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon01
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon01 ...
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01 
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.6.1810 Core
[ceph-mon01][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon01][DEBUG ] get remote short hostname
[ceph-mon01][DEBUG ] deploying mon to ceph-mon01
[ceph-mon01][DEBUG ] get remote short hostname
[ceph-mon01][DEBUG ] remote hostname: ceph-mon01
[ceph-mon01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon01][DEBUG ] create the mon path if it does not exist
[ceph-mon01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon01/done
[ceph-mon01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon01/done
[ceph-mon01][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon01.mon.keyring
[ceph-mon01][DEBUG ] create the monitor keyring file
[ceph-mon01][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-mon01 --keyring /var/lib/ceph/tmp/ceph-ceph-mon01.mon.keyring --setuser 2001 --setgroup 2001
[ceph-mon01][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon01.mon.keyring
[ceph-mon01][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon01][DEBUG ] create the init path if it does not exist
[ceph-mon01][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph-mon01][INFO  ] Running command: sudo systemctl enable ceph-mon@ceph-mon01
[ceph-mon01][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-mon01.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph-mon01][INFO  ] Running command: sudo systemctl start ceph-mon@ceph-mon01
[ceph-mon01][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon01.asok mon_status
[ceph-mon01][DEBUG ] ********************************************************************************
[ceph-mon01][DEBUG ] status for monitor: mon.ceph-mon01
[ceph-mon01][DEBUG ] {
[ceph-mon01][DEBUG ]   "election_epoch": 3, 
[ceph-mon01][DEBUG ]   "extra_probe_peers": [], 
[ceph-mon01][DEBUG ]   "feature_map": {
[ceph-mon01][DEBUG ]     "mon": [
[ceph-mon01][DEBUG ]       {
[ceph-mon01][DEBUG ]         "features": "0x3f01cfb8ffedffff", 
[ceph-mon01][DEBUG ]         "num": 1, 
[ceph-mon01][DEBUG ]         "release": "luminous"
[ceph-mon01][DEBUG ]       }
[ceph-mon01][DEBUG ]     ]
[ceph-mon01][DEBUG ]   }, 
[ceph-mon01][DEBUG ]   "features": {
[ceph-mon01][DEBUG ]     "quorum_con": "4540138292840890367", 
[ceph-mon01][DEBUG ]     "quorum_mon": [
[ceph-mon01][DEBUG ]       "kraken", 
[ceph-mon01][DEBUG ]       "luminous", 
[ceph-mon01][DEBUG ]       "mimic", 
[ceph-mon01][DEBUG ]       "osdmap-prune", 
[ceph-mon01][DEBUG ]       "nautilus", 
[ceph-mon01][DEBUG ]       "octopus"
[ceph-mon01][DEBUG ]     ], 
[ceph-mon01][DEBUG ]     "required_con": "2449958747315978244", 
[ceph-mon01][DEBUG ]     "required_mon": [
[ceph-mon01][DEBUG ]       "kraken", 
[ceph-mon01][DEBUG ]       "luminous", 
[ceph-mon01][DEBUG ]       "mimic", 
[ceph-mon01][DEBUG ]       "osdmap-prune", 
[ceph-mon01][DEBUG ]       "nautilus", 
[ceph-mon01][DEBUG ]       "octopus"
[ceph-mon01][DEBUG ]     ]
[ceph-mon01][DEBUG ]   }, 
[ceph-mon01][DEBUG ]   "monmap": {
[ceph-mon01][DEBUG ]     "created": "2021-09-01T15:17:28.323291Z", 
[ceph-mon01][DEBUG ]     "epoch": 1, 
[ceph-mon01][DEBUG ]     "features": {
[ceph-mon01][DEBUG ]       "optional": [], 
[ceph-mon01][DEBUG ]       "persistent": [
[ceph-mon01][DEBUG ]         "kraken", 
[ceph-mon01][DEBUG ]         "luminous", 
[ceph-mon01][DEBUG ]         "mimic", 
[ceph-mon01][DEBUG ]         "osdmap-prune", 
[ceph-mon01][DEBUG ]         "nautilus", 
[ceph-mon01][DEBUG ]         "octopus"
[ceph-mon01][DEBUG ]       ]
[ceph-mon01][DEBUG ]     }, 
[ceph-mon01][DEBUG ]     "fsid": "ed9c5f24-611a-44ca-81e8-f3d035e494e8", 
[ceph-mon01][DEBUG ]     "min_mon_release": 15, 
[ceph-mon01][DEBUG ]     "min_mon_release_name": "octopus", 
[ceph-mon01][DEBUG ]     "modified": "2021-09-01T15:17:28.323291Z", 
[ceph-mon01][DEBUG ]     "mons": [
[ceph-mon01][DEBUG ]       {
[ceph-mon01][DEBUG ]         "addr": "192.168.3.101:6789/0", 
[ceph-mon01][DEBUG ]         "name": "ceph-mon01", 
[ceph-mon01][DEBUG ]         "priority": 0, 
[ceph-mon01][DEBUG ]         "public_addr": "192.168.3.101:6789/0", 
[ceph-mon01][DEBUG ]         "public_addrs": {
[ceph-mon01][DEBUG ]           "addrvec": [
[ceph-mon01][DEBUG ]             {
[ceph-mon01][DEBUG ]               "addr": "192.168.3.101:3300", 
[ceph-mon01][DEBUG ]               "nonce": 0, 
[ceph-mon01][DEBUG ]               "type": "v2"
[ceph-mon01][DEBUG ]             }, 
[ceph-mon01][DEBUG ]             {
[ceph-mon01][DEBUG ]               "addr": "192.168.3.101:6789", 
[ceph-mon01][DEBUG ]               "nonce": 0, 
[ceph-mon01][DEBUG ]               "type": "v1"
[ceph-mon01][DEBUG ]             }
[ceph-mon01][DEBUG ]           ]
[ceph-mon01][DEBUG ]         }, 
[ceph-mon01][DEBUG ]         "rank": 0, 
[ceph-mon01][DEBUG ]         "weight": 0
[ceph-mon01][DEBUG ]       }
[ceph-mon01][DEBUG ]     ]
[ceph-mon01][DEBUG ]   }, 
[ceph-mon01][DEBUG ]   "name": "ceph-mon01", 
[ceph-mon01][DEBUG ]   "outside_quorum": [], 
[ceph-mon01][DEBUG ]   "quorum": [
[ceph-mon01][DEBUG ]     0
[ceph-mon01][DEBUG ]   ], 
[ceph-mon01][DEBUG ]   "quorum_age": 2, 
[ceph-mon01][DEBUG ]   "rank": 0, 
[ceph-mon01][DEBUG ]   "state": "leader", 
[ceph-mon01][DEBUG ]   "sync_provider": []
[ceph-mon01][DEBUG ] }
[ceph-mon01][DEBUG ] ********************************************************************************
[ceph-mon01][INFO  ] monitor: mon.ceph-mon01 is running
[ceph-mon01][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon01.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-mon01
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01 
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] find the location of an executable
[ceph-mon01][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon01.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-mon01 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpB8oAJZ
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01 
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] get remote short hostname
[ceph-mon01][DEBUG ] fetch remote file
[ceph-mon01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-mon01.asok mon_status
[ceph-mon01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.admin
[ceph-mon01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.bootstrap-mds
[ceph-mon01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.bootstrap-mgr
[ceph-mon01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.bootstrap-osd
[ceph-mon01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpB8oAJZ






Add the mon nodes:
# On Ubuntu, the required package can be preinstalled: apt install -y ceph-mon
# With many machines, if some installs fail, rerun this command until all succeed
# You may be prompted to type "yes" several times
# --no-adjust-repos : do not modify the repo files while pushing
# --nogpgcheck      : skip the gpg check when installing packages
[ceph@ceph-node01 ceph-cluster]$ ceph-deploy install --no-adjust-repos ceph-mon01 ceph-mon02 ceph-mon03



# Output:
[ceph@ceph-node01 ~]$ cd ceph-cluster/
[ceph@ceph-node01 ceph-cluster]$ ceph-deploy install --no-adjust-repos ceph-mon01 ceph-mon02 ceph-mon03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy install --no-adjust-repos ceph-mon01 ceph-mon02 ceph-mon03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2ef906afc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : False
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7f2ef9b44578>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-mon01', 'ceph-mon02', 'ceph-mon03']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : False
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts ceph-mon01 ceph-mon02 ceph-mon03
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-mon01 ...
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01 
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph-mon01][INFO  ] installing Ceph on ceph-mon01
[ceph-mon01][INFO  ] Running command: sudo yum clean all
[ceph-mon01][DEBUG ] Loaded plugins: fastestmirror
[ceph-mon01][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[ceph-mon01][DEBUG ] Cleaning up list of fastest mirrors
[ceph-mon01][INFO  ] Running command: sudo yum -y install ceph ceph-radosgw
[ceph-mon01][DEBUG ] Loaded plugins: fastestmirror
[ceph-mon01][DEBUG ] Determining fastest mirrors
[ceph-mon01][DEBUG ] Package 2:ceph-15.2.14-0.el7.x86_64 already installed and latest version
[ceph-mon01][DEBUG ] Package 2:ceph-radosgw-15.2.14-0.el7.x86_64 already installed and latest version
[ceph-mon01][DEBUG ] Nothing to do
[ceph-mon01][INFO  ] Running command: sudo ceph --version
[ceph-mon01][DEBUG ] ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-mon02 ...
The authenticity of host 'ceph-mon02 (192.168.3.103)' can't be established.
ECDSA key fingerprint is SHA256:/gjYMUieZ9T64qVAezFmqdvUlU+zWPGjJeGKlC6251Y.
ECDSA key fingerprint is MD5:1c:d0:a0:80:8e:4b:4a:32:74:96:b6:f7:27:90:21:7f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-mon02' (ECDSA) to the list of known hosts.
[ceph-mon02][DEBUG ] connection detected need for sudo
[ceph-mon02][DEBUG ] connected to host: ceph-mon02 
[ceph-mon02][DEBUG ] detect platform information from remote host
[ceph-mon02][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph-mon02][INFO  ] installing Ceph on ceph-mon02
[ceph-mon02][INFO  ] Running command: sudo yum clean all
[ceph-mon02][DEBUG ] Loaded plugins: fastestmirror
[ceph-mon02][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[ceph-mon02][DEBUG ] Cleaning up list of fastest mirrors
[ceph-mon02][INFO  ] Running command: sudo yum -y install ceph ceph-radosgw
[ceph-mon02][DEBUG ] Loaded plugins: fastestmirror
[ceph-mon02][DEBUG ] Determining fastest mirrors
[ceph-mon02][DEBUG ] Package 2:ceph-15.2.14-0.el7.x86_64 already installed and latest version
[ceph-mon02][DEBUG ] Package 2:ceph-radosgw-15.2.14-0.el7.x86_64 already installed and latest version
[ceph-mon02][DEBUG ] Nothing to do
[ceph-mon02][INFO  ] Running command: sudo ceph --version
[ceph-mon02][DEBUG ] ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-mon03 ...
The authenticity of host 'ceph-mon03 (192.168.3.104)' can't be established.
ECDSA key fingerprint is SHA256:Scs2Z4hJZ7wOVQXKjNoD0iIVRYLx2horw9GI54d97Vw.
ECDSA key fingerprint is MD5:6e:c0:7b:61:22:4e:d3:fc:4b:01:a0:ee:ff:8f:20:27.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-mon03' (ECDSA) to the list of known hosts.
[ceph-mon03][DEBUG ] connection detected need for sudo
[ceph-mon03][DEBUG ] connected to host: ceph-mon03 
[ceph-mon03][DEBUG ] detect platform information from remote host
[ceph-mon03][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph-mon03][INFO  ] installing Ceph on ceph-mon03
[ceph-mon03][INFO  ] Running command: sudo yum clean all
[ceph-mon03][DEBUG ] Loaded plugins: fastestmirror
[ceph-mon03][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[ceph-mon03][DEBUG ] Cleaning up list of fastest mirrors
[ceph-mon03][INFO  ] Running command: sudo yum -y install ceph ceph-radosgw
[ceph-mon03][DEBUG ] Loaded plugins: fastestmirror
[ceph-mon03][DEBUG ] Determining fastest mirrors
[ceph-mon03][DEBUG ] Package 2:ceph-15.2.14-0.el7.x86_64 already installed and latest version
[ceph-mon03][DEBUG ] Package 2:ceph-radosgw-15.2.14-0.el7.x86_64 already installed and latest version
[ceph-mon03][DEBUG ] Nothing to do
[ceph-mon03][INFO  ] Running command: sudo ceph --version
[ceph-mon03][DEBUG ] ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)



# Check the mon deployment on each node:

 ps -ef|grep mon
 
[ceph@ceph-mon01 ~]$ ps -ef|grep mon
dbus        8911       1  0 21:42 ?        00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
ceph       21548       1  0 23:17 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon01 --setuser ceph --setgroup ceph
ceph       21838   21786  0 23:35 pts/0    00:00:00 grep --color=auto mon


[root@ceph-mon02 ~]#  ps -ef|grep mon
dbus        9019       1  0 21:42 ?        00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root       10426   10009  0 23:35 pts/0    00:00:00 grep --color=auto mon


[root@ceph-mon03 ~]#  ps -ef|grep mon
dbus        8973       1  0 21:42 ?        00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root       20092   19686  0 23:35 pts/0    00:00:00 grep --color=auto mon
# The packages are now on all 3 mon nodes; only ceph-mon01 runs a monitor daemon,
# since it is the only initial member configured so far
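The per-node check can be scripted without the trailing `grep ... mon` noise; a sketch run against an inlined copy of the ps output above (the `[c]eph-mon` pattern keeps grep from matching its own command line on a live system):

```shell
#!/bin/bash
# Inlined sample of `ps -ef` output from ceph-mon01 (abridged).
ps_out='dbus  8911      1  0 21:42 ?      00:00:00 /usr/bin/dbus-daemon --system
ceph  21548     1  0 23:17 ?      00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon01
ceph  21838 21786  0 23:35 pts/0  00:00:00 grep --color=auto mon'

# Count ceph-mon daemons; "[c]eph-mon" never matches the grep line itself,
# because that line contains the literal brackets, not the word "ceph-mon".
count=$(printf '%s\n' "$ps_out" | grep -c '[c]eph-mon')
echo "running monitors: $count"   # running monitors: 1
```

On a live node the same idea is simply: `ps -ef | grep -c '[c]eph-mon'`.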

Add nodes to the ceph cluster

# Before adding a node, make sure the ceph-common package is installed on it
# Install on the nodes [recommended on all nodes]:
yum install -y ceph-common


[root@ceph-mon01 ~]# yum install -y ceph-common
[root@ceph-mon01 ~]# su - ceph
[ceph@ceph-mon01 ~]$ cd ceph-cluster/
[ceph@ceph-mon01 ceph-cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  
ceph.client.admin.keyring   ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  
ceph.conf                   ceph.mon.keyring

# Copy the auth files to /var/lib/ceph/ on the mon node; if you skip this, the
# install may fail because the files cannot be found
# If the mon node and the deploy node are different hosts, copy them from the
# deploy node to /var/lib/ceph/ on the mon node
[ceph@ceph-mon01 ceph-cluster]$ cp * /var/lib/ceph/


# Push the files used to administer the ceph cluster. Management happens from the
# ceph-deploy server here, so push to the ceph-deploy server:
# Other servers can be targeted with the same command, e.g.: ceph-deploy admin ceph-mon02

# Push result:
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy admin  ceph-deploy
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy admin ceph-deploy
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f86a09a4290>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-deploy']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f86a14cd1b8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-deploy
The authenticity of host 'ceph-deploy (192.168.3.101)' can't be established.
ECDSA key fingerprint is SHA256:t7QsxmVjOU4ekBPH+WbFJuNTVl90moPspPvfWogPBlI.
ECDSA key fingerprint is MD5:04:f6:f3:d3:69:ce:21:3c:8c:26:eb:59:f2:52:72:55.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-deploy' (ECDSA) to the list of known hosts.
[ceph-deploy][DEBUG ] connection detected need for sudo
[ceph-deploy][DEBUG ] connected to host: ceph-deploy 
[ceph-deploy][DEBUG ] detect platform information from remote host
[ceph-deploy][DEBUG ] detect machine type
[ceph-deploy][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf




# Recommended: push to all hosts, so any of them can be used for management:
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy admin ceph-deploy ceph-mon01 ceph-mon02 ceph-mon03 ceph-mgr01 ceph-mgr02 ceph-node01 ceph-node02


After the push, /etc/ceph contains the new configuration files:
[ceph@ceph-mon01 ceph-cluster]$ ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  rbdmap  tmpgxwgoH


# Check the permissions:
[ceph@ceph-mon01 ceph-cluster]$ ll /etc/ceph/
total 12
-rw------- 1 root root 151 Sep  1 02:32 ceph.client.admin.keyring
-rw-r--r-- 1 root root 265 Sep  1 02:32 ceph.conf
-rw-r--r-- 1 root root  92 Aug  6 01:41 rbdmap
-rw------- 1 root root   0 Sep  1 02:20 tmpgxwgoH
#The ceph user cannot read the keyring yet; grant it access

#Grant access to the admin keyring [required on every host the config was pushed to]:
[ceph@ceph-mon01 ceph-cluster]$ sudo setfacl -m u:ceph:rw /etc/ceph/ceph.client.admin.keyring
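The push-then-setfacl pair has to be repeated on every host, so a small loop keeps it consistent. A sketch only — the host list is an assumption taken from the planning table, and `DRY_RUN` defaults to preview mode (set `DRY_RUN=0` to actually execute):

```shell
#!/bin/sh
# Sketch: push the admin config and grant the ceph user keyring access on
# each host. HOSTS is an assumption based on this article's cluster plan.
HOSTS="ceph-mon01 ceph-mon02 ceph-mon03 ceph-mgr01 ceph-mgr02 ceph-node01 ceph-node02"

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$*"     # preview mode (default): print the command instead of running it
    else
        "$@"
    fi
}

run ceph-deploy admin $HOSTS
for host in $HOSTS; do
    # grant the ceph user read/write on the pushed keyring
    run ssh "$host" sudo setfacl -m u:ceph:rw /etc/ceph/ceph.client.admin.keyring
done
```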



#Verify that ceph commands now work:
[ceph@ceph-mon01 ceph-cluster]$ ceph -s
  cluster:
    id:     aa1d8a39-b832-46fb-91f4-f0a200cc7d85
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 16m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     


#Cluster info is now visible. Since the config was pushed to every host, any host in the cluster can run ceph commands to manage the whole cluster

#If a newly added server also needs to run ceph commands, push the admin config to it as well:
ceph-deploy admin [hostname]
#Then fix the keyring permissions on that server:
sudo setfacl -m u:ceph:rw /etc/ceph/ceph.client.admin.keyring

#Example: to let node01 and node02 use ceph commands:
1. On ceph-deploy:
[root@ceph-deploy ~]# ceph-deploy admin ceph-node01 ceph-node02
2. On ceph-node01:
[root@ceph-node01 ~]# sudo setfacl -m u:ceph:rw /etc/ceph/ceph.client.admin.keyring
3. Access granted:
[root@ceph-node01 ~]# ceph -s
  cluster:
    id:     aa1d8a39-b832-46fb-91f4-f0a200cc7d85
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 18m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     




# Warning seen after joining the cluster:
[root@ceph-node02 ~]# ceph -s
  cluster:
    id:     aa1d8a39-b832-46fb-91f4-f0a200cc7d85
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim     # flags that insecure global_id reclaim is allowed
 
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 18m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

#The fix for the "mon is allowing insecure global_id reclaim" warning is covered later in this article

Deploying mgr

#Look up the mgr hosts:
[ceph@ceph-mon01 ceph-cluster]$ grep mgr /etc/hosts
192.168.3.104 ceph-mgr01.example ceph-mgr01
192.168.3.105 ceph-mgr02.example ceph-mgr02


#Install the ceph-mgr package on the mgr hosts:
ssh ceph-mgr01 "yum install -y ceph-mgr"
ssh ceph-mgr02 "yum install -y ceph-mgr"

#Verify:
ssh ceph-mgr01 "rpm -qa ceph-mgr"
ssh ceph-mgr02 "rpm -qa ceph-mgr"


#Deploy the mgr role from the deploy node:
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy mgr create ceph-mgr01 ceph-mgr02

#Command output:
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mgr create ceph-mgr01 ceph-mgr02
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-mgr01', 'ceph-mgr01'), ('ceph-mgr02', 'ceph-mgr02')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fcd4cb26ab8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7fcd4d3a10c8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-mgr01:ceph-mgr01 ceph-mgr02:ceph-mgr02
[ceph-mgr01][DEBUG ] connection detected need for sudo
[ceph-mgr01][DEBUG ] connected to host: ceph-mgr01 
[ceph-mgr01][DEBUG ] detect platform information from remote host
[ceph-mgr01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-mgr01
[ceph-mgr01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mgr01][WARNIN] mgr keyring does not exist yet, creating one
[ceph-mgr01][DEBUG ] create a keyring file
[ceph-mgr01][DEBUG ] create path recursively if it doesn't exist
[ceph-mgr01][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-mgr01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-mgr01/keyring
[ceph-mgr01][INFO  ] Running command: sudo systemctl enable ceph-mgr@ceph-mgr01
[ceph-mgr01][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-mgr01.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-mgr01][INFO  ] Running command: sudo systemctl start ceph-mgr@ceph-mgr01
[ceph-mgr01][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph-mgr02][DEBUG ] connection detected need for sudo
[ceph-mgr02][DEBUG ] connected to host: ceph-mgr02 
[ceph-mgr02][DEBUG ] detect platform information from remote host
[ceph-mgr02][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-mgr02
[ceph-mgr02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mgr02][WARNIN] mgr keyring does not exist yet, creating one
[ceph-mgr02][DEBUG ] create a keyring file
[ceph-mgr02][DEBUG ] create path recursively if it doesn't exist
[ceph-mgr02][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-mgr02 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-mgr02/keyring
[ceph-mgr02][INFO  ] Running command: sudo systemctl enable ceph-mgr@ceph-mgr02
[ceph-mgr02][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-mgr02.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-mgr02][INFO  ] Running command: sudo systemctl start ceph-mgr@ceph-mgr02
[ceph-mgr02][INFO  ] Running command: sudo systemctl enable ceph.target




#Post-deployment checks for mgr:
#1. Check the processes:
[ceph@ceph-mon01 ceph-cluster]$ ssh ceph-mgr01
[ceph@ceph-mgr01 ~]$ ps -fe|grep mgr
ceph      20151      1  0 00:03 ?        00:00:04 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr01 --setuser ceph --setgroup ceph
ceph      20317  20295  0 00:21 pts/1    00:00:00 grep --color=auto mgr

[ceph@ceph-mon01 ceph-cluster]$ ssh ceph-mgr02
[ceph@ceph-mgr02 ~]$ ps -fe|grep mgr
ceph      10393      1  0 00:03 ?        00:00:02 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr02 --setuser ceph --setgroup ceph
ceph      10518  10496  0 00:21 pts/1    00:00:00 grep --color=auto mgr



#2. Check the cluster status:
[ceph@ceph-mon01 ceph-cluster]$ ceph -s
  cluster:
    id:     ed9c5f24-611a-44ca-81e8-f3d035e494e8
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            Module 'restful' has failed dependency: No module named 'pecan'
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 46m)
    mgr: ceph-mgr01(active, since 50s), standbys: ceph-mgr02  #<------ the mgr role was added successfully
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
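If you want to check the active mgr from a script rather than eyeballing `ceph -s`, the `mgr:` line can be parsed with awk. A sketch — the here-doc below is a stand-in copied from the sample output above; on a live cluster pipe `ceph -s` in instead:

```shell
# Sketch: extract the active mgr name from `ceph -s` output.
# ceph_status is a stub reproducing the sample above; replace with `ceph -s`.
ceph_status() {
cat <<'EOF'
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 46m)
    mgr: ceph-mgr01(active, since 50s), standbys: ceph-mgr02
    osd: 0 osds: 0 up, 0 in
EOF
}

# Split on runs of spaces or '(' so the name is the 3rd field of the mgr: line
active_mgr=$(ceph_status | awk -F'[ (]+' '/mgr:/ {print $3; exit}')
echo "active mgr: $active_mgr"
```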

Adding OSDs

#Install the packages OSDs need on every node [mandatory]:
ceph-deploy install --no-adjust-repos ceph-mon01 ceph-mon02 ceph-mon03
ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node01 ceph-node02

#Adjust the security setting:

#true allows insecure global_id reclaim [while this is set, the HEALTH_WARN remains in ceph -s]
[ceph@ceph-mon01 ceph-cluster]$ ceph config set mon auth_allow_insecure_global_id_reclaim true
[ceph@ceph-mon01 ceph-cluster]$ ceph -s
  cluster:
    id:     ed9c5f24-611a-44ca-81e8-f3d035e494e8
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim    # warning still present
 
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 4h)
    mgr: ceph-mgr02(active, since 10h), standbys: ceph-mgr01, ceph-node01
    osd: 4 osds: 4 up (since 106m), 4 in (since 8h); 1 remapped pgs
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   4.0 GiB used, 396 GiB / 400 GiB avail
    pgs:     1 active+clean+remapped

#false disallows insecure global_id reclaim [recommended setting; it clears the warning]:
[ceph@ceph-mon01 ceph-cluster]$ ceph config set mon auth_allow_insecure_global_id_reclaim false
[ceph@ceph-mon01 ceph-cluster]$ ceph -s
  cluster:
    id:     ed9c5f24-611a-44ca-81e8-f3d035e494e8
    health: HEALTH_OK             # warning cleared
 
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 4h)
    mgr: ceph-mgr02(active, since 10h), standbys: ceph-mgr01, ceph-node01
    osd: 4 osds: 4 up (since 105m), 4 in (since 8h); 1 remapped pgs
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   4.0 GiB used, 396 GiB / 400 GiB avail
    pgs:     1 active+clean+remapped
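Whether the warning is present can also be detected from a script so the fix is only applied when needed. A sketch — `health_detail` here is a stub echoing the warning text from above; on a live cluster use `ceph health detail`, and the fix command is left as a comment rather than executed:

```shell
# Sketch: detect the insecure global_id reclaim warning.
# health_detail is a stub standing in for `ceph health detail`.
health_detail() { echo "HEALTH_WARN mon is allowing insecure global_id reclaim"; }

if health_detail | grep -q "insecure global_id reclaim"; then
    # On a live cluster you would now run:
    #   ceph config set mon auth_allow_insecure_global_id_reclaim false
    echo "warning present: disable insecure global_id reclaim"
fi
```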





#Initialize the disks:

#This setup has two OSD nodes: ceph-node01 and ceph-node02
[root@ceph-mon01 ~]# su - ceph
[ceph@ceph-mon01 ~]$ cd ceph-cluster
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy disk list ceph-node01 ceph-node02
#Output:
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy disk list ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk list ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0594b97cb0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node01']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f0594b6a938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph-node01][DEBUG ] connection detected need for sudo
[ceph-node01][DEBUG ] connected to host: ceph-node01 
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: sudo fdisk -l
[ceph-node01][INFO  ] Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node01][INFO  ] Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node01][INFO  ] Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node01][INFO  ] Disk /dev/mapper/centos-root: 107.2 GB, 107160272896 bytes, 209297408 sectors
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy disk list ceph-node01 ceph-node02
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk list ceph-node01 ceph-node02
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe988749cb0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node01', 'ceph-node02']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7fe98871c938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph-node01][DEBUG ] connection detected need for sudo
[ceph-node01][DEBUG ] connected to host: ceph-node01 
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: sudo fdisk -l
[ceph-node01][INFO  ] Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node01][INFO  ] Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node01][INFO  ] Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node01][INFO  ] Disk /dev/mapper/centos-root: 107.2 GB, 107160272896 bytes, 209297408 sectors
[ceph-node02][DEBUG ] connection detected need for sudo
[ceph-node02][DEBUG ] connected to host: ceph-node02 
[ceph-node02][DEBUG ] detect platform information from remote host
[ceph-node02][DEBUG ] detect machine type
[ceph-node02][DEBUG ] find the location of an executable
[ceph-node02][INFO  ] Running command: sudo fdisk -l
[ceph-node02][INFO  ] Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node02][INFO  ] Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node02][INFO  ] Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node02][INFO  ] Disk /dev/mapper/centos-root: 107.2 GB, 107160272896 bytes, 209297408 sectors



The listed disks:
#ceph-node01 disks:
[ceph-node01][INFO  ] Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node01][INFO  ] Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node01][INFO  ] Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors

#ceph-node02 disks:
[ceph-node02][INFO  ] Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node02][INFO  ] Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node02][INFO  ] Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors




#Before adding OSDs, install the base environment [run against the node hosts]:
ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node01 ceph-node02

#If this fails:
# On the failing machine run the following, then return to the deploy node
yum remove ceph-release
yum install https://mirrors.163.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum clean all
yum makecache
yum install ceph-release
yum install ceph ceph-radosgw



#Note: if installing ceph-radosgw fails, it may be because this Aliyun release package was installed previously:
  https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
#Resulting error:
            ceph-osd = 2:13.2.8-0.el7
           Available: 2:ceph-osd-13.2.9-0.el7.x86_64 (Ceph)
               ceph-osd = 2:13.2.9-0.el7
           Available: 2:ceph-osd-13.2.10-0.el7.x86_64 (Ceph)
               ceph-osd = 2:13.2.10-0.el7
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
#This error occurs because the repo should point at the nautilus release but was configured for a different one
#To avoid it, install the 163 mirror's release package directly:
yum install https://mirrors.163.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm

#On the failed node01/node02, switch ceph.repo to the 163 source [if the 163 source had been installed from the start, the problem above would not occur]:
Change rpm-mimic to rpm-nautilus:
vim ceph.repo
%s#rpm-mimic#rpm-nautilus#g

Or with sed:
sed -i 's#rpm-mimic#rpm-nautilus#g' /etc/yum.repos.d/ceph.repo

#The corrected ceph.repo:
cat /etc/yum.repos.d/ceph.repo
#------------------------------------------------------------------------#
[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
#------------------------------------------------------------------------#
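The sed switch above can be wrapped with a verification step so a half-edited repo file is caught immediately. A sketch — it operates on a temporary copy so it is safe to run anywhere; on a real node point `repo` at /etc/yum.repos.d/ceph.repo:

```shell
# Sketch: switch a ceph yum repo from rpm-mimic to rpm-nautilus and verify.
# Uses a throwaway copy of one repo stanza for illustration.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/noarch
EOF

sed -i 's#rpm-mimic#rpm-nautilus#g' "$repo"

# Verify: nautilus present, no mimic reference left
if grep -q 'rpm-nautilus' "$repo" && ! grep -q 'rpm-mimic' "$repo"; then
    switched=yes
    echo "repo switched to nautilus"
fi
rm -f "$repo"
```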

#On the ceph-deploy node, export the mirror URLs:
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-nautilus/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc

# To pin a version at install time, pass one of:
 --release=14.2.22    --release=nautilus


# On CentOS the default version pulled in can be unstable and block later installs, so pin the stable 14.2.22 release
# Then re-run the install and it completes normally:
ceph-deploy install ceph-node01 ceph-node02

#With a pinned version:
ceph-deploy install --release=14.2.22 ceph-node01 ceph-node02
#Again: installing the 163 source from the start avoids this problem entirely


# Zap the disks
# On node01, list the disks:
[root@ceph-node01 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0  200M  0 part /boot
└─sda2            8:2    0 99.8G  0 part 
  └─centos-root 253:0    0 99.8G  0 lvm  /
sdb               8:16   0  100G  0 disk 
sdc               8:32   0  100G  0 disk 

#Zap one disk:
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy disk zap  ceph-node01  /dev/sdb


# On node02, list the disks:
[root@ceph-node02 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0  200M  0 part /boot
└─sda2            8:2    0 99.8G  0 part 
  └─centos-root 253:0    0 99.8G  0 lvm  /
sdb               8:16   0  100G  0 disk 
sdc               8:32   0  100G  0 disk 
sr0              11:0    1  918M  0 rom  

#Zap one disk:
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy disk zap  ceph-node02  /dev/sdb


#If there are more disks, zap them the same way
#If a disk was zapped once already and the command errors here, wipe the disk on that node first, then retry
Example:
Disks that ceph has already claimed:
[root@ceph-node01 ~]# lsblk 
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0  100G  0 disk 
├─sda1                                                                                                  8:1    0  200M  0 part /boot
└─sda2                                                                                                  8:2    0 99.8G  0 part 
  └─centos-root                                                                                       253:0    0 99.8G  0 lvm  /
sdb                                                                                                     8:16   0  100G  0 disk 
└─ceph--6b77d12c--xx  253:1    0  100G  0 lvm    #LV created by a previous ceph run
sdc                                                                                                     8:32   0  100G  0 disk 
└─ceph--e68f958f--xx  253:2    0  100G  0 lvm   #LV created by a previous ceph run
sr0                                                                                                    11:0    1  918M  0 rom  


#Wipe them:
ceph-volume lvm zap --destroy /dev/sdb
ceph-volume lvm zap --destroy /dev/sdc

For example:
#use only when re-wiping or removing a disk
[root@ceph-node01 ~]# ceph-volume lvm zap --destroy /dev/sdc
--> Zapping: /dev/sdc
--> Zapping lvm member /dev/sdc. lv_path is /dev/ceph-e68f958f-c0a9-435e-9978-1b297f85d608/osd-block-14b2a2b4-690f-4a41-936c-ad63be1031ec
Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-e68f958f-c0a9-435e-9978-1b297f85d608/osd-block-14b2a2b4-690f-4a41-936c-ad63be1031ec bs=1M count=10 conv=fsync
 stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB) copied
 stderr: , 0.0133396 s, 786 MB/s
--> Only 1 LV left in VG, will proceed to destroy volume group ceph-e68f958f-c0a9-435e-9978-1b297f85d608
Running command: /usr/sbin/vgremove -v -f ceph-e68f958f-c0a9-435e-9978-1b297f85d608
 stderr: Removing ceph--e68f958f--c0a9--435e--9978--1b297f85d608-osd--block--14b2a2b4--690f--4a41--936c--ad63be1031ec (253:2)
 stderr: Archiving volume group "ceph-e68f958f-c0a9-435e-9978-1b297f85d608" metadata (seqno 5).
 stderr: Releasing logical volume "osd-block-14b2a2b4-690f-4a41-936c-ad63be1031ec"
 stderr: Creating volume group backup "/etc/lvm/backup/ceph-e68f958f-c0a9-435e-9978-1b297f85d608" (seqno 6).
 stdout: Logical volume "osd-block-14b2a2b4-690f-4a41-936c-ad63be1031ec" successfully removed
 stderr: Removing physical volume "/dev/sdc" from volume group "ceph-e68f958f-c0a9-435e-9978-1b297f85d608"
 stdout: Volume group "ceph-e68f958f-c0a9-435e-9978-1b297f85d608" successfully removed
Running command: /usr/bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
 stderr: 10+0 records in
10+0 records out
 stderr: 10485760 bytes (10 MB) copied, 0.0148186 s, 708 MB/s
--> Zapping successful for: <Raw Device: /dev/sdc>



#Removing a disk:
#Run on the OSD node:
umount <mountpoint>
ceph-volume lvm zap --destroy /dev/sdb

#The disk is now released from ceph and can be re-added later
 



# Add the OSDs:
Check the existing OSDs:
[ceph@ceph-mon01 ceph-cluster]$ ceph -s
  cluster:
    id:     ed9c5f24-611a-44ca-81e8-f3d035e494e8
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 112m)
    mgr: ceph-mgr02(active, since 112m), standbys: ceph-mgr01
    osd: 0 osds: 0 up, 0 in     #<---------# no OSDs yet
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown




#Add the first OSD:
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy osd create ceph-node01 --data /dev/sdb

#Log:
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd create ceph-node01 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7effe4586e18>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-node01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7effe45518c0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph-node01][DEBUG ] connection detected need for sudo
[ceph-node01][DEBUG ] connected to host: ceph-node01 
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][WARNIN] osd keyring does not exist yet, creating one
[ceph-node01][DEBUG ] create a keyring file
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph-node01][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-node01][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f7fa47f9-1d84-4a0f-9386-c355c7710b42
[ceph-node01][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-6b77d12c-de27-4894-aef2-dc649e6d1448 /dev/sdb
[ceph-node01][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[ceph-node01][WARNIN]  stdout: Volume group "ceph-6b77d12c-de27-4894-aef2-dc649e6d1448" successfully created
[ceph-node01][WARNIN] Running command: /sbin/lvcreate --yes -l 25599 -n osd-block-f7fa47f9-1d84-4a0f-9386-c355c7710b42 ceph-6b77d12c-de27-4894-aef2-dc649e6d1448
[ceph-node01][WARNIN]  stdout: Logical volume "osd-block-f7fa47f9-1d84-4a0f-9386-c355c7710b42" created.
[ceph-node01][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-node01][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-node01][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-6b77d12c-de27-4894-aef2-dc649e6d1448/osd-block-f7fa47f9-1d84-4a0f-9386-c355c7710b42
[ceph-node01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node01][WARNIN] Running command: /bin/ln -s /dev/ceph-6b77d12c-de27-4894-aef2-dc649e6d1448/osd-block-f7fa47f9-1d84-4a0f-9386-c355c7710b42 /var/lib/ceph/osd/ceph-0/block
[ceph-node01][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-node01][WARNIN]  stderr: 2021-09-02T02:34:13.471+0800 7f6b064d9700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node01][WARNIN] 2021-09-02T02:34:13.471+0800 7f6b064d9700 -1 AuthRegistry(0x7f6b00059250) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node01][WARNIN]  stderr: got monmap epoch 1
[ceph-node01][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCkxy9hma5WKhAAg8jByEqSqd9bzryK1zh77Q==
[ceph-node01][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-node01][WARNIN] added entity osd.0 auth(key=AQCkxy9hma5WKhAAg8jByEqSqd9bzryK1zh77Q==)
[ceph-node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-node01][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f7fa47f9-1d84-4a0f-9386-c355c7710b42 --setuser ceph --setgroup ceph
[ceph-node01][WARNIN]  stderr: 2021-09-02T02:34:13.710+0800 7fb40e3a8bc0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph-node01][WARNIN]  stderr: 2021-09-02T02:34:13.719+0800 7fb40e3a8bc0 -1 freelist read_size_meta_from_db missing size meta in DB
[ceph-node01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph-node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node01][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6b77d12c-de27-4894-aef2-dc649e6d1448/osd-block-f7fa47f9-1d84-4a0f-9386-c355c7710b42 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-node01][WARNIN] Running command: /bin/ln -snf /dev/ceph-6b77d12c-de27-4894-aef2-dc649e6d1448/osd-block-f7fa47f9-1d84-4a0f-9386-c355c7710b42 /var/lib/ceph/osd/ceph-0/block
[ceph-node01][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-node01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node01][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-f7fa47f9-1d84-4a0f-9386-c355c7710b42
[ceph-node01][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-f7fa47f9-1d84-4a0f-9386-c355c7710b42.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph-node01][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph-node01][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph-node01][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-node01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-node01][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph-node01][INFO  ] checking OSD status...
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node01 is now ready for osd use.


# First disk on ceph-node01 added; now the second:
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy osd create ceph-node01 --data /dev/sdc
#Output is the same as above, omitted


#Add ceph-node02's second disk as well
[ceph@ceph-mon01 ceph-cluster]$ ceph-deploy osd create ceph-node02 --data /dev/sdc


#Generic form: ceph-deploy osd create <node> --data <device>

#Note:
[ceph-node02][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[ceph-node02][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2

The ID here is the OSD ID assigned to this disk's OSD. It is very useful later: it is how individual OSDs (and therefore physical disks) are identified



#From the shell history: two disks on each of the two servers were prepared:
node01 :
ceph-deploy osd create ceph-node01 --data /dev/sdb
ceph-deploy osd create ceph-node01 --data /dev/sdc

node02 :
ceph-deploy osd create ceph-node02 --data /dev/sdb
ceph-deploy osd create ceph-node02 --data /dev/sdc
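The four create commands above follow one pattern, so they can be generated with a nested loop. A sketch — the node and disk lists are assumptions matching this setup, and `DRY_RUN` defaults to preview (set `DRY_RUN=0` to actually execute):

```shell
# Sketch: create one OSD per data disk on each node.
# NODES and DISKS match this article's layout; adjust for your own.
NODES="ceph-node01 ceph-node02"
DISKS="/dev/sdb /dev/sdc"

for node in $NODES; do
    for disk in $DISKS; do
        cmd="ceph-deploy osd create $node --data $disk"
        if [ "${DRY_RUN:-1}" = "1" ]; then
            echo "$cmd"    # preview mode (default)
        else
            $cmd
        fi
    done
done
```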


#Four OSDs added in total; they now show up:

[ceph@ceph-mon01 ceph-cluster]$ ceph -s
  cluster:
    id:     ed9c5f24-611a-44ca-81e8-f3d035e494e8
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 4h)
    mgr: ceph-mgr02(active, since 10h), standbys: ceph-mgr01, ceph-node01
    osd: 4 osds: 4 up (since 111m), 4 in (since 8h); 1 remapped pgs   #up = online; all 4 OSDs are up
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   4.0 GiB used, 396 GiB / 400 GiB avail         #<======= total capacity of the 4 OSDs: 400 GiB
    pgs:     1 active+clean+remapped





# On node01, check that the OSD processes are running:
[root@ceph-node01 ~]# ps -ef|grep osd
ceph       10639       1  0 08:20 ?        00:00:30 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
ceph       11092       1  0 08:22 ?        00:00:29 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
root       12947   12132  0 11:03 pts/0    00:00:00 grep --color=auto osd


# On node02, check that the OSD processes are running:
[root@ceph-node02 ~]# ps -ef|grep osd
ceph       10814       1  0 02:37 ?        00:00:01 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
ceph       11269       1  0 02:40 ?        00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
root       11405    9559  0 02:44 pts/0    00:00:00 grep --color=auto osd


#Each OSD runs as its own systemd unit and can be managed individually:
[root@ceph-node02 ~]# systemctl status ceph-osd@2
● ceph-osd@2.service - Ceph object storage daemon osd.2
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
   Active: active (running) since Thu 2021-09-02 02:37:30 CST; 7min ago
 Main PID: 10814 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@2.service
           └─10814 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph



# Enable the units at boot:
[root@ceph-node02 ~]# systemctl enable ceph-osd@3
Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@3.service to /usr/lib/systemd/system/ceph-osd@.service.
[root@ceph-node02 ~]# systemctl enable ceph-osd@2
Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@2.service to /usr/lib/systemd/system/ceph-osd@.service.

#Enable them on node01 as well:
[root@ceph-node01 ~]# systemctl enable ceph-osd@0 ceph-osd@1
Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service.
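Rather than typing each `systemctl enable ceph-osd@<id>` by hand, the OSD ids on a node can be derived from the data directories under `/var/lib/ceph/osd/`. This is a sketch; the enable loop is commented out because it must run as root on an actual OSD node.

```shell
# Derive the OSD id from its data directory name, e.g. /var/lib/ceph/osd/ceph-2 -> 2
osd_id_from_dir() {
  echo "${1##*-}"   # strip everything up to the last '-'
}

# On an OSD node, enable every local OSD at boot (run as root):
# for d in /var/lib/ceph/osd/ceph-*; do
#   systemctl enable "ceph-osd@$(osd_id_from_dir "$d")"
# done

osd_id_from_dir /var/lib/ceph/osd/ceph-2   # prints 2
```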



# Now ceph-node01 and ceph-node02 both have OSDs added.
# List the OSD disks on each node:
ceph-deploy disk list ceph-node01 ceph-node02

# You can also get an overview with ceph -s:

[ceph@ceph-mon01 ceph-cluster]$ ceph -s
  cluster:
    id:     ed9c5f24-611a-44ca-81e8-f3d035e494e8
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-mon01 (age 4h)
    mgr: ceph-mgr02(active, since 10h), standbys: ceph-mgr01, ceph-node01
    osd: 4 osds: 4 up (since 2h), 4 in (since 8h); 1 remapped pgs
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   4.0 GiB used, 396 GiB / 400 GiB avail
    pgs:     1 active+clean+remapped
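For routine monitoring, the one-line summary from `ceph health` can drive a small cron-style check. This is a sketch, not from the original article: the function takes the health string as an argument so it can be exercised without a live cluster; in production you would call it as `check_health "$(ceph health)"`.

```shell
# Tiny health check: takes the `ceph health` summary string as an argument.
check_health() {
  case "$1" in
    HEALTH_OK*) echo "cluster healthy" ;;
    *)          echo "cluster needs attention: $1" ;;
  esac
}

check_health "HEALTH_OK"                       # prints "cluster healthy"
# In production: check_health "$(ceph health)"
```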

Removing an OSD


1. View the OSD tree:
ceph osd tree

2. Check a single OSD service
# One ceph-osd@<id> systemd unit exists per disk that was added; the number after @ is the OSD id
systemctl status ceph-osd@1

3. Check OSD status:
[root@ceph-node01 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF 
-1       0.39075 root default                                 
-3       0.19537     host ceph-node01                         
 0   hdd 0.09769         osd.0            up  1.00000 1.00000 
 1   hdd 0.09769         osd.1            up  1.00000 1.00000 
-5       0.19537     host ceph-node02                         
 2   hdd 0.09769         osd.2            up  1.00000 1.00000 
 3   hdd 0.09769         osd.3            up  1.00000 1.00000 

4. Mark the OSD out and stop its service (removing osd.3 in this example):
ceph osd out osd.3       # recommended first, so data can migrate off the OSD
systemctl stop ceph-osd@3

5. Remove the OSD from the CRUSH map:
ceph osd crush rm osd.3

6. Delete the OSD's authentication key:
ceph auth del osd.3

7. Remove the OSD:
ceph osd rm osd.3

8. Remove the host's entry from the ceph.conf configuration file, if present.
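The removal steps above can be collected into a dry-run helper that prints the command sequence for a given OSD id instead of executing it, so you can review the plan against `ceph osd tree` before running anything. This is an illustrative sketch, not part of the original procedure.

```shell
# Print (do not execute) the removal command sequence for one OSD id.
osd_removal_plan() {
  id="$1"
  echo "ceph osd out osd.${id}"        # let data migrate off the OSD first
  echo "systemctl stop ceph-osd@${id}" # stop the daemon on its host
  echo "ceph osd crush rm osd.${id}"   # remove from the CRUSH map
  echo "ceph auth del osd.${id}"       # delete its auth key
  echo "ceph osd rm osd.${id}"         # remove the OSD entry
}

osd_removal_plan 3
```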

Installing the dashboard

# Install the dashboard module (on the mgr nodes):
yum install -y ceph-mgr-dashboard

# Enable the dashboard module:
ceph mgr module enable dashboard

# Create a dashboard login user, reading the password from a file:
echo 'admin' >password
ceph dashboard ac-user-create admin administrator -i password


# Check the address the dashboard is served on:
[root@ceph-mgr01 ~]# ceph mgr services 
{
    "dashboard": "http://ceph-mgr01.example:8443/"
}
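If the default bind address or port is not suitable, the dashboard can be pinned via mgr config options and restarted. This is a hedged sketch: the `mgr/dashboard/server_addr` and `mgr/dashboard/server_port` keys and the `ceph-mgr.target` unit are assumed to match your Ceph release; verify against your version's documentation.

```shell
# Bind the dashboard to all interfaces on port 8443 (assumed config keys):
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/server_port 8443

# Restart the mgr daemons so the dashboard picks up the new settings:
systemctl restart ceph-mgr.target
```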

This concludes the article on building an enterprise-grade Ceph cluster on CentOS 7. We hope it is helpful, and we hope you will continue to support 为之网!