Deploying a Ceph Cluster with ceph-deploy

Deploy a Ceph cluster with ceph-deploy and configure ceph-dashboard.

1. Preparation

1.1 Disable SELinux (all nodes)

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Alternatively, edit /etc/selinux/config directly

1.2 Configure NTP time synchronization (all nodes)

See the earlier article for the detailed steps.
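The earlier article is not reproduced here; as a minimal sketch, time synchronization on CentOS 7 usually means installing chrony and pointing every node at the same servers (the server names below are placeholders, not from the original article):

```
# /etc/chrony.conf (fragment) -- replace with your own NTP servers
server ntp1.example.com iburst
server ntp2.example.com iburst
```

Then enable the service with `systemctl enable --now chronyd` and check sync status with `chronyc sources`.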

1.3 Install the EPEL repository

sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

1.4 Add the Ceph repository

cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM

Note that {ceph-stable-release} here must be replaced with the corresponding release name; in my case:

cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-octopus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
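The placeholder can also be swapped in place with sed. The block below demonstrates the substitution on a temporary copy; the command to run on the real system is shown in the comment:

```shell
# On the real system:
#   sed -i 's/{ceph-stable-release}/octopus/' /etc/yum.repos.d/ceph.repo
# Demonstrated here on a temporary file:
tmp=$(mktemp)
printf 'baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch\n' > "$tmp"
sed -i 's/{ceph-stable-release}/octopus/' "$tmp"
cat "$tmp"   # baseurl=https://download.ceph.com/rpm-octopus/el7/noarch
rm -f "$tmp"
```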

Then refresh the yum metadata:

yum clean all
yum repolist

1.5 Install Python tooling

sudo yum install python-setuptools

1.6 Install ceph-deploy

sudo yum update
sudo yum install ceph-deploy

Note that this installs ceph-deploy 2.0.1, which is newer than the 1.5.x build in EPEL; the repo it comes from should be the Ceph repo we just added.

1.7 Create a Ceph deployment user (all nodes)

We create a dedicated user for ceph-deploy to use; when deploying, pass the --username option to select it. Note the following:

  • Do not use root.
  • Do not use "ceph" as the username: the deployment itself needs that name, and if the user already exists it will be deleted first, which makes the deployment fail.
  • The user needs superuser privileges (sudo) and must be able to run sudo without entering a password.
  • The user must be created on every node.
  • The user needs passwordless SSH between all machines in the cluster.
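If passwordless SSH is not yet set up, a minimal sketch (the usernames and hostnames are the ones used later in this post; adjust them to your environment):

```shell
# Create a key pair for the deploy user if one does not exist yet
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -q -f "$HOME/.ssh/id_rsa"
# Then push the public key to every node (the nodes must be reachable):
#   for node in ceph71 ceph72 ceph73; do ssh-copy-id "cephDeploy@$node"; done
```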

Create the new user:

sudo useradd -d /home/{username} -m {username}
sudo passwd {username}

Grant sudo privileges with no password required:

echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}

If passwordless SSH login is already configured between the nodes, you can simply copy that user's .ssh directory into the new user's home directory. The example below uses root; replace {username} with the name of the new user.

sudo cp -R /root/.ssh/ /home/{username}/
sudo chown {username}:{username} /home/{username}/.ssh/ -R

Edit the SSH config on the deploy node

We can specify which user to log in as on each of the other Ceph nodes by editing the SSH config file (~/.ssh/config) on the deploy node:

Host node1
Hostname node1
User {username}
Host node2
Hostname node2
User {username}
Host node3
Hostname node3
User {username}

In my case this becomes:

Host ceph71
Hostname ceph71
User cephDeploy
Host ceph72
Hostname ceph72
User cephDeploy
Host ceph73
Hostname ceph73
User cephDeploy
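To confirm the config is picked up, `ssh -G` (OpenSSH 6.8+) prints the effective options for a host without connecting:

```shell
# Prints the user ssh will log in as for ceph71;
# expect cephDeploy once the Host block above is in place
ssh -G ceph71 | grep '^user '
```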

1.8 Open the firewall ports

sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent
# on monitors

sudo firewall-cmd --zone=public --add-service=ceph --permanent
# on OSDs and MDSs

sudo firewall-cmd --reload

Alternatively, simply stop or disable the firewall.

1.9 Install the yum priorities plugin

sudo yum install yum-plugin-priorities

Ensure that your package manager has priority/preferences packages installed and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to enable optional repositories.

2. Deploying Ceph

2.1 Create a deployment directory

[cephDeploy@ceph71 ~]$ mkdir my-cluster
[cephDeploy@ceph71 ~]$ cd my-cluster
[cephDeploy@ceph71 my-cluster]$ pwd
/home/cephDeploy/my-cluster

The deployment generates quite a few files, so we create a dedicated directory to hold them.

2.2 Initialize the mon node

[cephDeploy@ceph71 my-cluster]$ ceph-deploy new ceph71
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephDeploy/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph71
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f7f15a43e60>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7f151ba950>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph71']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph71][DEBUG ] connection detected need for sudo
[ceph71][DEBUG ] connected to host: ceph71
[ceph71][DEBUG ] detect platform information from remote host
[ceph71][DEBUG ] detect machine type
[ceph71][DEBUG ] find the location of an executable
[ceph71][INFO ] Running command: sudo /usr/sbin/ip link show
[ceph71][INFO ] Running command: sudo /usr/sbin/ip addr show
[ceph71][DEBUG ] IP addresses found: [u'240e:f8:a903:2455:5054:ff:fe99:871', u'192.168.122.71', u'192.168.100.71']
[ceph_deploy.new][DEBUG ] Resolving host ceph71
[ceph_deploy.new][DEBUG ] Monitor ceph71 at 192.168.122.71
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph71']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.122.71']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[cephDeploy@ceph71 my-cluster]$

2.3 Specify the network

Each of the virtual machines here has two network cards, so we need to specify the subnet of the NIC that the Ceph cluster should use for communication.
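The original shows no command for this step; the usual approach is to add a public_network line to the ceph.conf generated in the deploy directory before installing. A sketch, assuming the 192.168.100.0/24 segment (one of the two subnets seen in the log above) is the one intended for Ceph traffic:

```
# ceph.conf fragment in the deploy directory -- the subnet is an assumption;
# use whichever segment your cluster should communicate on
[global]
public_network = 192.168.100.0/24
```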

2.4 Install Ceph

Install Ceph on all of the nodes:

ceph-deploy install ceph71 ceph72 ceph73

Initialize the monitor:

ceph-deploy mon create-initial

Once this completes successfully, a set of keyring files (the admin key and the bootstrap keyrings) is generated in the current directory.

Copy the configuration file and admin key with ceph-deploy

Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

ceph-deploy admin ceph71 ceph72 ceph73

2.5 Deploy the manager

ceph-deploy mgr create ceph71

2.6 Add OSDs

Here we add the six disks across the three nodes to the cluster as OSDs:

ceph-deploy osd create --data /dev/vdb ceph71
ceph-deploy osd create --data /dev/vdb ceph72
ceph-deploy osd create --data /dev/vdb ceph73
ceph-deploy osd create --data /dev/vdc ceph71
ceph-deploy osd create --data /dev/vdc ceph72
ceph-deploy osd create --data /dev/vdc ceph73

2.7 Check the result

# Check the Ceph cluster status
sudo ceph health
sudo ceph -s

3. Configuring the Dashboard

The detailed official deployment documentation: https://docs.ceph.com/docs/master/mgr/dashboard/

3.1 Enable the dashboard

ceph mgr module enable dashboard

3.2 Disable SSL encryption

ceph config set mgr mgr/dashboard/ssl false

3.3 Restart ceph-dashboard

ceph mgr module disable dashboard
ceph mgr module enable dashboard

3.4 Configure the IP and port

ceph config set mgr mgr/dashboard/$name/server_addr $IP
ceph config set mgr mgr/dashboard/$name/server_port $PORT
ceph config set mgr mgr/dashboard/$name/ssl_server_port $PORT

In my case:

ceph config set mgr mgr/dashboard/ceph71/server_addr 192.168.100.71
ceph config set mgr mgr/dashboard/ceph71/server_port 8080
ceph config set mgr mgr/dashboard/ceph71/ssl_server_port 8443

3.5 Create a dashboard user

ceph dashboard ac-user-create <username> <password> administrator

In my case (using the older set-login-credentials command, which likewise sets up an administrator login):

[root@ceph71 my-cluster]# ceph dashboard set-login-credentials tinychen tinychen