Ceph Test Cluster Setup Notes

Created Thu, 21 Dec 2023 21:54:15 +0800 Modified Mon, 28 Oct 2024 12:22:06 +0800

Notes on the steps for manually building a three-node Ceph cluster, covering RBD, CephFS, and RGW.

Ceph deployment steps

Official documentation
Reference video

  • This deployment uses manual installation via yum. When I was new to Linux I deployed with ceph-deploy; after getting more familiar, I found that manual deployment is the most convenient to maintain.

Environment overview

  • OS: CentOS Linux release 7.9.2009, kernel 5.4.248-1.el7.elrepo.x86_64.
  • The stock CentOS kernel is too old and in some cases does not support newer Ceph features well, so upgrade to the latest stable kernel before installing.
  • Ceph version: 14 (Nautilus), the release line supported on CentOS 7, at point release 14.2.22, which fixes a lot of bugs and is a fairly stable, solid version.
  • Virtual disks should be larger than 50G; I have run into all sorts of strange problems creating OSDs on disks smaller than 50G.
  • VM memory should be more than 4G, since the Ceph daemons are fairly heavy and consume quite a bit of memory.
  • This setup uses three Linux VMs, configured as follows:
Hostname IP CPU/RAM Disks
ceph1 192.168.230.121 4C8G 2 * 100G
ceph2 192.168.230.122 4C8G 2 * 100G
ceph3 192.168.230.123 4C8G 2 * 100G

Basic environment preparation (run on all three nodes)

  • Disable SELinux and the firewall, configure a static IP and the hosts file, and update the kernel.
# Disable SELinux by setting SELINUX=disabled; since the kernel update will reboot the node anyway, there is no need to also change it online
# The online command would be: setenforce 0
[root@ceph1 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled 
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
# Disable the firewall. Never do this in production.
systemctl stop firewalld.service
systemctl disable firewalld.service
# Change the NIC configuration to a static IP (CentOS VMs default to DHCP). Adjust the details to your environment
[root@ceph1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.230.121
NETMASK=255.255.255.0
GATEWAY=192.168.230.2
DNS1=223.5.5.5
NM_CONTROLLED=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens33
UUID=7938cf9d-39ac-4ce5-ab7e-2c81d7136e7a
DEVICE=ens33
ONBOOT=yes

# Edit the hosts file and add the three resolution entries for the ceph nodes
[root@ceph1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.230.121 ceph1
192.168.230.122 ceph2
192.168.230.123 ceph3
# Update the Linux kernel
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org # import the GPG key
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm # set up the elrepo repository
yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available # list the available kernels

[root@ceph1 ~]# yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available
Loaded plugins: fastestmirror, langpacks
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Determining fastest mirrors
 * elrepo-kernel: hkg.mirror.rackspace.com
Available Packages
kernel-lt-devel.x86_64                                                     5.4.248-1.el7.elrepo                                           elrepo-kernel
kernel-lt-doc.noarch                                                       5.4.248-1.el7.elrepo                                           elrepo-kernel
kernel-lt-headers.x86_64                                                   5.4.248-1.el7.elrepo                                           elrepo-kernel
kernel-lt-tools.x86_64                                                     5.4.248-1.el7.elrepo                                           elrepo-kernel
kernel-lt-tools-libs.x86_64                                                5.4.248-1.el7.elrepo                                           elrepo-kernel
kernel-lt-tools-libs-devel.x86_64                                          5.4.248-1.el7.elrepo                                           elrepo-kernel
kernel-ml.x86_64                                                           6.3.9-1.el7.elrepo                                             elrepo-kernel
kernel-ml-devel.x86_64                                                     6.3.9-1.el7.elrepo                                             elrepo-kernel
kernel-ml-doc.noarch                                                       6.3.9-1.el7.elrepo                                             elrepo-kernel
kernel-ml-headers.x86_64                                                   6.3.9-1.el7.elrepo                                             elrepo-kernel
kernel-ml-tools.x86_64                                                     6.3.9-1.el7.elrepo                                             elrepo-kernel
kernel-ml-tools-libs.x86_64                                                6.3.9-1.el7.elrepo                                             elrepo-kernel
kernel-ml-tools-libs-devel.x86_64                                          6.3.9-1.el7.elrepo                                             elrepo-kernel
perf.x86_64                                                                5.4.248-1.el7.elrepo                                           elrepo-kernel
python-perf.x86_64                                                         5.4.248-1.el7.elrepo                                           elrepo-kernel

yum  --enablerepo=elrepo-kernel  install  -y  kernel-lt # install the lt (long-term, stable) kernel
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /boot/grub2/grub.cfg # list the installed kernel boot entries

[root@ceph1 ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /boot/grub2/grub.cfg
0 : CentOS Linux (5.4.248-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-c80f8cf7a34d4b02ac5bcb9a789f1870) 7 (Core)

grub2-set-default 0 # set the default boot kernel
grub2-editenv list # check the current default boot kernel

[root@ceph1 ~]# grub2-editenv list
saved_entry=0
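
# After setting the default entry, reboot and confirm the new kernel is actually running
reboot
uname -r # should now print 5.4.248-1.el7.elrepo.x86_64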
  • Update the yum repositories
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

[root@ceph1 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
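
# Refresh the yum metadata so the newly added repos are picked up (optional, avoids stale-cache issues)
yum clean all
yum makecache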

yum list | grep ceph # list the installable ceph versions

[root@ceph1 ~]# yum list | grep ceph
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
ceph.x86_64                              2:14.2.22-0.el7               @Ceph
ceph-base.x86_64                         2:14.2.22-0.el7               @Ceph
ceph-common.x86_64                       2:14.2.22-0.el7               @Ceph
ceph-mds.x86_64                          2:14.2.22-0.el7               @Ceph
ceph-mgr.x86_64                          2:14.2.22-0.el7               @Ceph
...

Install the Ceph packages and add the ceph.conf configuration file

yum install ceph -y # install the ceph packages
uuidgen # generate a random uuid
# Write the ceph.conf file and sync it to all three nodes (see the scp example after the file)
[root@ceph1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 3e07d43f-688e-4284-bfb7-3e6ed5d3b77b  # the uuid generated above
mon initial members = ceph1, ceph2, ceph3 # mon names
mon host = 192.168.230.121,192.168.230.122,192.168.230.123 # mon node IPs
public network = 192.168.230.1/24 # public (client-facing) network; the VMs have only one NIC, so it is shared with the cluster network
cluster network = 192.168.230.1/24 # cluster (replication) network
auth cluster required = cephx # use cephx authentication
auth service required = cephx
auth client required = cephx
osd crush chooseleaf type = 1 # host-level failure domain; with 0, replicas are only spread across different OSDs, possibly on the same host
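
# One simple way to sync the file to the other two nodes (assumes root SSH access between the nodes)
scp /etc/ceph/ceph.conf ceph2:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.conf ceph3:/etc/ceph/ceph.conf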

Deploy the mons

# Create the mon keyring; run on ceph1
sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

# Create the client.admin keyring; run on ceph1
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

# Create the bootstrap-osd keyring; run on ceph1
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'

# Copy the three generated keyring files to the other nodes
scp /tmp/ceph.mon.keyring <hostname>:/tmp/
scp /etc/ceph/ceph.client.admin.keyring <hostname>:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring <hostname>:/var/lib/ceph/bootstrap-osd/

# Import the generated keyrings into the mon keyring; run on all nodes
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

# Change ownership of the file to the ceph user; run on all nodes
sudo chown ceph:ceph /tmp/ceph.mon.keyring

# Generate the mon map; run on all nodes
monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
# Example:
monmaptool --create --add ceph1 192.168.230.121 --add ceph2 192.168.230.122 --add ceph3 192.168.230.123 --fsid 3e07d43f-688e-4284-bfb7-3e6ed5d3b77b /tmp/monmap
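# Optional sanity check: print the generated monmap; it should list the fsid and all three mons
monmaptool --print /tmp/monmap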

# Create the mon directory and initialize the mon; run on all nodes
sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
chown -R ceph:ceph /var/lib/ceph/mon/{cluster-name}-{hostname}
sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
# Example:
sudo -u ceph ceph-mon --mkfs -i ceph1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

# Start the mon and enable it at boot; run on all nodes, replacing ceph1 with each node's own hostname
systemctl start ceph-mon@ceph1
systemctl enable ceph-mon@ceph1

# Post-start check: run `ceph -s` on any node; all three mons should be up
# The output below was captured after every service had already been deployed
[root@ceph3 ~]# ceph -s
  cluster:
    id:     3e07d43f-688e-4284-bfb7-3e6ed5d3b77b
    health: HEALTH_WARN
            noout flag(s) set
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 8m) # if only the mons have been deployed so far, seeing all three join the cluster here means everything is fine
    mgr: ceph2(active, since 100m), standbys: ceph3, ceph1
    mds: cephfs:1 {0=ceph2=up:active} 2 up:standby
    osd: 3 osds: 3 up (since 100m), 3 in (since 6M)
         flags noout

  data:
    pools:   3 pools, 40 pgs
    objects: 116 objects, 304 MiB
    usage:   3.9 GiB used, 296 GiB / 300 GiB avail
    pgs:     40 active+clean
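
# The "insecure global_id reclaim" warning is specific to 14.2.20+; once all daemons and clients
# run patched versions it can be cleared with the setting below (optional)
ceph config set mon auth_allow_insecure_global_id_reclaim false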

Deploy the mgrs (run on all nodes)

# Create the keyring
mkdir /var/lib/ceph/mgr/ceph-ceph1 # adjust the directory name per host; the format is always /var/lib/ceph/mgr/ceph-{hostname}, same for the steps below, not repeated again
ceph auth get client.bootstrap-mgr -o /etc/ceph/ceph.client.bootstrap-mgr.keyring
ceph --cluster ceph --name client.bootstrap-mgr --keyring /etc/ceph/ceph.client.bootstrap-mgr.keyring auth get-or-create mgr.ceph1 mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-ceph1/keyring # adjust to the node's hostname

# Start the service and enable it at boot
# These two marker files need to be touched so the daemon is started automatically at boot
touch /var/lib/ceph/mgr/ceph-ceph1/done
touch /var/lib/ceph/mgr/ceph-ceph1/systemd
chown ceph:ceph -R /var/lib/ceph/mgr/ceph-ceph1/
systemctl start ceph-mgr@ceph1
systemctl enable ceph-mgr@ceph1

# Post-start check: run `ceph -s` on any node; all three mgrs should be up
# The output below was captured after every service had already been deployed
[root@ceph3 ~]# ceph -s
  cluster:
    id:     3e07d43f-688e-4284-bfb7-3e6ed5d3b77b
    health: HEALTH_WARN
            noout flag(s) set
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 8m) 
    mgr: ceph2(active, since 100m), standbys: ceph3, ceph1 # seeing all three mgrs join the cluster here (one active, two standby) means everything is fine
    mds: cephfs:1 {0=ceph2=up:active} 2 up:standby
    osd: 3 osds: 3 up (since 100m), 3 in (since 6M)
         flags noout

  data:
    pools:   3 pools, 40 pgs
    objects: 116 objects, 304 MiB
    usage:   3.9 GiB used, 296 GiB / 300 GiB avail
    pgs:     40 active+clean

Deploy the OSDs (run on all nodes)

# There are many ways to deploy OSDs; only the simplest one is used here
# Run this command for every data disk on every node, replacing the --data argument with the device path
ceph-volume lvm create --data /dev/sdc

[root@ceph1 ~]# ceph-volume lvm create --data /dev/sdc
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b89556fe-9fb7-4a07-8417-8bf684bfc486
Running command: /usr/sbin/vgcreate --force --yes ceph-0d5de9c1-c3cf-4490-86bc-2855018c41a3 /dev/sdc
 stdout: Physical volume "/dev/sdc" successfully created.
 stdout: Volume group "ceph-0d5de9c1-c3cf-4490-86bc-2855018c41a3" successfully created
Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-b89556fe-9fb7-4a07-8417-8bf684bfc486 ceph-0d5de9c1-c3cf-4490-86bc-2855018c41a3
 stdout: Logical volume "osd-block-b89556fe-9fb7-4a07-8417-8bf684bfc486" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-0d5de9c1-c3cf-4490-86bc-2855018c41a3/osd-block-b89556fe-9fb7-4a07-8417-8bf684bfc486
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
Running command: /usr/bin/ln -s /dev/ceph-0d5de9c1-c3cf-4490-86bc-2855018c41a3/osd-block-b89556fe-9fb7-4a07-8417-8bf684bfc486 /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
 stderr: 2023-12-21 19:57:06.731 7fa06a862700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2023-12-21 19:57:06.731 7fa06a862700 -1 AuthRegistry(0x7fa0640662f8) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
 stderr: got monmap epoch 2
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQARKIRlbJqUMRAAn1sypgT7KD+5jpS/MVKU8A==
 stdout: creating /var/lib/ceph/osd/ceph-3/keyring
added entity osd.3 auth(key=AQARKIRlbJqUMRAAn1sypgT7KD+5jpS/MVKU8A==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid b89556fe-9fb7-4a07-8417-8bf684bfc486 --setuser ceph --setgroup ceph
 stderr: 2023-12-21 19:57:07.240 7f91e4ec8a80 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
--> ceph-volume lvm prepare successful for: /dev/sdc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0d5de9c1-c3cf-4490-86bc-2855018c41a3/osd-block-b89556fe-9fb7-4a07-8417-8bf684bfc486 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-0d5de9c1-c3cf-4490-86bc-2855018c41a3/osd-block-b89556fe-9fb7-4a07-8417-8bf684bfc486 /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/systemctl enable ceph-volume@lvm-3-b89556fe-9fb7-4a07-8417-8bf684bfc486
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-b89556fe-9fb7-4a07-8417-8bf684bfc486.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@3
 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@3
--> ceph-volume lvm activate successful for osd ID: 3
--> ceph-volume lvm create successful for: /dev/sdc

# Seeing "successful" at the end means the OSD was created

# Run ceph osd tree on any node to check; one disk on ceph2 is left unused for a later expansion test (see the sketch after the output)
[root@ceph2 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.48798 root default
-5       0.19499     host ceph1
 0   hdd 0.09799         osd.0      up  1.00000 1.00000
 3   hdd 0.09799         osd.3      up  1.00000 1.00000
-7       0.09799     host ceph2
 1   hdd 0.09799         osd.1      up  1.00000 1.00000
-3       0.19499     host ceph3
 2   hdd 0.09799         osd.2      up  1.00000 1.00000
 4   hdd 0.09799         osd.4      up  1.00000 1.00000
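
# When the spare disk on ceph2 is needed for the expansion test, it can be added the same way
# (the device path /dev/sdc is an assumption; confirm with lsblk first)
ceph-volume lvm create --data /dev/sdc
ceph osd tree # the new OSD should then show up under host ceph2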

Create an RBD image

# Create the pool; the trailing number is the pool's PG count. How to pick this value deserves a separate post later.
ceph osd pool create rbd 8

# Run ceph df to check the pool status; I have some other pools here that are omitted for now, seeing the rbd pool is enough
[root@ceph2 ~]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       500 GiB     494 GiB     1.1 GiB      6.1 GiB          1.22
    TOTAL     500 GiB     494 GiB     1.1 GiB      6.1 GiB          1.22

POOLS:
    POOL                          ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
    ...
    rbd                            3       8     296 MiB          93     891 MiB      0.19       156 GiB
    ...

# Disable PG autoscaling on the pool; it is better to control this yourself, the autoscaler is a bit clumsy
ceph osd pool set rbd pg_autoscale_mode off
# Mark the pool's application type as rbd
ceph osd pool application enable rbd rbd
# Create a test image
rbd create img --size 10G
# Map it with rbd-nbd for now rather than the kernel (krbd) path; kernel mapping may require disabling some image features first (see the notes after the lsblk output)
rbd-nbd map rbd/img


[root@ceph3 ~]# rbd ls # check that the image exists
img
[root@ceph3 ~]# rbd-nbd map rbd/img # map the rbd image
/dev/nbd0
[root@ceph3 ~]# lsblk
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb                                                                                                     8:16   0  100G  0 disk
└─ceph--7f221e4c--5f01--4ade--bc4e--5defa7c0ca7b-osd--block--fa66afac--653b--4d3a--ae43--abf1216f8112 253:2    0  100G  0 lvm
sr0                                                                                                    11:0    1 1024M  0 rom
sdc                                                                                                     8:32   0  100G  0 disk
└─ceph--0a327e95--d75e--4db2--b2b3--f51578287b92-osd--block--04315d24--dcf5--4c42--a3fb--7246b0f05d64 253:3    0  100G  0 lvm
nbd0                                                                                                   43:0    0   10G  0 disk # the image is now mapped
sda                                                                                                     8:0    0   50G  0 disk
├─sda2                                                                                                  8:2    0   49G  0 part
│ ├─centos-swap                                                                                       253:1    0    5G  0 lvm  [SWAP]
│ └─centos-root                                                                                       253:0    0   44G  0 lvm  /
└─sda1                                                                                                  8:1    0    1G  0 part /boot
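
# Follow-up sketch (not from the original run): put a filesystem on the mapped device and mount it; the mount point is arbitrary
mkfs.xfs /dev/nbd0
mkdir -p /mnt/rbd
mount /dev/nbd0 /mnt/rbd
# If the kernel (krbd) path is wanted instead of rbd-nbd, image features the running kernel
# does not support usually have to be disabled first, e.g.:
# rbd feature disable img object-map fast-diff deep-flatten
# rbd map rbd/img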

Deploy the MDS (run as many MDS daemons as needed)

  • I deployed an MDS on all three nodes to make failover testing easier later; a single one would also work.
# Configure the mds service, similar to the mgr; adjust the directory to each node's hostname
mkdir -p /var/lib/ceph/mds/ceph-ceph1 # the data directory must exist before creating the keyring
ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-ceph1/keyring --gen-key -n mds.ceph1
ceph auth add mds.ceph1 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-ceph1/keyring
ceph auth get mds.ceph1
touch /var/lib/ceph/mds/ceph-ceph1/systemd
touch /var/lib/ceph/mds/ceph-ceph1/done
chown ceph:ceph -R /var/lib/ceph/mds/ceph-ceph1
systemctl start ceph-mds@ceph1
systemctl enable ceph-mds@ceph1

# Create the cephfs filesystem
ceph osd pool create cephfs_metadata 16 # similar to rbd, but since cephfs is a filesystem, metadata and data are stored separately, so two pools are needed
ceph osd pool create cephfs_data 16
ceph fs new cephfs cephfs_metadata cephfs_data # create the cephfs filesystem
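# Optional checks: the new filesystem and the MDS states should now be visible
ceph fs ls # should list cephfs with cephfs_metadata / cephfs_data
ceph mds stat # should show one active mds and the rest standby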

# Mount cephfs with ceph-fuse
mkdir -p /mnt/cephfs # the mount point must exist first
ceph-fuse /mnt/cephfs/

[root@ceph3 ~]# ceph-fuse /mnt/cephfs/
2023-12-21 21:19:48.354 7fcbd83e7f80 -1 init, newargv = 0x55701292e6a0 newargc=9ceph-fuse[36864]: starting ceph client

ceph-fuse[36864]: starting fuse
[root@ceph3 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G  9.8M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   44G  5.6G   39G  13% /
/dev/sda1               1014M  228M  787M  23% /boot
tmpfs                    3.9G   24K  3.9G   1% /var/lib/ceph/osd/ceph-2
tmpfs                    793M   12K  793M   1% /run/user/42
tmpfs                    793M     0  793M   0% /run/user/0
tmpfs                    3.9G   52K  3.9G   1% /var/lib/ceph/osd/ceph-4
ceph-fuse                157G     0  157G   0% /mnt/cephfs # mounted successfully here
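
# Alternative sketch: kernel mount of the same cephfs (mount point and options are illustrative;
# assumes the client.admin key is available on this node)
mkdir -p /mnt/cephfs-kernel
mount -t ceph 192.168.230.121:6789:/ /mnt/cephfs-kernel -o name=admin,secret=$(ceph auth get-key client.admin)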

Deploy the RGW

  • I deployed RGW on only one node, since RGW has no failover of its own. In production you usually run several RGW instances and load-balance across them with an nginx reverse proxy.
# Add the following to ceph.conf
[client.ceph3]
    host = ceph3 # adjust to the node's hostname
    keyring = /var/lib/ceph/radosgw/ceph-ceph3/keyring # adjust to the node's hostname
    log file = /var/log/ceph/radosgw.ceph3.log # adjust to the node's hostname
    rgw dns name = s3.ceph3.local # pick any domain name you like
    rgw_dynamic_resharding = false
    rgw frontends = civetweb port=80

# Start the radosgw service; as before, adjust the names to your node
mkdir -p /var/lib/ceph/radosgw/ceph-ceph3 # the data directory must exist first
ceph auth get-or-create client.ceph3 osd 'allow rwx' mon 'allow rwx' -o /var/lib/ceph/radosgw/ceph-ceph3/keyring
touch /var/lib/ceph/radosgw/ceph-ceph3/done
touch /var/lib/ceph/radosgw/ceph-ceph3/systemd
chown -R ceph:ceph /var/lib/ceph/radosgw/ceph-ceph3/
systemctl start ceph-radosgw@ceph3
systemctl enable ceph-radosgw@ceph3
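# Optional smoke test: the gateway should answer on port 80 and appear in ceph -s
curl -s http://ceph3:80 # an anonymous request should return a ListAllMyBucketsResult XML document
ceph -s # the services section should now include an rgw daemon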

# Create a user and test access with s3cmd
[root@ceph3 ~]# radosgw-admin user create --uid wzn --display-name=wzn
{
    "user_id": "wzn",
    "display_name": "wzn",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "wzn",
            "access_key": "3W1***WEO", # 记住这两个数值,就是访问S3 的AK/SK
            "secret_key": "mqO***DHq" # 记住这两个数值,就是访问S3 的AK/SK
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
  • Configure s3cmd and test
# s3cmd reads its configuration from .s3cfg in the current user's home directory by default; use -c to point to a different file. The individual options deserve a post of their own (I keep promising a lot of those)
[root@ceph1 ~]# cat .s3cfg 
[default]
access_key = 3W1***WEO # fill in the access key
secret_key = mqO***DHq # fill in the secret key
default_mime_type = binary/octet-stream
enable_multipart = True
multipart_chunk_size_mb = 15
socket_timeout = 300
stop_on_error = False
use_mime_magic = True
verbosity = WARNING
signature_v2 = True
encoding = UTF-8
encrypt = False
host_base = s3.ceph3.local
host_bucket = %(bucket)s.s3.ceph3.local
use_https = False

# Add hosts entries, since s3cmd reaches buckets by domain name
[root@ceph1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.230.121 ceph1
192.168.230.122 ceph2
192.168.230.123 ceph3 s3.ceph3.local test.s3.ceph3.local # add the S3 endpoint and bucket hostnames; the bucket name is test, so its hostname is the <bucket>.<s3_endpoint> combination

# Create a bucket and list it
s3cmd mb s3://test

[root@ceph1 ~]# s3cmd ls
2023-12-20 19:39  s3://test # the bucket has been created
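
# Minimal end-to-end sketch: upload a file, list the bucket, and download the object again
s3cmd put /etc/hosts s3://test/
s3cmd ls s3://test
s3cmd get s3://test/hosts /tmp/hosts.check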