
The Future of Distributed Storage: Installing a Ceph 14.2.5 Cluster

2020-05-09

Ceph 14.2.5 Cluster Install

  • Pattern: Ceph Cluster Install
  • Author: Aleon

1. This article assumes some prior familiarity with Ceph

2. The deployment is done manually, to build a deeper understanding of Ceph

一、Environment Information

1、Before you start:

A. This installation follows the official documentation

B. For experimental testing only

C. Ceph official website

2、Software:

Ceph version: 14.2.5

Ceph: Ceph is an open-source, massively scalable, software-defined storage system which provides object, block and file system storage from a single clustered platform. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available. The data is replicated, making it fault tolerant. Ceph runs on commodity hardware, and the system is designed to be both self-healing and self-managing.

3、Machines:

Hostname  IP               System      Config
admin     192.168.184.131  CentOS 7.7  1C2G && disk*2 && net*1
node1     192.168.184.132  CentOS 7.7  1C2G && disk*2 && net*1
node2     192.168.184.133  CentOS 7.7  1C2G && disk*2 && net*1
node3     192.168.184.134  CentOS 7.7  1C2G && disk*2 && net*1

二、Host Configuration

1、Basic configuration (all 4 hosts)

# Stop firewalld
systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config \
&& setenforce 0

# Install the ntp server
timedatectl set-timezone Asia/Shanghai \
&& yum install ntp

# Configure ntp on admin
vi /etc/ntp.conf
restrict 192.168.184.131 nomodify notrap
server cn.ntp.org.cn iburst
systemctl enable ntpd && systemctl restart ntpd

# Configure ntp on node*3
vi /etc/ntp.conf
server 192.168.184.131 iburst
systemctl enable ntpd && systemctl restart ntpd

# Configure hosts
cat << EOF >> /etc/hosts
192.168.184.131 admin.example.com admin
192.168.184.132 node1.example.com node1
192.168.184.133 node2.example.com node2
192.168.184.134 node3.example.com node3
EOF
for ip in admin node1 node2 node3
do
scp /etc/hosts $ip:/etc/
done

# Configure auth
ssh-keygen
for ip in admin node1 node2 node3
do
ssh-copy-id root@$ip
done

# Configure the repo
yum -y install epel-release yum-plugin-priorities https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
sed -i -e 's/enabled=1/enabled=1\npriority=1/g' /etc/yum.repos.d/ceph.repo
cat << EOF >> /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
EOF
for ip in admin node1 node2 node3
do
scp /etc/yum.repos.d/ceph.repo $ip:/etc/yum.repos.d/
done
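
Before moving on, it is worth confirming that the basics took effect. A minimal sanity check, not part of the original walkthrough, assuming the hosts file and SSH keys were distributed as above:

# Check hostname resolution, passwordless SSH and the NTP peer list on every host
for ip in admin node1 node2 node3
do
ssh $ip 'hostname && ntpq -p | head -3'
done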

2、Install Ceph

for ip in admin node1 node2 node3
do
ssh $ip yum -y install ceph ceph-radosgw
done
for ip in admin node1 node2 node3
do
ssh $ip ceph -v
done

三、Ceph Configuration

1、Mon Configuration

# A. Add the ceph configuration file
vi /etc/ceph/ceph.conf

[global]
fsid = 497cea05-5471-4a1a-9a4d-c86974b27d49
mon initial members = node1
mon host = 192.168.184.132
public network = 192.168.184.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
# Number of replicas
osd pool default size = 3
# Minimum number of replicas
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
osd_mkfs_type = xfs
max mds = 5
mds max file size = 100000000000000
mds cache size = 1000000
# After an OSD has been down for 900s, evict it from the cluster and remap its data to other nodes
mon osd down out interval = 900
[mon]
# Allow 0.5s of clock drift (the default is 0.05s); with heterogeneous PCs in the cluster the drift always exceeds 0.05s, so raise the limit to make synchronization easier
mon clock drift allowed = .50

# B. Create the mon keyring
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

# C. Create the admin keyring
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

# D. Create the bootstrap-osd keyring
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'

# E. Import the keyrings into ceph.mon.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

# F. Create the monmap from the hostname, IP address and FSID
monmaptool --create --add node1 192.168.184.132 --fsid 497cea05-5471-4a1a-9a4d-c86974b27d49 /tmp/monmap

# G. Create the default data dir
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node1

# H. Hand the keyring over to the ceph user
chown ceph.ceph /tmp/ceph.mon.keyring

# I. Initialize the mon
sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ll /var/lib/ceph/mon/ceph-node1

# J. Create an empty "done" file to avoid re-initialization
sudo touch /var/lib/ceph/mon/ceph-node1/done

# K. Enable and start ceph-mon
systemctl enable ceph-mon@node1 \
&& systemctl restart ceph-mon@node1 \
&& systemctl status ceph-mon@node1 -l \
&& ceph -s

# L. Copy the keyring files to the cluster hosts
for ip in admin node1 node2 node3
do
scp /etc/ceph/* root@$ip:/etc/ceph/
done
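
The pg num of 333 follows the rule of thumb from the official docs, total PGs ≈ (OSDs × 100) / replica count, worked out for roughly 10 OSDs (10 × 100 / 3 ≈ 333); with only a few OSDs, a smaller power of two such as 128 is also reasonable. Once the first mon is running, a quick check (a sketch; HEALTH_WARN is expected until the other mons and the OSDs join):

ceph mon stat
ceph quorum_status --format json-pretty
ceph -s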

2、Add the other mons

# A. Create the default dir
ssh node2 sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node2
# B. Export the mon keyring to tmp
ssh node2 ceph auth get mon. -o /tmp/ceph.mon.keyring
# C. Get the monmap
ssh node2 ceph mon getmap -o /tmp/ceph.mon.map
# D. Change the file ownership
ssh node2 chown ceph.ceph /tmp/ceph.mon.keyring
# E. Initialize the mon
ssh node2 sudo -u ceph ceph-mon --mkfs -i node2 --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring
# F. Start the mon node
ssh node2 systemctl start ceph-mon@node2
ssh node2 systemctl enable ceph-mon@node2 && ceph -s
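
The same steps apply to node3. Two mons cannot keep quorum if either one fails, so an odd number (here three) is the usual target. As a sketch, once node3 has joined:

# All three mons should show up in the quorum list
ceph quorum_status --format json-pretty | grep quorum_names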


3、Configure the OSDs

# A. Create the Ceph data & journal disk with bluestore
sudo ceph-volume lvm create --data /dev/sdb
# B. List the OSD number
sudo ceph-volume lvm list
# C. Start the OSD
systemctl enable ceph-osd@0 && systemctl start ceph-osd@0
# Note: OSDs on the other hosts first need to fetch the bootstrap-osd keyring
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo ceph-volume lvm create --data /dev/sdb
sudo ceph-volume lvm list
systemctl start ceph-osd@1 && systemctl enable ceph-osd@1
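
A short verification sketch after all OSDs are created, confirming each one is up and in and that the CRUSH tree looks as expected:

ceph osd stat
ceph osd tree
ceph -s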

4、Install Mgr on admin

# A. Create the keyring
ceph auth get-or-create mgr.admin mon 'allow profile mgr' osd 'allow *' mds 'allow *'
# B. Create the dir as the ceph user
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-admin/
# C. Write the key into the dir
ceph auth get mgr.admin -o /var/lib/ceph/mgr/ceph-admin/keyring
# D. Start the Mgr
systemctl enable ceph-mgr@admin && systemctl restart ceph-mgr@admin
# Note: see the screenshot of the result
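
With the mgr active, the Nautilus dashboard can optionally be enabled. This is a sketch beyond the original guide, assuming the separate ceph-mgr-dashboard package is available from the same repo (newer 14.2.x releases expect the password from a file via -i rather than inline):

yum -y install ceph-mgr-dashboard
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard ac-user-create admin password administrator
# Shows the dashboard URL served by the active mgr
ceph mgr services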

四、Use the Ceph block device

# A. Create the OSD pool
ceph osd pool create rbd 128
# B. Check
ceph osd lspools
# C. Initialize the pool
rbd pool init rbd
# D. Create an rbd disk
rbd create disk01 --size 2G --image-feature layering && rbd ls -l
# E. Map the rbd block device locally
sudo rbd map disk01
# F. Show the mapping
rbd showmapped
# G. Format the disk
sudo mkfs.xfs /dev/rbd0
# H. Mount the disk
sudo mount /dev/rbd0 /mnt && df -Th
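
To detach the image cleanly afterwards, the reverse steps look like this (a sketch; unmount before unmapping):

sudo umount /mnt
sudo rbd unmap /dev/rbd0
# Only if the image is no longer needed
rbd rm disk01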

五、Use the filesystem (CephFS)

# A. Create the MDS dir on node1
sudo -u ceph mkdir /var/lib/ceph/mds/ceph-node1
# B. Create the MDS keyring
ceph auth get-or-create mds.node1 osd 'allow rwx' mds 'allow' mon 'allow profile mds'
# C. Write the MDS keyring into the dir
ceph auth get mds.node1 -o /var/lib/ceph/mds/ceph-node1/keyring
# D. Configure ceph.conf
cat << EOF >> /etc/ceph/ceph.conf
[mds.node1]
host = node1
EOF
# E. Start the MDS
systemctl enable ceph-mds@node1 && systemctl start ceph-mds@node1
# F. Create the pools and the filesystem
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls
ceph mds stat
# G. Mount CephFS on a client
yum -y install ceph-fuse
ssh node1 'sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring' > admin.key
chmod 600 admin.key
mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key && df -Th
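
Step G installs ceph-fuse but mounts with the kernel client; as an alternative sketch, the same tree can be mounted through FUSE, assuming /etc/ceph/ceph.conf and the admin keyring are present on the client:

mkdir -p /mnt/cephfs
ceph-fuse -m node1:6789 /mnt/cephfs && df -Th /mnt/cephfs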

