
k8s Notes 014: DaemonSet Deployment

 涅槃沉殇 2018-01-05
A DaemonSet runs one copy of a pod (a daemon) on all Nodes of a k8s cluster, or on a selected subset of them. When a Node joins the cluster, the DaemonSet schedules the pod onto it; when a Node is removed from the cluster, the DaemonSet deletes that Node's pod.
DaemonSets are typically used to deploy services such as log collection, monitoring, and distributed storage.
Below, we deploy GlusterFS distributed storage in a k8s cluster.
1. The GlusterFS management service container must run in privileged mode, so the apiserver and kubelet need the startup flag --allow-privileged=true; we already configured this when deploying flannel.
2. Install the GlusterFS client on each Node planned to run GlusterFS:
[root@k8s-node01 ~]# yum -y install glusterfs glusterfs-fuse
3. Label the Nodes that will run the GlusterFS pods, so the pods can be scheduled onto them later:
[root@k8s-node01 ~]# kubectl label node 172.18.0.144 storagenode=glusterfs
node "172.18.0.144" labeled
[root@k8s-node01 ~]# kubectl label node 172.18.0.145 storagenode=glusterfs
node "172.18.0.145" labeled
[root@k8s-node01 ~]# kubectl label node 172.18.0.146 storagenode=glusterfs
node "172.18.0.146" labeled
Check:
[root@k8s-node01 ~]# kubectl get nodes -l=storagenode=glusterfs
NAME           STATUS    AGE       VERSION
172.18.0.144   Ready     34d       v1.7.5
172.18.0.145   Ready     33d       v1.7.5
172.18.0.146   Ready     33d       v1.7.5
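Labeling nodes one by one gets tedious as the node count grows. A hedged convenience sketch: generate the kubectl label commands for each storage node in one loop (the node addresses are the ones used in this article). The commands are printed for review; pipe the output to sh to actually run them against a cluster.

```shell
# Build the label commands for all storage nodes in one pass.
NODES="172.18.0.144 172.18.0.145 172.18.0.146"
CMDS=""
for n in $NODES; do
  CMDS="${CMDS}kubectl label node $n storagenode=glusterfs
"
done
printf '%s' "$CMDS"   # review first; pipe to "sh" to apply
```

To change an existing label value, kubectl additionally needs the --overwrite flag; to remove the label, use `kubectl label node <node> storagenode-`.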
4. Write the DaemonSet definition file:
[root@k8s-master01 glusterfs]# vim glusterfs-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        name: glusterfs
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: /var/lib/heketi
        - name: glusterfs-run
          mountPath: /run
        - name: glusterfs-lvm
          mountPath: /run/lvm
        - name: glusterfs-etc
          mountPath: /etc/glusterfs
        - name: glusterfs-log
          mountPath: /var/log/glusterfs
        - name: glusterfs-config
          mountPath: /var/lib/glusterd
        - name: glusterfs-dev
          mountPath: /dev
        - name: glusterfs-misc
          mountPath: /var/lib/misc/glusterfsd
        - name: glusterfs-cgroup
          mountPath: /sys/fs/cgroup
        - name: glusterfs-ssl
          mountPath: /etc/ssl
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: /var/lib/heketi
      - name: glusterfs-run
        hostPath:
          path: /run
      - name: glusterfs-lvm
        hostPath:
          path: /run/lvm
      - name: glusterfs-etc
        hostPath:
          path: /etc/glusterfs
      - name: glusterfs-log
        hostPath:
          path: /var/log/glusterfs
      - name: glusterfs-config
        hostPath:
          path: /var/lib/glusterd
      - name: glusterfs-dev
        hostPath:
          path: /dev
      - name: glusterfs-misc
        hostPath:
          path: /var/lib/misc/glusterfsd
      - name: glusterfs-cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: glusterfs-ssl
        hostPath:
          path: /etc/ssl
5. Create the DaemonSet from the definition file:
[root@k8s-master01 glusterfs]# kubectl create -f glusterfs-ds.yaml
daemonset "glusterfs" created
6. Inspect the GlusterFS DaemonSet:
[root@k8s-master01 glusterfs]# kubectl get daemonset
NAME        DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR           AGE
glusterfs   3         3         0         3            0           storagenode=glusterfs   17s
[root@k8s-master01 glusterfs]# kubectl describe daemonset glusterfs
Name:           glusterfs
Selector:       glusterfs-node=pod
Node-Selector:  storagenode=glusterfs
Labels:         glusterfs=daemonset
Annotations:    description=GlusterFS DaemonSet
                tags=glusterfs
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:    0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       glusterfs-node=pod
  Containers:
   glusterfs:
    Image:      gluster/gluster-centos:latest
    Port:       <none>
    Liveness:   exec [/bin/bash -c systemctl status glusterd.service] delay=60s timeout=3s period=10s #success=1 #failure=3
    Readiness:  exec [/bin/bash -c systemctl status glusterd.service] delay=60s timeout=3s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /dev from glusterfs-dev (rw)
      /etc/glusterfs from glusterfs-etc (rw)
      /etc/ssl from glusterfs-ssl (ro)
      /run from glusterfs-run (rw)
      /run/lvm from glusterfs-lvm (rw)
      /sys/fs/cgroup from glusterfs-cgroup (rw)
      /var/lib/glusterd from glusterfs-config (rw)
      /var/lib/heketi from glusterfs-heketi (rw)
      /var/lib/misc/glusterfsd from glusterfs-misc (rw)
      /var/log/glusterfs from glusterfs-log (rw)
  Volumes:
   glusterfs-heketi:
    Type:       HostPath (bare host directory volume)
    Path:       /var/lib/heketi
   glusterfs-run:
    Type:       HostPath (bare host directory volume)
    Path:       /run
   glusterfs-lvm:
    Type:       HostPath (bare host directory volume)
    Path:       /run/lvm
   glusterfs-etc:
    Type:       HostPath (bare host directory volume)
    Path:       /etc/glusterfs
   glusterfs-log:
    Type:       HostPath (bare host directory volume)
    Path:       /var/log/glusterfs
   glusterfs-config:
    Type:       HostPath (bare host directory volume)
    Path:       /var/lib/glusterd
   glusterfs-dev:
    Type:       HostPath (bare host directory volume)
    Path:       /dev
   glusterfs-misc:
    Type:       HostPath (bare host directory volume)
    Path:       /var/lib/misc/glusterfsd
   glusterfs-cgroup:
    Type:       HostPath (bare host directory volume)
    Path:       /sys/fs/cgroup
   glusterfs-ssl:
    Type:       HostPath (bare host directory volume)
    Path:       /etc/ssl
Events:
  FirstSeen  LastSeen  Count  From        SubObjectPath  Type    Reason            Message
  ---------  --------  -----  ----        -------------  ----    ------            -------
  28s        28s       1      daemon-set                 Normal  SuccessfulCreate  Created pod: glusterfs-wfjmp
  28s        28s       1      daemon-set                 Normal  SuccessfulCreate  Created pod: glusterfs-ls248
  28s        28s       1      daemon-set                 Normal  SuccessfulCreate  Created pod: glusterfs-kq5v7
[root@k8s-master01 glusterfs]# kubectl get pods -l=glusterfs-node=pod
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-kq5v7   1/1       Running   0          9m
glusterfs-ls248   1/1       Running   0          9m
glusterfs-wfjmp   1/1       Running   0          9m
7. Create the Heketi service
In the steps above we successfully deployed three GlusterFS pods with a DaemonSet; next, we need to form them into a GlusterFS cluster.
Heketi is a framework that provides a RESTful API for managing GlusterFS volumes.
7.1 Create a ServiceAccount object for heketi:
[root@k8s-master01 glusterfs]# vim heketi-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
[root@k8s-master01 glusterfs]# kubectl create -f heketi-sa.yaml
serviceaccount "heketi-service-account" created
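A hedged aside not in the original article: on clusters with RBAC enabled, this service account also needs permission to run commands inside the GlusterFS pods (Heketi's kubernetes executor uses the API server's exec facility). A ClusterRoleBinding along the following lines is a common way to grant it; the binding name is illustrative, and "default" assumes the namespace used here.

```shell
# Grant the heketi service account exec rights via the built-in "edit" role.
# Printed for review; run the command against a live cluster to apply it.
BIND_CMD='kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account'
echo "$BIND_CMD"
```

If your cluster (like some 1.7-era setups) does not enforce RBAC, this step can be skipped.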
7.2 Deploy the Heketi service:
[root@k8s-master01 glusterfs]# vim heketi-dp-svc.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-deployment
    deploy-heketi: heketi-deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  template:
    metadata:
      name: deploy-heketi
      labels:
        name: deploy-heketi
        glusterfs: heketi-pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - name: deploy-heketi
        image: heketi/heketi:dev
        env:
        - name: HEKETI_EXECUTOR
          value: "kubernetes"
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: "14"
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: /var/lib/heketi
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: /hello
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: /hello
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: /heketi-data
---
apiVersion: v1
kind: Service
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    name: deploy-heketi
  ports:
  - name: deploy-heketi
    port: 8080
    targetPort: 8080
[root@k8s-master01 glusterfs]# kubectl create -f heketi-dp-svc.yaml
deployment "deploy-heketi" created
service "deploy-heketi" created
[root@k8s-master01 glusterfs]# kubectl get deploy
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy-heketi   1         1         1            1           1m
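Before moving on, the Heketi API itself can be checked via the /hello endpoint, the same path the readiness and liveness probes use. A minimal sketch, assuming the Service sits in the default namespace so its in-cluster DNS name is deploy-heketi.default.svc:

```shell
# Build the health-check request against the deploy-heketi Service.
# Printed for review; run the curl from a pod inside the cluster.
HEKETI_URL="http://deploy-heketi.default.svc:8080"
echo "curl ${HEKETI_URL}/hello"
```

A healthy Heketi server answers this request with a short greeting; if it does not respond, check the pod logs before continuing.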
8. Configure the GlusterFS cluster in Heketi
We first need to define a JSON file describing the GlusterFS nodes, devices, and other details; Heketi uses this file to create and manage the GlusterFS cluster.
8.1 Create the topology.json file
Below, manage is the hostname, storage is the IP address, and devices lists raw block devices that have no filesystem on them (multiple disks per node are allowed).
[root@k8s-master01 glusterfs]# vim topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node01"
              ],
              "storage": [
                "172.18.0.144"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node02"
              ],
              "storage": [
                "172.18.0.145"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-master03"
              ],
              "storage": [
                "172.18.0.146"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}
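A stray comma or bracket in topology.json makes the later heketi-cli load fail, so it is worth syntax-checking the file first. A hedged sketch: a single-node topology (values taken from this article) is written to a scratch file and pretty-printed with Python's stdlib JSON tool, which exits non-zero on invalid JSON.

```shell
# Write a minimal one-node topology and validate its JSON syntax.
cat > /tmp/topology-check.json <<'EOF'
{"clusters": [{"nodes": [{"node": {"hostnames": {"manage": ["k8s-node01"],
 "storage": ["172.18.0.144"]}, "zone": 1}, "devices": ["/dev/sdc"]}]}]}
EOF
python3 -m json.tool /tmp/topology-check.json
```

Run the same check against your real topology.json before copying it into the container.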
8.2 Copy the JSON file into the heketi container:
[root@k8s-master01 glusterfs]# kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
deploy-heketi-2475378450-9010c   1/1       Running   0          36m
[root@k8s-master01 glusterfs]# kubectl cp topology.json deploy-heketi-2475378450-9010c:/
8.3 Enter the Heketi container and create the GlusterFS cluster with the heketi-cli command-line tool:
[root@k8s-master01 glusterfs]# kubectl exec -ti deploy-heketi-2475378450-9010c /bin/bash
[root@deploy-heketi-2475378450-9010c /]# heketi-cli topology load --json=topology.json