Pod是kubernetes的最小管理单元,在kubernetes中,按照pod的创建方式可以将其分为两类:一类是自主式pod,即由kubernetes直接创建出来的pod,这种pod删除后就没有了,也不会重建;另一类是控制器创建的pod,即通过pod控制器创建出来的pod,这种pod删除之后还会自动重建。
Pod控制器是管理pod的中间层,使用Pod控制器之后,只需要告诉Pod控制器,想要多少个什么样的Pod就可以了,它会创建出满足条件的Pod并确保每一个Pod资源处于用户期望的目标状态。如果Pod资源在运行中出现故障,它会基于指定策略重新编排Pod。
在kubernetes中,有很多类型的pod控制器,每种都有自己适合的场景,常见的有ReplicaSet、Deployment、DaemonSet、Job、CronJob等,下面逐一进行介绍。
ReplicaSet的主要作用是保证一定数量的pod正常运行,它会持续监听这些Pod的运行状态,一旦Pod发生故障,就会重启或重建。同时它还支持对pod数量的扩缩容和镜像版本的升降级。
apiVersion: apps/v1 # 版本号
kind: ReplicaSet # 类型
metadata:
name:
namespace:
labels:
spec:
replicas: <副本数量>
selector: # 选择器,通过它指定该控制器管理哪些pod
matchLabels: # Labels匹配规则
matchExpressions: # Expressions匹配规则
- {key: <labels的key>, operator: <匹配方式>, values: [<labels的value>]}
template: # 模板,当副本数量不足时,会根据下面的模板创建pod副本
metadata:
labels:
spec:
containers:
- name:
image:
ports:
replicas:指定副本数量,其实就是当前rs创建出来的pod的数量,默认为1
selector:选择器,它的作用是建立pod控制器和pod之间的关联关系,采用的Label Selector机制。在pod模板上定义label,在控制器上定义选择器,就可以表明当前控制器能管理哪些pod了
template:模板,就是当前控制器创建pod所使用的模板,里面其实就是前一章学过的pod的定义
创建Pc-Replicaset.yaml文件,内容如下:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: pc-replicaset
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: nginx-pod
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
# 创建rs
[root@master yaml]# kubectl create -f Pc-Replicaset.yaml
replicaset.apps/pc-replicaset created
# 查看rs
# DESIRED:期望副本数量
# CURRENT:当前副本数量
# READY:已经准备好提供服务的副本数量
[root@master yaml]# kubectl get rs pc-replicaset -n default -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
pc-replicaset 3 3 3 2m41s nginx docker.io/library/nginx:1.23.1 app=nginx-pod
# 查看当前控制器创建出来的pod
# 控制器创建的pod的名称是在控制器名称后面拼接了-xxxxx随机码
[root@master yaml]# kubectl get pod -n default
NAME READY STATUS RESTARTS AGE
pc-replicaset-fvjg2 1/1 Running 0 3m53s
pc-replicaset-lzfc2 1/1 Running 0 3m53s
pc-replicaset-q7hrm 1/1 Running 0 3m53s
# 在线编辑配置
# 修改spec.replicas为6即可
[root@master yaml]# kubectl edit rs pc-replicaset -n default
replicaset.apps/pc-replicaset edited
# 查看Pod数量
[root@master yaml]# kubectl get pod -n default
NAME READY STATUS RESTARTS AGE
pc-replicaset-49zl6 1/1 Running 0 67s
pc-replicaset-5ngpl 1/1 Running 0 67s
pc-replicaset-fvjg2 1/1 Running 0 6m45s
pc-replicaset-lzfc2 1/1 Running 0 6m45s
pc-replicaset-pnstd 1/1 Running 0 67s
pc-replicaset-q7hrm 1/1 Running 0 6m45s
# 使用命令
# 使用scale实现扩缩容,--replicas为目标副本数量
[root@master yaml]# kubectl scale rs pc-replicaset --replicas=2 -n default
replicaset.apps/pc-replicaset scaled
# 查看Pod数量
[root@master yaml]# kubectl get pod -n default|grep pc
NAME READY STATUS RESTARTS AGE
pc-replicaset-fvjg2 1/1 Running 0 8m39s
pc-replicaset-q7hrm 1/1 Running 0 8m39s
# 在线编辑配置
# 修改spec.template.spec.containers.image为docker.io/library/nginx:latest即可
[root@master yaml]# kubectl edit rs pc-replicaset -n default
replicaset.apps/pc-replicaset edited
# 查看rs状态
# 镜像版本已经变更了
[root@master yaml]# kubectl get rs -n default -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
pc-replicaset 2 2 2 14m nginx docker.io/library/nginx:latest app=nginx-pod
# 使用命令
# kubectl set image rs rs名称 容器=镜像版本 -n namespace
[root@master yaml]# kubectl set image rs pc-replicaset nginx=docker.io/library/nginx:1.23.1 -n default
replicaset.apps/pc-replicaset image updated
# 再次查看
# 镜像版本已经变更了
[root@master yaml]# kubectl get rs -n default -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
pc-replicaset 2 2 2 17m nginx docker.io/library/nginx:1.23.1 app=nginx-pod
# 使用kubectl delete命令会删除此RS以及它管理的Pod
# 在kubernetes删除RS前,会将RS的replicas调整为0,等待所有的Pod被删除后,再执行RS对象的删除
[root@master yaml]# kubectl delete rs pc-replicaset -n default
replicaset.apps "pc-replicaset" deleted
[root@master yaml]# kubectl get pod -n default -o wide
No resources found in default namespace.
# 如果希望仅仅删除RS对象(保留Pod),可以在使用kubectl delete命令时添加--cascade=orphan选项(旧版本中为--cascade=false,不推荐这样删除)。
[root@master yaml]# kubectl delete rs pc-replicaset -n default --cascade=false
replicaset.apps "pc-replicaset" deleted
[root@master yaml]# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
pc-replicaset-cl82j 1/1 Running 0 75s
pc-replicaset-dslhb 1/1 Running 0 75s
# 也可以使用yaml直接删除(推荐)
[root@master yaml]# kubectl delete -f Pc-Replicaset.yaml
replicaset.apps "pc-replicaset" deleted
kubernetes从V1.2版本开始引入了Deployment控制器。值得一提的是,这种控制器并不直接管理pod,而是通过管理ReplicaSet来间接管理Pod,即:Deployment管理ReplicaSet,ReplicaSet管理Pod。所以Deployment比ReplicaSet功能更加强大。
Deployment主要功能有下面几个:支持ReplicaSet的所有功能;支持发布的暂停、继续;支持滚动更新和版本回退。
apiVersion: apps/v1 # 版本号
kind: Deployment # 类型
metadata: # 元数据
name: # deploy名称
namespace: # 所属命名空间
labels: #标签
spec: # 详情描述
replicas: <副本数量>
revisionHistoryLimit: <保留历史版本数量>
paused: false # 暂停部署,默认是false
progressDeadlineSeconds: 600 # 部署超时时间(s),默认是600
strategy: # 策略
type: <更新策略>
rollingUpdate: # 滚动更新
maxSurge: 30% # 最大额外可以存在的副本数,可以为百分比,也可以为整数
maxUnavailable: 30% # 最大不可用状态的 Pod 的最大值,可以为百分比,也可以为整数
selector: # 选择器,通过它指定该控制器管理哪些pod
matchLabels: # Labels匹配规则
matchExpressions: # Expressions匹配规则
- {key: <labels的key>, operator: <匹配方式>, values: [<labels的value>]}
template: # 模板,当副本数量不足时,会根据下面的模板创建pod副本
metadata:
labels:
spec:
containers:
- name:
image:
ports:
创建Pc-Deployment.yaml,内容如下
apiVersion: apps/v1
kind: Deployment
metadata:
name: pc-deployment
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: nginx-pod
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
# 创建deploy
[root@master yaml]# kubectl create -f Pc-Deployment.yaml --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/pc-deployment created
# 查看deploy
# UP-TO-DATE 最新版本的pod的数量
# AVAILABLE 当前可用的pod的数量
[root@master yaml]# kubectl get deploy pc-deployment -n default
NAME READY UP-TO-DATE AVAILABLE AGE
pc-deployment 3/3 3 3 85s
# 查看rs
# 发现rs的名称是在deploy的名称后面拼接了一个10位的随机串
[root@master yaml]# kubectl get rs -n default
NAME DESIRED CURRENT READY AGE
pc-deployment-6895856946 3 3 3 2m14s
# 查看pod
[root@master yaml]# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
pc-deployment-6895856946-7nbs4 1/1 Running 0 3m30s
pc-deployment-6895856946-g5n6g 1/1 Running 0 3m30s
pc-deployment-6895856946-jkqnm 1/1 Running 0 3m30s
# 命令方式
[root@master yaml]# kubectl scale deploy pc-deployment --replicas=5 -n default
deployment.apps/pc-deployment scaled
# 查看deploy
[root@master yaml]# kubectl get deploy pc-deployment -n default
NAME READY UP-TO-DATE AVAILABLE AGE
pc-deployment 5/5 5 5 3m24s
# 查看pod数量
[root@master yaml]# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
pc-deployment-6895856946-2tpfc 1/1 Running 0 4m2s
pc-deployment-6895856946-5pn96 1/1 Running 0 4m2s
pc-deployment-6895856946-792dj 1/1 Running 0 4m2s
pc-deployment-6895856946-89vrz 1/1 Running 0 117s
pc-deployment-6895856946-hl7pz 1/1 Running 0 117s
# 在线编辑方式
# 修改spec.replicas为3
[root@master yaml]# kubectl edit deploy pc-deployment -n default
deployment.apps/pc-deployment edited
# 查看deploy
[root@master yaml]# kubectl get deploy pc-deployment -n default
NAME READY UP-TO-DATE AVAILABLE AGE
pc-deployment 3/3 3 3 6m18s
# 查看pod数量
[root@master yaml]# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
pc-deployment-6895856946-792dj 1/1 Running 0 6m44s
pc-deployment-6895856946-89vrz 1/1 Running 0 6m44s
pc-deployment-6895856946-792dj 1/1 Running 0 6m44s
deployment支持两种更新策略:重建更新和滚动更新,可以通过strategy指定策略类型,支持两个属性:
strategy:指定新的Pod替换旧的Pod的策略, 支持两个属性:
type:指定策略类型,支持两种策略
Recreate:在创建出新的Pod之前会先杀掉所有已存在的Pod
RollingUpdate:滚动更新,就是杀死一部分,就启动一部分,在更新过程中,存在两个版本Pod
rollingUpdate:当type为RollingUpdate时生效,用于为RollingUpdate设置参数,支持两个属性:
maxUnavailable:用来指定在升级过程中不可用Pod的最大数量,默认为25%。
maxSurge: 用来指定在升级过程中可以超过期望的Pod的最大数量,默认为25%。
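例如,下文演示重建更新时要求"修改spec.strategy.type为Recreate",对应的配置示意片段如下(仅为示意):
spec:
  strategy: # 策略
    type: Recreate # 重建更新:先杀掉所有已存在的Pod,再创建新的Pod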
# 修改配置清单,并更新配置
# 修改spec.strategy.type为Recreate
[root@master yaml]# vim Pc-Deployment.yaml
[root@master yaml]# kubectl apply -f Pc-Deployment.yaml
deployment.apps/pc-deployment configured
# 命令方式变更镜像
[root@master yaml]# kubectl set image deployment pc-deployment nginx=docker.io/library/nginx:latest -n default
deployment.apps/pc-deployment image updated
# 查看升级过程
[root@master yaml]# kubectl get pods -n default -w
NAME READY STATUS RESTARTS AGE
pc-deployment-6895856946-9zvpn 1/1 Terminating 0 5s
pc-deployment-6895856946-bnz2v 1/1 Terminating 0 5s
pc-deployment-6895856946-6dswz 0/1 Terminating 0 5s
pc-deployment-74556686fb-f76kc 0/1 Pending 0 0s
pc-deployment-74556686fb-g48rh 0/1 Pending 0 0s
pc-deployment-74556686fb-m2rvf 0/1 Pending 0 0s
pc-deployment-74556686fb-f76kc 0/1 ContainerCreating 0 0s
pc-deployment-74556686fb-g48rh 0/1 ContainerCreating 0 0s
pc-deployment-74556686fb-m2rvf 0/1 ContainerCreating 0 0s
pc-deployment-74556686fb-g48rh 1/1 Running 0 1s
pc-deployment-74556686fb-f76kc 1/1 Running 0 2s
pc-deployment-74556686fb-m2rvf 1/1 Running 0 2s
# 修改配置清单,并更新配置
# 修改spec.strategy.type为RollingUpdate,并添加rollingUpdate配置
[root@master yaml]# vim Pc-Deployment.yaml
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 33%
maxUnavailable: 33%
[root@master yaml]# kubectl apply -f Pc-Deployment.yaml
deployment.apps/pc-deployment configured
# 变更镜像
[root@master yaml]# kubectl set image deployment pc-deployment nginx=docker.io/library/nginx:latest -n default
deployment.apps/pc-deployment image updated
# 查看升级过程
[root@master yaml]# kubectl get pod -n default -w
pc-deployment-6895856946-47gjm 1/1 Running 0 2m33s
pc-deployment-6895856946-4rhkr 1/1 Running 0 2m32s
pc-deployment-6895856946-6xg5w 1/1 Running 0 2m30s
pc-deployment-74556686fb-7bz2k 0/1 Pending 0 0s
pc-deployment-74556686fb-7bz2k 0/1 ContainerCreating 0 0s
pc-deployment-74556686fb-7bz2k 1/1 Running 0 1s
pc-deployment-6895856946-47gjm 1/1 Terminating 0 2m43s
pc-deployment-74556686fb-xnvx5 0/1 Pending 0 0s
pc-deployment-74556686fb-xnvx5 0/1 ContainerCreating 0 0s
pc-deployment-74556686fb-xnvx5 1/1 Running 0 2s
pc-deployment-6895856946-4rhkr 1/1 Terminating 0 2m44s
pc-deployment-74556686fb-zgrss 0/1 Pending 0 0s
pc-deployment-74556686fb-zgrss 0/1 ContainerCreating 0 0s
pc-deployment-74556686fb-zgrss 1/1 Running 0 1s
pc-deployment-6895856946-6xg5w 1/1 Terminating 0 2m43s
# 至此,新版本的pod创建完毕,旧版本的pod销毁完毕
滚动更新的过程如下:
在镜像更新之后,查看rs的变化,变化如下
# 查看rs,发现原来的rs依旧存在,只是pod数量变为了0,而后又新产生了一个rs,pod数量为3,其实这就是deployment能够进行版本回退的奥妙所在,后面会详细解释。
[root@master yaml]# kubectl get rs -n default
NAME DESIRED CURRENT READY AGE
pc-deployment-6696798b78 0 0 0 7m37s
pc-deployment-6696798b11 0 0 0 5m37s
pc-deployment-c848d76789 3 3 3 72s
deployment支持版本升级过程中的暂停、继续功能以及版本回退等诸多功能,命令如下
kubectl rollout: 版本升级相关功能,支持下面的选项:
status 显示当前升级状态
history 显示升级历史记录
pause 暂停版本升级过程
resume 继续已经暂停的版本升级过程
restart 重启版本升级过程
undo 回滚到上一个版本(可以使用--to-revision回滚到指定版本)
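下面补充几个常用的命令写法(仅为示意,deploy名称沿用本文的pc-deployment):
# 查看指定revision的详细信息
kubectl rollout history deploy pc-deployment --revision=2 -n default
# 暂停/继续一次升级过程(后面金丝雀发布部分会实际演示)
kubectl rollout pause deploy pc-deployment -n default
kubectl rollout resume deploy pc-deployment -n default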
# 查看当前升级版本的状态
[root@master yaml]# kubectl rollout status deploy pc-deployment -n default
deployment "pc-deployment" successfully rolled out
# 查看升级历史记录
[root@master yaml]# kubectl rollout history deploy pc-deployment -n default
deployment.apps/pc-deployment
REVISION CHANGE-CAUSE
1 kubectl create --filename=Pc-Deployment.yaml --record=true
2 kubectl create --filename=Pc-Deployment.yaml --record=true
3 kubectl create --filename=Pc-Deployment.yaml --record=true
# 版本回滚
# --to-revision=1
# 1是最初创建的版本,2是上一个,3是现在的
[root@master yaml]# kubectl rollout undo deployment pc-deployment --to-revision=1 -n default
deployment.apps/pc-deployment rolled back
# 查看deploy,通过nginx镜像版本可以发现已经回退到了最初版本
[root@master yaml]# kubectl get deploy -n default -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
pc-deployment 3/3 1 3 47m nginx docker.io/library/nginx:1.23.1 app=nginx-pod
# 查看rs,发现第一个rs中有3个pod运行,后面两个版本的rs中pod未运行
[root@master yaml]# kubectl get rs -n default
NAME DESIRED CURRENT READY AGE
pc-deployment-6696798b78 3 3 3 78m
pc-deployment-966bf7f44 0 0 0 37m
pc-deployment-c848d767 0 0 0 71m
Deployment控制器支持在更新过程中进行控制,如"暂停(pause)"或"继续(resume)"更新操作。比如有一批新的Pod资源创建完成后立即暂停更新过程,此时,仅存在一部分新版本的应用,主体部分还是旧的版本。然后,再筛选一小部分的用户请求路由到新版本的Pod应用,继续观察能否稳定地按期望的方式运行。确定没问题之后再继续完成余下的Pod资源滚动更新,否则立即回滚更新操作。这就是所谓的金丝雀发布。
# 更新deployment的版本,并配置暂停deployment
[root@master yaml]# kubectl set image deploy pc-deployment nginx=docker.io/library/nginx:latest -n default && kubectl rollout pause deployment pc-deployment -n default
deployment.apps/pc-deployment image updated
deployment.apps/pc-deployment paused
# 查看更新状态
[root@master yaml]# kubectl rollout status deploy pc-deployment -n default
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
# 查看rs
[root@master yaml]# kubectl get rs -n default -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES
pc-deployment-5d89bdfbf9 2 2 2 19m nginx docker.io/library/nginx:1.32.1
pc-deployment-675d469f8b 0 0 0 14m nginx docker.io/library/nginx:latest
pc-deployment-6c9f56fcfb 1 1 1 3m16s nginx docker.io/library/nginx:1.32.1
# 查看Pod
[root@master yaml]# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
pc-deployment-5d89bdfbf9-rj8sq 1/1 Running 0 7m33s
pc-deployment-5d89bdfbf9-ttwgg 1/1 Running 0 7m35s
pc-deployment-6c9f56fcfb-j2gtj 1/1 Running 0 3m31s
# 继续更新
[root@master yaml]# kubectl rollout resume deploy pc-deployment -n default
deployment.apps/pc-deployment resumed
# 查看rs
[root@master yaml]# kubectl get rs -n default -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES
pc-deployment-5d89bdfbf9 0 0 0 21m nginx docker.io/library/nginx:1.32.1
pc-deployment-675d469f8b 0 0 0 16m nginx docker.io/library/nginx:latest
pc-deployment-6c9f56fcfb 3 3 3 5m11s nginx docker.io/library/nginx:1.32.1
# 查看Pod
[root@master yaml]# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
pc-deployment-6c9f56fcfb-996rt 1/1 Running 0 5m27s
pc-deployment-6c9f56fcfb-7bfwh 1/1 Running 0 37s
pc-deployment-6c9f56fcfb-rf84v 1/1 Running 0 37s
# 删除deployment,deploy管理的rs和pod将也会被删除
[root@master yaml]# kubectl delete -f Pc-Deployment.yaml
deployment.apps "pc-deployment" deleted
DaemonSet类型的控制器可以保证在集群中的每一台(或指定)节点上都运行一个副本。一般适用于日志收集、节点监控等场景。也就是说,如果一个Pod提供的功能是节点级别的(每个节点都需要且只需要一个),那么这类Pod就适合使用DaemonSet类型的控制器创建。
apiVersion: apps/v1 # 版本号
kind: DaemonSet # 类型
metadata: # 元数据
name: # ds名称
namespace: # 所属命名空间
labels: #标签
spec: # 详情描述
revisionHistoryLimit: <保留历史版本>
updateStrategy: # 更新策略
type: RollingUpdate # 滚动更新策略
rollingUpdate: # 滚动更新
maxUnavailable: 1 # 最大不可用状态的 Pod 的最大值,可以为百分比,也可以为整数
selector: # 选择器,通过它指定该控制器管理哪些pod
matchLabels: # Labels匹配规则
key: value
matchExpressions: # Expressions匹配规则
- {key: app, operator: In, values: [nginx-pod]}
template: # 模板,当副本数量不足时,会根据下面的模板创建pod副本
metadata:
labels:
spec:
containers:
- name:
image:
ports:
创建文件Pc-Daemonset.yaml,内容如下
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pc-daemonset
namespace: default
spec:
selector:
matchLabels:
app: nginx-pod
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
# 创建DS
[root@master yaml]# kubectl create -f Pc-Daemonset.yaml
daemonset.apps/pc-daemonset created
# 查看DS
[root@master yaml]# kubectl get ds -n default -o wide
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
pc-daemonset 2 2 2 2 2 <none> 104s nginx docker.io/library/nginx:1.23.1 app=nginx-pod
# 查看pod,发现在每个work节点上都运行一个pod
[root@master yaml]# kubectl get pods -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pc-daemonset-h6q4b 1/1 Running 0 2m23s 10.244.67.78 work1.host.com <none> <none>
pc-daemonset-lghrj 1/1 Running 0 2m23s 10.244.52.209 work2.host.com <none> <none>
# 删除DS
[root@master yaml]# kubectl delete -f Pc-Daemonset.yaml
daemonset.apps "pc-daemonset" deleted
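补充说明:上面的Pod只运行在了两个work节点上,因为master节点默认带有node-role.kubernetes.io/master:NoSchedule污点。如果希望DaemonSet的Pod也运行到master节点上,可以在Pod模板中添加容忍(污点和容忍会在后面的章节详细介绍),示意片段如下:
spec:
  template:
    spec:
      tolerations: # 容忍master节点的污点(仅为示意)
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.23.1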
Job,主要用于负责批量处理(一次要处理指定数量任务)短暂的一次性(每个任务仅运行一次就结束)任务。Job特点如下:当Job创建的pod执行成功结束时,Job将记录成功结束的pod数量;当成功结束的pod达到指定的数量时,Job将完成执行。
apiVersion: batch/v1 # 版本号
kind: Job # 类型
metadata: # 元数据
name: # job名称
namespace: # 所属命名空间
labels: #标签
controller: job
spec: # 详情描述
completions: 1 # 指定job需要成功运行Pods的次数。默认值: 1
parallelism: 1 # 指定job在任一时刻应该并发运行Pods的数量。默认值: 1
activeDeadlineSeconds: <可运行时间期限> # 指定job可运行的时间期限,超过时间还未结束,系统将会尝试进行终止。
backoffLimit: 6 # 指定job失败后进行重试的次数。默认是6
manualSelector: false # 是否可以使用selector选择器选择pod,默认是false
selector: # 选择器,通过它指定该控制器管理哪些pod
matchLabels: # Labels匹配规则
key: value
matchExpressions: # Expressions匹配规则
- {key: app, operator: In, values: [counter-pod]}
template: # 模板,当副本数量不足时,会根据下面的模板创建pod副本
metadata:
labels:
spec:
restartPolicy: <重启策略> # 重启策略只能设置为Never或者OnFailure
containers:
- name:
image:
command:
关于重启策略设置的说明:
如果指定为OnFailure,则job会在pod出现故障时重启容器,而不是创建pod,failed次数不变
如果指定为Never,则job会在pod出现故障时创建新的pod,并且故障pod不会消失,也不会重启,failed次数加1
如果指定为Always,就意味着容器会一直被重启,job任务也会被重复执行,这与Job一次性任务的定位相悖,所以不能设置为Always
创建Pc-Job.yaml,内容如下
apiVersion: batch/v1
kind: Job
metadata:
name: pc-job
namespace: default
spec:
manualSelector: true
selector:
matchLabels:
app: counter-pod
template:
metadata:
labels:
app: counter-pod
spec:
restartPolicy: Never
containers:
- name: counter
image: docker.io/library/busybox:1.35.0
command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3;done"]
# 创建Job
[root@master yaml]# kubectl create -f Pc-Job.yaml
job.batch/pc-job created
# 持续观察Job状态
[root@master yaml]# kubectl get job -n default -o wide -w
NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
pc-job 0/1 1s 1s counter docker.io/library/busybox:1.35.0 app=counter-pod
pc-job 0/1 3s 3s counter docker.io/library/busybox:1.35.0 app=counter-pod
# 查看Pod状态
# 可以发现pod运行完命令之后状态就会变成Completed
[root@master yaml]# kubectl get pod -n default -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pc-job-6qwpd 1/1 Running 0 29s 10.244.67.81 work1.host.com <none> <none>
pc-job-6qwpd 0/1 Completed 0 119s 10.244.67.81 work1.host.com <none> <none>
# 删除job
[root@master yaml]# kubectl delete -f Pc-Job.yaml
job.batch "pc-job" deleted
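作为补充,下面给出一个使用completions和parallelism的示意清单(文件名Pc-Job-Parallel.yaml及内容均为假设,正文并未实际创建):
apiVersion: batch/v1
kind: Job
metadata:
  name: pc-job-parallel
  namespace: default
spec:
  completions: 6 # 总共需要成功运行6个Pod
  parallelism: 3 # 同一时刻最多并发运行3个Pod
  template:
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never
      containers:
      - name: counter
        image: docker.io/library/busybox:1.35.0
        command: ["/bin/sh","-c","for i in 3 2 1; do echo $i;sleep 2;done"]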
CronJob控制器以Job控制器资源为其管控对象,并借助它管理pod资源对象,Job控制器定义的作业任务在其控制器资源创建之后便会立即执行,但CronJob可以以类似于Linux操作系统的周期性任务作业计划的方式控制其运行时间点及重复运行的方式。也就是说,CronJob可以在特定的时间点(反复的)去运行job任务。
apiVersion: batch/v1beta1 # 版本号(在v1.21+中已弃用,v1.25+中请使用batch/v1)
kind: CronJob # 类型
metadata: # 元数据
name: # cronjob名称
namespace: # 所属命名空间
labels: #标签
controller: cronjob
spec: # 详情描述
schedule: # cron格式的作业调度运行时间点,用于控制任务在什么时间执行
concurrencyPolicy: # 并发执行策略,用于定义前一次作业运行尚未完成时是否以及如何运行后一次的作业
failedJobsHistoryLimit: # 为失败的任务执行保留的历史记录数,默认为1
successfulJobsHistoryLimit: # 为成功的任务执行保留的历史记录数,默认为3
startingDeadlineSeconds: # 启动作业错误的超时时长
jobTemplate: # job控制器模板,用于为cronjob控制器生成job对象;下面其实就是job的定义
metadata:
spec:
completions: 1
parallelism: 1
activeDeadlineSeconds: 30
backoffLimit: 6
manualSelector: true
selector:
matchLabels:
matchExpressions: # Expressions匹配规则
- {key: app, operator: In, values: [counter-pod]}
template:
metadata:
labels:
spec:
restartPolicy:
containers:
- name:
image:
command:
需要重点解释的几个选项:
schedule: cron表达式,用于指定任务的执行时间
*/1 * * * *
<分钟> <小时> <日> <月份> <星期>
分钟 值从 0 到 59.
小时 值从 0 到 23.
日 值从 1 到 31.
月 值从 1 到 12.
星期 值从 0 到 6, 0 代表星期日
多个时间可以用逗号隔开; 范围可以用连字符给出;*可以作为通配符; /表示每...
concurrencyPolicy:
Allow: 允许Jobs并发运行(默认)
Forbid: 禁止并发运行,如果上一次运行尚未完成,则跳过下一次运行
Replace: 替换,取消当前正在运行的作业并用新作业替换它
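下面列出几个常见的schedule写法示例(仅供参考):
schedule: "*/1 * * * *" # 每分钟执行一次
schedule: "0 2 * * *" # 每天凌晨2点执行
schedule: "0 0 * * 0" # 每周日零点执行
schedule: "30 3 1 * *" # 每月1日凌晨3点30分执行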
创建Pc-Cronjob.yaml,内容如下:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: pc-cronjob
namespace: default
labels:
controller: cronjob
spec:
schedule: "*/1 * * * *"
jobTemplate:
metadata:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: counter
image: docker.io/library/busybox:1.35.0
command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3;done"]
# 创建CJ
[root@master yaml]# kubectl create -f Pc-Cronjob.yaml
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
cronjob.batch/pc-cronjob created
# 查看CJ
[root@master yaml]# kubectl get cronjobs -n default
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
pc-cronjob */1 * * * * False 1 9s 58s
# 查看job
[root@master yaml]# kubectl get job -n default
NAME COMPLETIONS DURATION AGE
pc-cronjob-27705149 0/1 2m8s 2m8s
pc-cronjob-27705150 0/1 68s 68s
pc-cronjob-27705151 0/1 8s 8s
# 查看pod
[root@master yaml]# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
pc-cronjob-27705149-kms26 0/1 Completed 0 2m37s
pc-cronjob-27705150-2mvkv 0/1 Completed 0 97s
pc-cronjob-27705151-dvr8c 1/1 Running 0 37s
[root@master yaml]# kubectl delete -f Pc-Cronjob.yaml
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
cronjob.batch "pc-cronjob" deleted
Pod 是在 Kubernetes 中创建和管理的、最小的可部署的计算单元。Pod(就像在鲸鱼荚或者豌豆荚中)是一组(一个或多个) 容器; 这些容器共享存储、网络、以及怎样运行这些容器的声明。 Pod 中的内容总是并置(colocated)的并且一同调度,在共享的上下文中运行。 Pod 所建模的是特定于应用的 "逻辑主机",其中包含一个或多个应用容器, 这些容器相对紧密地耦合在一起。 在非云环境中,在相同的物理机或虚拟机上运行的应用类似于在同一逻辑主机上运行的云应用。
每一个Pod都可以包含一个或者多个容器,这些容器分为两种:一种是用户程序所在的容器,数量可多可少;另一种是Pause容器,即每个Pod都会有的根容器。
Pause容器,这是每个Pod都会有的一个根容器,它的作用有两个:一是可以以它为依据,评估整个Pod的健康状态;二是可以在根容器上设置IP地址,其它容器都共享此IP(Pod IP),以实现Pod内部容器之间的网络通信。
apiVersion: v1 #必选,版本号,例如v1
kind: Pod #必选,资源类型,例如 Pod
metadata: #必选,元数据
name: string #必选,Pod名称
namespace: string #Pod所属的命名空间,默认为"default"
labels: #自定义标签列表
- name: string
spec: #必选,Pod中容器的详细定义
containers: #必选,Pod中容器列表
- name: string #必选,容器名称
image: string #必选,容器的镜像名称
imagePullPolicy: [ Always|Never|IfNotPresent ] #获取镜像的策略
command: [string] #容器的启动命令列表,如不指定,使用打包时使用的启动命令
args: [string] #容器的启动命令参数列表
workingDir: string #容器的工作目录
volumeMounts: #挂载到容器内部的存储卷配置
- name: string #引用pod定义的共享存储卷的名称,需用volumes[]部分定义的卷名
mountPath: string #存储卷在容器内mount的绝对路径,应少于512字符
readOnly: boolean #是否为只读模式
ports: #需要暴露的端口库号列表
- name: string #端口的名称
containerPort: int #容器需要监听的端口号
hostPort: int #容器所在主机需要监听的端口号,默认与Container相同
protocol: string #端口协议,支持TCP和UDP,默认TCP
env: #容器运行前需设置的环境变量列表
- name: string #环境变量名称
value: string #环境变量的值
resources: #资源限制和请求的设置
limits: #资源限制的设置
cpu: string #Cpu的限制,单位为core数,将用于docker run --cpu-shares参数
memory: string #内存限制,单位可以为Mib/Gib,将用于docker run --memory参数
requests: #资源请求的设置
cpu: string #Cpu请求,容器启动的初始可用数量
memory: string #内存请求,容器启动的初始可用数量
lifecycle: #生命周期钩子
postStart: #容器启动后立即执行此钩子,如果执行失败,会根据重启策略进行重启
preStop: #容器终止前执行此钩子,无论结果如何,容器都会终止
livenessProbe: #对Pod内各容器健康检查的设置,当探测无响应几次后将自动重启该容器
exec: #对Pod容器内检查方式设置为exec方式
command: [string] #exec方式需要指定的命令或脚本
httpGet: #对Pod内各容器健康检查方法设置为HttpGet,需要指定Path、port
path: string
port: number
host: string
scheme: string
HttpHeaders:
- name: string
value: string
tcpSocket: #对Pod内各容器健康检查方式设置为tcpSocket方式
port: number
initialDelaySeconds: 0 #容器启动完成后首次探测的时间,单位为秒
timeoutSeconds: 0 #对容器健康检查探测等待响应的超时时间,单位秒,默认1秒
periodSeconds: 0 #对容器监控检查的定期探测时间设置,单位秒,默认10秒一次
successThreshold: 0
failureThreshold: 0
securityContext:
privileged: false
restartPolicy: [Always | Never | OnFailure] #Pod的重启策略
nodeName: <string> #设置NodeName表示将该Pod调度到指定到名称的node节点上
nodeSelector: obeject #设置NodeSelector表示将该Pod调度到包含这个label的node上
imagePullSecrets: #Pull镜像时使用的secret名称,以key:secretkey格式指定
- name: string
hostNetwork: false #是否使用主机网络模式,默认为false,如果设置为true,表示使用宿主机网络
volumes: #在该pod上定义共享存储卷列表
- name: string #共享存储卷名称 (volumes类型有很多种)
emptyDir: {} #类型为emptyDir的存储卷,与Pod同生命周期的一个临时目录。为空值
hostPath: string #类型为hostPath的存储卷,表示挂载Pod所在宿主机的目录
path: string #Pod所在宿主机的目录,将被用于容器中mount的目录
secret: #类型为secret的存储卷,挂载集群中预定义的secret对象到容器内部
secretName: string
items:
- key: string
path: string
configMap: #类型为configMap的存储卷,挂载预定义的configMap对象到容器内部
name: string
items:
- key: string
path: string
创建一个名为Pod-Basic.yaml的文件,内容如下:
apiVersion: v1
kind: Pod
metadata:
name: pod-basic
namespace: default
labels:
app: pod
spec:
containers:
- name: mynginx
image: docker.io/library/nginx:1.23.1
- name: mybusybox
image: docker.io/library/busybox:1.35.0
上面定义了一个比较简单的Pod配置,名字叫做"pod-basic",命名空间在"default"下,并给它打了一个标签"app: pod",同时定义了两个容器:一个是基于nginx:1.23.1镜像的mynginx容器,另一个是基于busybox:1.35.0镜像的mybusybox容器。
定义好资源清单之后可以使用下面的命令进行管理:
# 创建pod
[root@master yaml]# kubectl create -f Pod-Basic.yaml
pod/pod-basic created
# 查看pod状态
# "-n default"是指定命名空间,这里不加也可以查询到,因为不加默认查询的就是default命名空间下的资源
# READY 1/2 表示当前Pod中有2个容器,其中1个准备就绪,1个未就绪
# RESTARTS 重启次数,因为有1个容器故障了,Pod一直在重启试图恢复它
[root@master yaml]# kubectl get pod -n default
NAME READY STATUS RESTARTS AGE
pod-basic 1/2 NotReady 2 (29s ago) 43s
# 查看pod的详细信息
[root@master yaml]# kubectl describe pod pod-basic -n default
创建Pod-ImagePull.yaml文件,内容如下:
apiVersion: v1
kind: Pod
metadata:
name: pod-imagepull
namespace: default
spec:
containers:
- name: mynginx
image: docker.io/library/nginx:1.23.1
imagePullPolicy: Always # 设置镜像拉取策略
- name: mybusybox
image: docker.io/library/busybox:1.35.0
imagePullPolicy,用于设置镜像拉取策略,kubernetes支持配置三种拉取策略:Always(总是从远程仓库拉取镜像)、IfNotPresent(本地有则使用本地镜像,本地没有则从远程仓库拉取)、Never(只使用本地镜像,从不去远程仓库拉取,本地没有就报错)。
默认值说明:
如果镜像tag为具体版本号,默认策略是IfNotPresent
如果镜像tag为latest(最新版本),默认策略是Always
# 创建pod
[root@master yaml]# kubectl create -f Pod-ImagePull.yaml
pod/pod-imagepull created
# 查看Pod详情
# 此时明显可以看到nginx镜像有一步Pulling image "docker.io/library/nginx:1.23.1"的过程
[root@master yaml]# kubectl describe pod pod-imagepull -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5s default-scheduler Successfully assigned default/pod-imagepull to work1.host.com
Normal Pulling 4s kubelet Pulling image "docker.io/library/nginx:1.23.1"
Normal Pulled 1s kubelet Successfully pulled image "docker.io/library/nginx:1.23.1" in 2.762104819s
Normal Created 1s kubelet Created container mynginx
Normal Started 1s kubelet Started container mynginx
Normal Pulled 1s (x2 over 1s) kubelet Container image "docker.io/library/busybox:1.35.0" already present on machine
Normal Created 1s (x2 over 1s) kubelet Created container mybusybox
Normal Started 1s kubelet Started container mybusybox
在上面的配置中mybusybox容器一直没有成功运行,原因是busybox并不是一个常驻运行的程序,而是类似于一个工具类的集合,容器启动后,会因为没有前台进程支撑运行而自动退出。解决方法就是让它一直运行一个命令或者程序。
创建Pod-Command.yaml文件,内容如下:
apiVersion: v1
kind: Pod
metadata:
name: pod-command
namespace: default
labels:
app: pod
spec:
containers:
- name: mynginx
image: docker.io/library/nginx:1.23.1
- name: mybusybox
image: docker.io/library/busybox:1.35.0
command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hello.txt; sleep 3; done;"]
command:在pod中的容器初始化完成之后运行的命令。
命令解释:
"/bin/sh","-c"
使用sh来执行命令
touch /tmp/hello.txt;
在/tmp下创建一个hello.txt文件
while true;do /bin/echo $(date +%T) >> /tmp/hello.txt; sleep 3; done;
每过三秒就往/tmp/hello.txt文件里面追加当前的时间
# 创建pod
[root@master yaml]# kubectl create -f Pod-Command.yaml
pod/pod-command created
# 查看Pod状态
# 这个时候俩容器就都正常运行了
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod-command 2/2 Running 0 1m38s
# 进入容器查看文件内容
[root@master yaml]# kubectl exec pod-command -n default -it -c mybusybox /bin/sh
/ # tail -f /tmp/hello.txt
12:55:27
12:55:30
12:55:33
12:55:36
12:55:39
12:55:42
特别说明:
通过上面发现command已经可以完成启动命令和传递参数的功能,为什么这里还要提供一个args选项,用于传递参数呢?这其实跟docker有点关系,kubernetes中的command、args两项其实是用来覆盖Dockerfile中ENTRYPOINT和CMD的,规则如下(本小节末尾给出一个示意片段):
1 如果command和args均没有写,那么用Dockerfile的配置。
2 如果command写了,但args没有写,那么Dockerfile默认的配置会被忽略,执行输入的command
3 如果command没写,但args写了,那么Dockerfile中配置的ENTRYPOINT的命令会被执行,使用当前args的参数
4 如果command和args都写了,那么Dockerfile的配置被忽略,执行command并追加上args参数
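下面是一个同时使用command和args的示意清单(名称pod-command-args及内容均为假设,仅用于说明第4条规则):
apiVersion: v1
kind: Pod
metadata:
  name: pod-command-args
  namespace: default
spec:
  containers:
  - name: mybusybox
    image: docker.io/library/busybox:1.35.0
    command: ["/bin/sleep"] # 覆盖镜像中的ENTRYPOINT
    args: ["3600"] # 覆盖镜像中的CMD,作为command的参数,容器将休眠3600秒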
创建Pod-Env.yaml文件,内容如下:
apiVersion: v1
kind: Pod
metadata:
name: pod-env
namespace: default
spec:
containers:
- name: mybusybox
image: docker.io/library/busybox:1.35.0
command: ["/bin/sh","-c","while true;do /bin/echo $(date +%T);sleep 60; done;"]
env: # 设置环境变量列表
- name: "username"
value: "admin"
- name: "password"
value: "123456"
env: 环境变量,用于在pod中的容器设置环境变量。
# 创建pod
[root@master yaml]# kubectl create -f Pod-Env.yaml
pod/pod-env created
# 进入容器,输出环境变量
[root@master yaml]# kubectl exec pod-env -n default -c mybusybox -it /bin/sh
/ # echo $username
admin
/ # echo $password
123456
这种方式不是很推荐,推荐将这些配置单独存储在配置文件中,这种方式将在后面介绍。
要对容器的端口设置需要对containers的ports选项进行修改,先看一下ports支持的子选项
[root@master yaml]# kubectl explain pod.spec.containers.ports
KIND: Pod
VERSION: v1
RESOURCE: ports <[]Object>
DESCRIPTION:
List of ports to expose from the container. Exposing a port here gives the
system additional information about the network connections a container
uses, but is primarily informational. Not specifying a port here DOES NOT
prevent that port from being exposed. Any port which is listening on the
default "0.0.0.0" address inside a container will be accessible from the
network. Cannot be updated.
ContainerPort represents a network port in a single container.
FIELDS:
containerPort <integer> -required- # 容器要监听的端口(0<x<65536)
Number of port to expose on the pod's IP address. This must be a valid port
number, 0 < x < 65536.
hostIP <string> # 要将外部端口绑定到的主机IP(一般省略)
What host IP to bind the external port to.
hostPort <integer> # 容器要在主机上公开的端口,如果设置,主机上只能运行容器的一个副本(一般省略)
Number of port to expose on the host. If specified, this must be a valid
port number, 0 < x < 65536. If HostNetwork is specified, this must match
ContainerPort. Most containers do not need this.
name <string> # 端口名称,如果指定,必须保证name在pod中是唯一的
If specified, this must be an IANA_SVC_NAME and unique within the pod. Each
named port in a pod must have a unique name. Name for the port that can be
referred to by services.
protocol <string> # 端口协议。必须是UDP、TCP或SCTP。默认为“TCP”。
Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
Possible enum values:
- `"SCTP"` is the SCTP protocol.
- `"TCP"` is the TCP protocol.
- `"UDP"` is the UDP protocol.
创建Pod-Ports.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-ports
namespace: default
spec:
containers:
- name: mynginx
image: docker.io/library/nginx:1.23.1
ports: # 设置容器暴露的端口列表
- name: nginx-port
containerPort: 80
protocol: TCP
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Ports.yaml
pod/pod-ports created
# 查看pod
# 在下面可以明显看到配置信息
[root@master yaml]# kubectl get pod pod-ports -n default -o yaml
......
spec:
containers:
- image: docker.io/library/nginx:1.23.1
imagePullPolicy: IfNotPresent
name: mynginx
ports:
- containerPort: 80
name: nginx-port
protocol: TCP
......
podIP: 10.244.52.207
......
# 访问服务
# 访问容器中的程序需要使用的是`podIp:containerPort`
[root@master yaml]# curl http://10.244.52.207:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
容器中的程序要运行,肯定是要占用一定资源的,比如cpu和内存等,如果不对某个容器的资源做限制,那么它就可能吃掉大量资源,导致其它容器无法运行。针对这种情况,kubernetes提供了对内存和cpu的资源进行配额的机制,这种机制主要通过resources选项实现,它有两个子选项:
limits:用于限制运行时容器的最大占用资源,当容器占用资源超过limits时会被终止并重启
requests:用于设置容器需要的最小资源,如果环境资源不够,容器将无法启动
可以通过上面两个选项设置资源的上下限。
创建Pod-Resources.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-resources
namespace: default
spec:
containers:
- name: mynginx
image: docker.io/library/nginx:1.23.1
resources: # 资源配额
limits: # 限制资源(上限)
cpu: "2" # CPU限制,单位是core数
memory: "10Gi" # 内存限制
requests: # 请求资源(下限)
cpu: "1" # CPU限制,单位是core数
memory: "10Mi" # 内存限制
CPU和Memory的单位说明:cpu的单位为core数,可以为整数或小数,也可以用m表示(1000m = 1 core);memory的单位可以使用Gi、Mi(也可以使用G、M)等形式。
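下面是一个使用m、Mi等单位写法的示意片段(数值为假设):
resources:
  requests:
    cpu: "500m" # 0.5核(1000m = 1 core)
    memory: "512Mi" # 512 MiB
  limits:
    cpu: "1"
    memory: "1Gi"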
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Resources.yaml
pod/pod-resources created
# 查看发现pod运行状态
[root@master yaml]# kubectl get pod pod-resources -n default
NAME READY STATUS RESTARTS AGE
pod-resources 1/1 Running 0 88s
# 删除Pod
[root@master yaml]# kubectl delete -f Pod-Resources.yaml
pod "pod-resources" deleted
# 编辑Pod-Resources.yaml,修改requests的限制
......
requests:
cpu: "1"
memory: "10Gi"
......
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Resources.yaml
pod/pod-resources created
# 查看Pod状态,Pod启动失败
[root@master yaml]# kubectl get pod pod-resources -n default
NAME READY STATUS RESTARTS AGE
pod-resources 0/1 Pending 0 29s
# 查看Pod详细信息会看到报错
[root@master yaml]# kubectl describe pod pod-resources -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 97s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 3 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
# 上面的报错是指3个节点的内存都不足(其中master节点还带有不被容忍的污点),无法满足10Gi的内存请求
Pod生命周期:一般将Pod对象从创建到终止的这段时间范围称为Pod的生命周期,它主要包含下面的过程:
运行主容器(main container)
在整个生命周期中,Pod会出现5种状态(相位),分别是:Pending(挂起)、Running(运行中)、Succeeded(成功)、Failed(失败)、Unknown(未知)。
pod的创建过程
pod的终止过程
初始化容器是在pod的主容器启动之前要运行的容器,主要是做一些主容器的前置工作,它具有两大特征:一是初始化容器必须运行完成直至结束,若某初始化容器运行失败,kubernetes会不断重启它直到成功完成;二是初始化容器必须按照定义的顺序执行,当且仅当前一个成功之后,后面的一个才能运行。
初始化容器有很多的应用场景,最常见的是:提供主容器镜像中不具备的工具程序或自定义代码;以及利用其先于应用容器串行启动并运行完成的特点,延后应用容器的启动,直至其依赖的条件得到满足。
假设要以主容器运行一个web程序,但是要求在运行之前需要能够连接上mysql和redis所在的服务器,为了方便测试,事先规划好数据库服务器地址。创建文件Pod-InitContainer.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-initcontainer
namespace: default
spec:
containers:
- name: main-container
image: docker.io/library/nginx:1.23.1
ports:
- name: nginx-port
containerPort: 80
initContainers:
- name: test-mysql
image: docker.io/library/busybox:1.35.0
command: ['sh', '-c', 'until ping 192.16.1.100 -c 1 ; do echo waiting for mysql...; sleep 2; done;']
- name: test-redis
image: docker.io/library/busybox:1.35.0
command: ['sh', '-c', 'until ping 192.16.1.200 -c 1 ; do echo waiting for redis...; sleep 2; done;']
# 创建Pod
[root@master yaml]# kubectl create -f Pod-InitContainer.yaml
pod/pod-initcontainer created
# 查看状态
# 发现pod一直卡在第一个初始化容器过程中,后面的容器不会运行
[root@master yaml]# kubectl describe pod pod-initcontainer -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 66s default-scheduler Successfully assigned default/pod-initcontainer to work1.host.com
Normal Pulled 66s kubelet Container image "docker.io/library/busybox:1.35.0" already present on machine
Normal Created 66s kubelet Created container test-mysql
Normal Started 66s kubelet Started container test-mysql
# 动态查看pod状态
[root@master yaml]# kubectl get pods pod-initcontainer -n default -w
NAME READY STATUS RESTARTS AGE
pod-initcontainer 0/1 Init:0/2 0 5m1s
pod-initcontainer 0/1 Init:1/2 0 5m4s
pod-initcontainer 0/1 Init:1/2 0 5m5s
pod-initcontainer 0/1 PodInitializing 0 5m17s
pod-initcontainer 1/1 Running 0 5m18s
# 开一个新的终端连接,执行以下命令为节点临时添加这两个测试IP,使初始化容器的ping检测能够通过
[root@master ~]# ifconfig ens33:1 192.16.1.100 netmask 255.255.255.0 up
[root@master ~]# ifconfig ens33:1 192.16.1.200 netmask 255.255.255.0 up
钩子函数能够感知自身生命周期中的事件,并在相应的时刻到来时运行用户指定的程序代码。
kubernetes在主容器的启动之后和停止之前提供了两个钩子函数:post start(容器创建之后执行,如果失败了会重启容器)和pre stop(容器终止之前执行,执行完成之后容器将成功终止,在其完成之前会阻塞删除容器的操作)。
钩子处理器支持使用下面三种方式定义动作:
Exec命令:在容器内执行一次命令
……
lifecycle:
postStart:
exec:
command:
- cat
- /tmp/healthy
……
TCPSocket:在当前容器尝试访问指定的socket
……
lifecycle:
postStart:
tcpSocket:
port: 8080
……
HTTPGet:在当前容器中向某url发起http请求
……
lifecycle:
postStart:
httpGet:
path: / #URI地址
port: 80 #端口号
host: 192.168.109.100 #主机地址
scheme: HTTP #支持的协议,http或者https
……
以exec方式为例,创建Pod-Hook-Exec.yaml文件,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-hook-exec
namespace: default
spec:
containers:
- name: main-container
image: docker.io/library/nginx:1.23.1
ports:
- name: nginx-port
containerPort: 80
lifecycle:
postStart:
exec: # 在容器启动的时候执行一个命令,修改掉nginx的默认首页内容
command: ["/bin/sh", "-c", "echo postStart... > /usr/share/nginx/html/index.html"]
preStop:
exec: # 在容器停止之前停止nginx服务
command: ["/usr/sbin/nginx","-s","quit"]
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Hook-Exec.yaml
pod/pod-hook-exec created
# 查看Pod
[root@master yaml]# kubectl get pods pod-hook-exec -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-hook-exec 1/1 Running 0 52s 10.244.52.213 work2.host.com <none> <none>
# 访问Pod
[root@master yaml]# curl 10.244.52.213
postStart...
容器探测用于检测容器中的应用实例是否正常工作,是保障业务可用性的一种传统机制。如果经过探测,实例的状态不符合预期,那么kubernetes就会把该问题实例"摘除",不承担业务流量。kubernetes提供了两种探针来实现容器探测,分别是:livenessProbe(存活性探针)和readinessProbe(就绪性探针)。
livenessProbe 决定是否重启容器,readinessProbe 决定是否将请求转发给容器。
上面两种探针目前均支持三种探测方式:
Exec命令:在容器内执行一次命令,如果命令执行的退出码为0,则认为程序正常,否则不正常
……
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
……
TCPSocket:将会尝试访问一个用户容器的端口,如果能够建立这条连接,则认为程序正常,否则不正常
……
livenessProbe:
tcpSocket:
port: 8080
……
HTTPGet:调用容器内Web应用的URL,如果返回的状态码在200和399之间,则认为程序正常,否则不正常
……
livenessProbe:
httpGet:
path: / #URI地址
port: 80 #端口号
host: 127.0.0.1 #主机地址
scheme: HTTP #支持的协议,http或者https
……
以liveness probes为例,做几个演示:
方式一:Exec
创建Pod-Liveness-Exec.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-exec
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
exec:
command: ["/bin/cat","/tmp/hello.txt"] # 执行一个查看文件的命令
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Liveness-Exec.yaml
pod/pod-liveness-exec created
# 查看Pod详情
# 发现nginx容器启动之后就进行了健康检查
# 检查失败之后容器就被kill掉了,之后容器又会根据重启策略被重新拉起,如此反复
[root@master yaml]# kubectl describe pods pod-liveness-exec -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25s default-scheduler Successfully assigned default/pod-liveness-exec to work1.host.com
Normal Pulled 24s kubelet Container image "docker.io/library/nginx:1.23.1" already present on machine
Normal Created 24s kubelet Created container nginx
Normal Started 24s kubelet Started container nginx
Warning Unhealthy 5s (x2 over 15s) kubelet Liveness probe failed: /bin/cat: /tmp/hello.txt: No such file or directory
# 查看Pod状态
# 发现RESTARTS一直在增长
[root@master yaml]# kubectl get pods pod-liveness-exec -n default
NAME READY STATUS RESTARTS AGE
pod-liveness-exec 0/1 CrashLoopBackOff 4 (12s ago) 2m43s
方式二:TCPSocket
创建Pod-Liveness-Tcpsocket.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-tcpsocket
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
tcpSocket:
port: 8080 # 尝试访问8080端口
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Liveness-Tcpsocket.yaml
pod/pod-liveness-tcpsocket created
# 查看Pod详情
# 发现容器尝试访问8080端口,但是失败了
[root@master yaml]# kubectl describe pods pod-liveness-tcpsocket -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31s default-scheduler Successfully assigned default/pod-liveness-tcpsocket to work1.host.com
Normal Pulled 1s (x2 over 30s) kubelet Container image "docker.io/library/nginx:1.23.1" already present on machine
Normal Created 1s (x2 over 30s) kubelet Created container nginx
Normal Started 1s (x2 over 30s) kubelet Started container nginx
Warning Unhealthy 1s (x3 over 21s) kubelet Liveness probe failed: dial tcp 10.244.67.89:8080: connect: connection refused
Normal Killing 1s kubelet Container nginx failed liveness probe, will be restarted
# 查看Pod状态
# 发现RESTARTS一直在增长
[root@master yaml]# kubectl get pods pod-liveness-tcpsocket -n default
NAME READY STATUS RESTARTS AGE
pod-liveness-tcpsocket 1/1 Running 4 (7s ago) 2m7s
方式三:HTTPGet
创建Pod-Liveness-Httpget.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-httpget
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
httpGet: # 其实就是访问http://127.0.0.1:80/hello
scheme: HTTP #支持的协议,http或者https
port: 80 #端口号
path: /hello #URI地址
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Liveness-Httpget.yaml
pod/pod-liveness-httpget created
# 查看Pod详情
[root@master yaml]# kubectl describe pod pod-liveness-httpget -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22s default-scheduler Successfully assigned default/pod-liveness-httpget to work2.host.com
Normal Pulled 22s kubelet Container image "docker.io/library/nginx:1.23.1" already present on machine
Normal Created 21s kubelet Created container nginx
Normal Started 21s kubelet Started container nginx
Warning Unhealthy 2s (x2 over 12s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 404
# 查看Pod状态
# 发现RESTARTS一直在增长
[root@master yaml]# kubectl get pod pod-liveness-httpget -n default
NAME READY STATUS RESTARTS AGE
pod-liveness-httpget 1/1 Running 2 (26s ago) 86s
在LivenessProbe的子属性下还会发现一些其他的配置,这里简单解释一下含义:
[root@master yaml]# kubectl explain pod.spec.containers.livenessProbe
KIND: Pod
VERSION: v1
RESOURCE: livenessProbe <Object>
DESCRIPTION:
Periodic probe of container liveness. Container will be restarted if the
probe fails. Cannot be updated. More info:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
Probe describes a health check to be performed against a container to
determine whether it is alive or ready to receive traffic.
FIELDS:
exec <Object>
Exec specifies the action to take.
failureThreshold <integer> # 连续探测失败多少次才被认定为失败。默认是3。最小值是1
Minimum consecutive failures for the probe to be considered failed after
having succeeded. Defaults to 3. Minimum value is 1.
grpc <Object>
GRPC specifies an action involving a GRPC port. This is a beta field and
requires enabling GRPCContainerProbe feature gate.
httpGet <Object>
HTTPGet specifies the http request to perform.
initialDelaySeconds <integer> # 容器启动后等待多少秒执行第一次探测
Number of seconds after the container has started before liveness probes
are initiated. More info:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
periodSeconds <integer> # 执行探测的频率。默认是10秒,最小1秒
How often (in seconds) to perform the probe. Default to 10 seconds. Minimum
value is 1.
successThreshold <integer> # 连续探测成功多少次才被认定为成功。默认是1
Minimum consecutive successes for the probe to be considered successful
after having failed. Defaults to 1. Must be 1 for liveness and startup.
Minimum value is 1.
tcpSocket <Object>
TCPSocket specifies an action involving a TCP port.
terminationGracePeriodSeconds <integer>
Optional duration in seconds the pod needs to terminate gracefully upon
probe failure. The grace period is the duration in seconds after the
processes running in the pod are sent a termination signal and the time
when the processes are forcibly halted with a kill signal. Set this value
longer than the expected cleanup time for your process. If this value is
nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this
value overrides the value provided by the pod spec. Value must be
non-negative integer. The value zero indicates stop immediately via the
kill signal (no opportunity to shut down). This is a beta field and
requires enabling ProbeTerminationGracePeriod feature gate. Minimum value
is 1. spec.terminationGracePeriodSeconds is used if unset.
timeoutSeconds <integer> # 探测超时时间。默认1秒,最小1秒
Number of seconds after which the probe times out. Defaults to 1 second.
Minimum value is 1. More info:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
探测相关的时间参数可以参照下面的配置进行设置
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-httpget
namespace: dev
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
httpGet:
scheme: HTTP
port: 80
path: /
initialDelaySeconds: 30 # 容器启动后30s开始探测
timeoutSeconds: 5 # 探测超时时间为5s
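readinessProbe的配置项与livenessProbe完全相同,区别在于探测失败后的处理:就绪探针失败时只会把Pod从Service的转发列表中摘除,不会重启容器。下面是一个假设的就绪探针示意片段:
readinessProbe:
  httpGet:
    scheme: HTTP
    port: 80
    path: /
  initialDelaySeconds: 5 # 容器启动后5s开始探测
  periodSeconds: 10 # 每10s探测一次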
在容器探测livenessProbe中一旦探测出现了问题,kubernetes就会对容器所在的Pod进行重启,重启的方式是由pod的重启策略决定的,Pod的重启策略有三种,分别如下:Always(容器失效时,自动重启该容器,这也是默认值)、OnFailure(容器终止运行且退出码不为0时重启)、Never(不论状态为何,都不重启该容器)。
重启策略适用于pod对象中的所有容器,首次需要重启的容器,将在其需要时立即进行重启,随后再次需要重启的操作将由kubelet延迟一段时间后进行,且反复的重启操作的延迟时长依次为10s、20s、40s、80s、160s和300s,300s是最大延迟时长。
创建Pod-Restartpolicy.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-restartpolicy
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
httpGet:
scheme: HTTP
port: 80
path: /hello
restartPolicy: Never # 设置重启策略为Never
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Restartpolicy.yaml
pod/pod-restartpolicy created
# 查看Pod详情,发现nginx容器的健康检查失败
[root@master yaml]# kubectl describe pods pod-restartpolicy -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned default/pod-restartpolicy to work1.host.com
Normal Pulled 48s kubelet Container image "docker.io/library/nginx:1.23.1" already present on machine
Normal Created 48s kubelet Created container nginx
Normal Started 48s kubelet Started container nginx
Warning Unhealthy 19s (x3 over 39s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 19s kubelet Stopping container nginx
# 过一会之后,查看pod的状态,发现重启次数一直是0
[root@master yaml]# kubectl get pods pod-restartpolicy -n default
NAME READY STATUS RESTARTS AGE
pod-restartpolicy 0/1 Completed 0 2m7s
在默认情况下,一个Pod在哪个Node节点上运行,是由Scheduler组件采用相应的算法计算出来的,这个过程是不受人工控制的。但是在实际使用中,这并不能满足所有需求,因为很多情况下,我们希望控制某些Pod调度到指定的节点上,这就需要了解kubernetes对Pod的调度规则。kubernetes提供了四大类调度方式:自动调度(完全由Scheduler计算得出)、定向调度(NodeName、NodeSelector)、亲和性调度(NodeAffinity、PodAffinity、PodAntiAffinity)、污点(容忍)调度(Taints、Toleration)。
定向调度,指的是利用在pod上声明nodeName或者nodeSelector,以此将Pod调度到期望的node节点上。注意,这里的调度是强制的,这就意味着即使要调度的目标Node不存在,也会向上面进行调度,只不过pod运行失败而已
NodeName用于强制约束将Pod调度到指定的Name的Node节点上。这种方式,其实是直接跳过Scheduler的调度逻辑,直接将Pod调度到指定名称的节点。
创建一个Pod-Nodename.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-nodename
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
nodeName: node1 # 指定调度到node1节点上
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Nodename.yaml
pod/pod-nodename created
# 查看Pod具体状态和调度节点
# 发现Pod被调度到了node1节点上,但实际上集群中并没有这个节点,所以Pod一直处于Pending状态,无法正常运行
[root@master yaml]# kubectl get pods pod-nodename -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 0/1 Pending 0 10s <none> node1 <none> <none>
# 修改文件的nodeName为"work1.host.com"并更新配置
[root@master yaml]# vim Pod-Nodename.yaml
[root@master yaml]# kubectl apply -f Pod-Nodename.yaml
pod/pod-nodename created
# 再次查看Pod的具体状态和调度节点
# 发现已经成功调度到其他节点并运行成功
[root@master yaml]# kubectl get pods pod-nodename -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 1/1 Running 0 34s 10.244.67.91 work1.host.com <none> <none>
NodeSelector用于将pod调度到添加了指定标签的node节点上。它是通过kubernetes的label-selector机制实现的,也就是说,在pod创建之前,会由scheduler使用MatchNodeSelector调度策略进行label匹配,找出目标node,然后将pod调度到目标节点,该匹配规则是强制约束。
# 给节点创建标签
# 给work1.host.com节点创建了一个nodeenv=pro标签
# 给work2.host.com节点创建了一个nodeenv=test标签
[root@master yaml]# kubectl label nodes work1.host.com nodeenv=pro
node/work1.host.com labeled
[root@master yaml]# kubectl label nodes work2.host.com nodeenv=test
node/work2.host.com labeled
创建Pod-Nodeselector.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-nodeselector
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
nodeSelector:
nodeenv: pro # 指定调度到具有nodeenv=pro标签的节点上
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Nodeselector.yaml
pod/pod-nodeselector created
# 查看Pod的具体状态和调度节点
[root@master yaml]# kubectl get pods pod-nodeselector -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeselector 1/1 Running 0 51s 10.244.67.92 work1.host.com <none> <none>
# 之后删除pod,修改nodeSelector的值为nodeenv: pro2 (不存在打有此标签的节点)
[root@master yaml]# kubectl delete -f Pod-Nodeselector.yaml
pod "pod-nodeselector" deleted
[root@master yaml]# vim Pod-Nodeselector.yaml
[root@master yaml]# kubectl create -f Pod-Nodeselector.yaml
pod/pod-nodeselector created
# 再次查看Pod的具体状态和调度节点
# 发现NODE列的值为<none>
[root@master yaml]# kubectl get pods pod-nodeselector -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeselector 0/1 Pending 0 43s <none> <none> <none> <none>
# 通过查看Pod详情,发现node selector匹配失败的提示
[root@master yaml]# kubectl describe pods pod-nodeselector -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 118s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
使用定向调度进行调度时,如果没有满足条件的Node,那么Pod就不会被运行。为了解决这个问题,kubernetes在NodeSelector的基础之上进行了扩展,通过配置的形式,实现优先选择满足条件的Node进行调度,如果没有,也可以调度到不满足条件的节点上,使其更加灵活。Affinity(亲和性调度)主要分为三类:nodeAffinity(node亲和性,以node为目标)、podAffinity(pod亲和性,以pod为目标)、podAntiAffinity(pod反亲和性,以pod为目标)。
关于亲和性(反亲和性)使用场景的说明:
亲和性:如果两个应用频繁交互,那就有必要利用亲和性让两个应用的尽可能的靠近,这样可以减少因网络通信而带来的性能损耗。
反亲和性:当应用的采用多副本部署时,有必要采用反亲和性让各个应用实例打散分布在各个node上,这样可以提高服务的高可用性。
NodeAffinity的可配置项如下:
pod.spec.affinity.nodeAffinity
requiredDuringSchedulingIgnoredDuringExecution Node节点必须满足指定的所有规则才可以,相当于硬限制
nodeSelectorTerms 节点选择列表
matchFields 按节点字段列出的节点选择器要求列表
matchExpressions 按节点标签列出的节点选择器要求列表(推荐)
key 键
values 值
operator 关系符 支持Exists, DoesNotExist, In, NotIn, Gt, Lt
preferredDuringSchedulingIgnoredDuringExecution 优先调度到满足指定的规则的Node,相当于软限制 (倾向)
preference 一个节点选择器项,与相应的权重相关联
matchFields 按节点字段列出的节点选择器要求列表
matchExpressions 按节点标签列出的节点选择器要求列表(推荐)
key 键
values 值
operator 关系符 支持In, NotIn, Exists, DoesNotExist, Gt, Lt
weight 倾向权重,在范围1-100。
关系符的使用说明:
- matchExpressions:
- key: nodeenv # 匹配存在标签的key为nodeenv的节点
operator: Exists
- key: nodeenv # 匹配标签的key为nodeenv,且value是"xxx"或"yyy"的节点
operator: In
values: ["xxx","yyy"]
- key: nodeenv # 匹配标签的key为nodeenv,且value大于"xxx"的节点
operator: Gt
values: "xxx"
requiredDuringSchedulingIgnoredDuringExecution
创建Pod-Nodeaffinity-Required.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-nodeaffinity-required
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
affinity: #亲和性设置
nodeAffinity: #设置node亲和性
requiredDuringSchedulingIgnoredDuringExecution: # 硬限制
nodeSelectorTerms:
- matchExpressions: # 匹配nodeenv的值在["xxx","yyy"]中的标签
- key: nodeenv
operator: In
values: ["xxx","yyy"]
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Nodeaffinity-Required.yaml
pod/pod-nodeaffinity-required created
# 查看Pod状态
# 发现Pod的NODE一直为<none>
[root@master yaml]# kubectl get pods pod-nodeaffinity-required -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeaffinity-required 0/1 Pending 0 21s <none> <none> <none> <none>
# 查看Pod详情
# 发现提示node选择失败
[root@master yaml]# kubectl describe pod pod-nodeaffinity-required -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 105s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
# 删除Pod
[root@master yaml]# kubectl delete -f Pod-Nodeaffinity-Required.yaml
pod "pod-nodeaffinity-required" deleted
# 修改Pod-Nodeaffinity-Required.yaml文件
# 将values: ["xxx","yyy"]------> ["pro","yyy"],并启动
[root@master yaml]# vim Pod-Nodeaffinity-Required.yaml
[root@master yaml]# kubectl create -f Pod-Nodeaffinity-Required.yaml
pod/pod-nodeaffinity-required created
# 查看Pod信息
# 发现Pod已经成功调度到work1.host.com节点上
[root@master yaml]# kubectl get pods pod-nodeaffinity-required -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeaffinity-required 1/1 Running 0 79s 10.244.67.107 work1.host.com <none> <none>
preferredDuringSchedulingIgnoredDuringExecution
创建Pod-Nodeaffinity-Preferred.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-nodeaffinity-preferred
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
affinity: #亲和性设置
nodeAffinity: #设置node亲和性
preferredDuringSchedulingIgnoredDuringExecution: # 软限制
- weight: 1
preference:
matchExpressions: # 匹配nodeenv的值在["xxx","yyy"]中的标签(当前环境没有)
- key: nodeenv
operator: In
values: ["xxx","yyy"]
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Nodeaffinity-Preferred.yaml
pod/pod-nodeaffinity-preferred created
# 查看Pod状态
# 发现Pod成功调度
[root@master yaml]# kubectl get pod pod-nodeaffinity-preferred -n default
NAME READY STATUS RESTARTS AGE
pod-nodeaffinity-preferred 1/1 Running 0 27s
NodeAffinity规则设置的注意事项:
1 如果同时定义了nodeSelector和nodeAffinity,那么必须两个条件都得到满足,Pod才能运行在指定的Node上
2 如果nodeAffinity指定了多个nodeSelectorTerms,那么只需要其中一个能够匹配成功即可
3 如果一个nodeSelectorTerms中有多个matchExpressions ,则一个节点必须满足所有的才能匹配成功
4 如果一个pod所在的Node在Pod运行期间其标签发生了改变,不再符合该Pod的节点亲和性需求,则系统将忽略此变化
PodAffinity主要实现以运行中的Pod为参照,让新创建的Pod跟参照pod处在同一区域的功能。PodAffinity的可配置项如下:
pod.spec.affinity.podAffinity
requiredDuringSchedulingIgnoredDuringExecution 硬限制
namespaces 指定参照pod的namespace
topologyKey 指定调度作用域
labelSelector 标签选择器
matchExpressions 按节点标签列出的节点选择器要求列表(推荐)
key 键
values 值
operator 关系符 支持In, NotIn, Exists, DoesNotExist.
matchLabels 指多个matchExpressions映射的内容
preferredDuringSchedulingIgnoredDuringExecution 软限制
podAffinityTerm 选项
namespaces
topologyKey
labelSelector
matchExpressions
key 键
values 值
operator
matchLabels
weight 倾向权重,在范围1-100
topologyKey用于指定调度时作用域,例如:
如果指定为kubernetes.io/hostname,那就是以Node节点为区分范围
如果指定为beta.kubernetes.io/os,则以Node节点的操作系统类型来区分
requiredDuringSchedulingIgnoredDuringExecution
创建一个参照Pod的清单Pod-Podaffinity-Target.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-podaffinity-target
namespace: default
labels:
podenv: pro #设置标签
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
nodeName: work1.host.com # 将目标pod明确指定到work1.host.com上
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Podaffinity-Target.yaml
pod/pod-podaffinity-target created
# 查看Pod状态和调度节点
[root@master yaml]# kubectl get pods pod-podaffinity-target -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podaffinity-target 1/1 Running 0 19m 10.244.67.108 work1.host.com <none> <none>
创建Pod-Podaffinity-Required.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-podaffinity-required
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
affinity: #亲和性设置
podAffinity: #设置pod亲和性
requiredDuringSchedulingIgnoredDuringExecution: # 硬限制
- labelSelector:
matchExpressions: # 匹配podenv的值在["xxx","yyy"]中的标签
- key: podenv
operator: In
values: ["xxx","yyy"]
topologyKey: kubernetes.io/hostname
上面的配置表示:新Pod必须要与拥有标签podenv=xxx或者podenv=yyy的pod在同一Node上,显然现在还没有这样的pod,下面运行测试一下
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Podaffinity-Required.yaml
pod/pod-podaffinity-required created
# 查看Pod状态
# 发现创建失败
[root@master yaml]# kubectl get pods pod-podaffinity-required -n default
NAME READY STATUS RESTARTS AGE
pod-podaffinity-required 0/1 Pending 0 41s
# 查看Pod详情
# 发现NODE节点调度失败
[root@master yaml]# kubectl describe pods pod-podaffinity-required -n default
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 85s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 3 node(s) didn't match pod affinity rules. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
# 删除Pod
[root@master yaml]# kubectl delete -f Pod-Podaffinity-Required.yaml
pod "pod-podaffinity-required" deleted
# 修改Pod-Podaffinity-Required.yaml的values: ["xxx","yyy"]为values:["pro","yyy"]
# 再次创建Pod
[root@master yaml]# vim Pod-Podaffinity-Required.yaml
[root@master yaml]# kubectl create -f Pod-Podaffinity-Required.yaml
pod/pod-podaffinity-required created
# 再次查看Pod状态
# 发现Pod已经成调度到参照Pod的节点
[root@master yaml]# kubectl get pods pod-podaffinity-required -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podaffinity-required 1/1 Running 0 61s 10.244.67.109 work1.host.com <none> <none>
PodAffinity的preferredDuringSchedulingIgnoredDuringExecution(软限制)用法与上面类似,不再演示。
PodAntiAffinity主要实现以运行中的Pod为参照,让新创建的Pod跟参照pod不在一个区域中的功能。PodAntiAffinity的配置方式和PodAffinity是一样的,测试方法如下
# 继续使用PodAffinity的Pod为参照Pod
[root@master yaml]# kubectl get pods -n default -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
pod-podaffinity-target 1/1 Running 0 28m 10.244.67.108 work1.host.com <none> <none> podenv=pro
创建Pod-Podantiaffinity-Required.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-podantiaffinity-required
namespace: default
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
affinity: #亲和性设置
podAntiAffinity: #设置pod反亲和性
requiredDuringSchedulingIgnoredDuringExecution: # 硬限制
- labelSelector:
matchExpressions: # 匹配podenv的值在["pro"]中的标签
- key: podenv
operator: In
values: ["pro"]
topologyKey: kubernetes.io/hostname
上面配置为新Pod必须要与拥有标签podenv=pro的pod不在同一Node上,运行测试一下
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Podantiaffinity-Required.yaml
pod/pod-podantiaffinity-required created
# 查看Pod状态
# 发现Pod调度到了work2.host.com节点
[root@master yaml]# kubectl get pods pod-podantiaffinity-required -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podantiaffinity-required 1/1 Running 0 2m13s 10.244.52.230 work2.host.com <none> <none>
前面的调度方式都是站在Pod的角度上,通过在Pod上添加属性,来确定Pod是否要调度到指定的Node上,其实我们也可以站在Node的角度上,通过在Node上添加污点属性,来决定是否允许Pod调度过来。Node被设置上污点之后就和Pod之间存在了一种相斥的关系,进而拒绝Pod调度进来,甚至可以将已经存在的Pod驱逐出去。
污点的格式为:key=value:effect,key和value是污点的标签,effect描述污点的作用,支持如下三个选项:NoSchedule(kubernetes将不会把Pod调度到具有该污点的Node上,但不影响当前Node上已存在的Pod)、PreferNoSchedule(kubernetes将尽量避免把Pod调度到具有该污点的Node上,除非没有其他节点可调度)、NoExecute(kubernetes将不会把Pod调度到具有该污点的Node上,同时也会将Node上已存在的Pod驱离)。
# 设置污点
kubectl taint nodes <节点> key=value:effect
# 去除污点
kubectl taint nodes <节点> key:effect-
# 去除所有污点
kubectl taint nodes <节点> key-
# 查看污点
kubectl describe node <节点>
......
Taints: <none>
......
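下面以后文演示环境中的节点和污点为例,给出几条具体的命令写法(仅为示意):
# 为work1.host.com设置污点
kubectl taint nodes work1.host.com region=qingdao:NoSchedule
# 去除该污点(按key和effect去除)
kubectl taint nodes work1.host.com region:NoSchedule-
# 去除work1.host.com上key为region的所有污点
kubectl taint nodes work1.host.com region-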
以NoSchedule为例,创建Pod-Taints-Noschedule.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-taints-noschedule
namespace: default
labels:
app: pod
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
# 为work1.host.com创建污点
[root@master yaml]# kubectl taint nodes work1.host.com region=qingdao:NoSchedule
node/work1.host.com tainted
# 为work2.host.com创建污点
[root@master yaml]# kubectl taint nodes work2.host.com region=beijing:NoSchedule
node/work2.host.com tainted
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Taints-Noschedule.yaml
pod/pod-taints-noschedule created
# 查看Pod状态
# 发现Pod未被调度到节点上面
[root@master yaml]# kubectl get pod -n default pod-taints-noschedule -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-taints-noschedule 0/1 Pending 0 38s <none> <none> <none> <none>
# 查看Pod详情
# 发现集群3台NODE都有污点不能调度
[root@master yaml]# kubectl describe pod -n default pod-taints-noschedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 108s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {region: beijing}, 1 node(s) had untolerated taint {region: qingdao}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
污点就是拒绝,容忍就是忽略,Node通过污点拒绝pod调度上去,Pod通过容忍忽略拒绝。
配置模板
[root@master yaml]# kubectl explain pod.spec.tolerations
......
FIELDS:
key # 对应着要容忍的污点的键,空意味着匹配所有的键
value # 对应着要容忍的污点的值
operator # key-value的运算符,支持Equal(默认)和Exists
effect # 对应污点的effect,空意味着匹配所有影响
tolerationSeconds # 容忍时间, 当effect为NoExecute时生效,表示pod在Node上的停留时间
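下面给出两种常见容忍写法的示意片段(取值为假设):
tolerations:
- key: "region" # 容忍所有key为region的污点,不区分value
  operator: "Exists"
  effect: "NoSchedule"
- key: "region" # 容忍NoExecute污点,且被驱逐前还可在节点上停留3600秒
  operator: "Equal"
  value: "beijing"
  effect: "NoExecute"
  tolerationSeconds: 3600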
创建Pod-Toleration.yaml,内容如下
apiVersion: v1
kind: Pod
metadata:
name: pod-toleration
namespace: default
labels:
app: pod
spec:
containers:
- name: nginx
image: docker.io/library/nginx:1.23.1
tolerations: # 添加容忍
- key: "region" # 要容忍的污点的key
operator: "Equal" # 操作符equal等于
value: "beijing" # 容忍的污点的value
effect: "NoSchedule" # 添加容忍的规则,这里必须和标记的污点规则相同
# 创建Pod
[root@master yaml]# kubectl create -f Pod-Toleration.yaml
pod/pod-toleration created
# 查看Pod状态
# 发现成功调度到work2.host.com节点
[root@master yaml]# kubectl get pod pod-toleration -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-toleration 1/1 Running 0 6s 10.244.52.232 work2.host.com <none> <none>