Containerd - Container Management
BoyChai · 2023-01-12 · https://blog.boychai.xyz/index.php/archives/47/

Containerd Overview

What is Containerd?

Containerd is an industry-standard container runtime with an emphasis on simplicity, robustness and portability. It runs as a daemon on Linux and Windows and can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage, network attachments, and more.

The relationship between Docker and Containerd

Containerd started out as part of Docker, but Docker Inc. later split it out and donated it to an open-source foundation (the CNCF) to be developed and operated independently. Alibaba Cloud, AWS, Google, IBM and Microsoft all participate in Containerd's development.

Why learn Containerd?

Kubernetes introduced the CRI (Container Runtime Interface) back in version 1.5, but Docker does not conform to this standard. Since Docker dominated the market at the time, dropping it outright was not an option, so Kubernetes maintained a dedicated adapter (dockershim) just for Docker.

Docker has a great many features, but Kubernetes actually uses only a small subset of them, and the unused parts are themselves a potential source of security risk.

In version 1.20, Kubernetes announced its intention to deprecate Docker and stop supporting it as the default container runtime.

Version 1.24 made the deprecation final by removing dockershim. From 1.24 onward, using Docker as the underlying container engine requires installing a separately maintained shim (cri-dockerd).

Containerd supports the CRI standard, so the container runtime naturally switched over to Containerd.
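As an illustration of the CRI integration: the CRI debugging tool crictl is commonly pointed at containerd's socket with a small config file. The path and socket below are a typical setup (matching the criSocket used later in this article), not something prescribed by this post; adjust them for your distribution.

```yaml
# /etc/crictl.yaml — point crictl at containerd's CRI socket (assumed paths)
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
```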

Installing Containerd

YUM

It can be installed directly from Docker's package repository:

[root@host ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
[root@host ~]# yum -y install containerd.io 
......
[root@host ~]# rpm -qa containerd.io
containerd.io-1.6.15-3.1.el8.x86_64

Use the following command to enable containerd at boot and start it:

systemctl enable --now containerd

Binary

Packages

Containerd ships two kinds of release archives, which differ as follows:

  • The first, containerd-xxx, is fine for single-machine testing. It does not include runc, which must be installed beforehand.
  • The second, cri-containerd-cni-xxxx, includes runc and the files Kubernetes needs; use this one in a k8s cluster. Although it bundles runc, it depends on the system's seccomp (secure computing mode, a mechanism for restricting the system calls a process may make).

This article installs the second kind of package.

Getting the package

Download address: GitHub

The version used here is cri-containerd-cni-1.6.15-linux-amd64.tar.gz.

Download it and upload it to the server:

[root@host ~]# mkdir containerd
[root@host ~]# mv cri-containerd-cni-1.6.15-linux-amd64.tar.gz containerd/
[root@host ~]# cd containerd
[root@host containerd]# tar xvf cri-containerd-cni-1.6.15-linux-amd64.tar.gz 
[root@host containerd]# ls
cri-containerd-cni-1.6.15-linux-amd64.tar.gz  etc  opt  usr

Manual installation

[root@host containerd]# cp ./etc/systemd/system/containerd.service /etc/systemd/system/
[root@host containerd]# cp usr/local/sbin/runc /usr/sbin/
[root@host containerd]# cp usr/local/bin/ctr /usr/bin/
[root@host containerd]# cp ./usr/local/bin/containerd /usr/local/bin/
[root@host containerd]# mkdir /etc/containerd
[root@host containerd]# containerd config default > /etc/containerd/config.toml

Adjusting the configuration

[root@host containerd]# cat /etc/containerd/config.toml |grep sandbox
    sandbox_image = "registry.k8s.io/pause:3.6"

This parameter points to an image registry that is blocked in mainland China. Replace it with the command below; the new address is a mirror copy I made on Docker Hub.

[root@host containerd]# sed -i 's/registry.k8s.io\/pause:3.6/docker.io\/boychai\/pause:3.6/g' /etc/containerd/config.toml 
[root@test containerd]# cat /etc/containerd/config.toml |grep sandbox_image
    sandbox_image = "docker.io/boychai/pause:3.6"

Starting the service

[root@host containerd]# systemctl enable --now containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
[root@host containerd]# ctr version
Client:
  Version:  v1.6.15
  Revision: 5b842e528e99d4d4c1686467debf2bd4b88ecd86
  Go version: go1.18.9

Server:
  Version:  v1.6.15
  Revision: 5b842e528e99d4d4c1686467debf2bd4b88ecd86
  UUID: ebf1fe8b-37f7-4d94-8277-788e9f2c2a17
[root@test containerd]# runc -v
runc version 1.1.4
commit: v1.1.4-0-g5fd4c4d1
spec: 1.0.2-dev
go: go1.18.9
libseccomp: 2.5.1

Image Management

Help

[root@host ~]# ctr images -h
NAME:
   ctr images - manage images

USAGE:
   ctr images command [command options] [arguments...]

COMMANDS:
   check                    check existing images to ensure all content is available locally
   export                   export images
   import                   import images
   list, ls                 list images known to containerd
   mount                    mount an image to a target path
   unmount                  unmount the image from the target
   pull                     pull an image from a remote
   push                     push an image to a remote
   delete, del, remove, rm  remove one or more images by reference
   tag                      tag an image
   label                    set and clear labels for an image
   convert                  convert an image

OPTIONS:
   --help, -h  show help
Command overview:

  • check - verify that all of an image's content is available locally
  • export - export images
  • import - import images
  • list, ls - list images
  • mount - mount an image
  • unmount - unmount an image
  • pull - pull an image
  • push - push an image
  • delete, del, remove, rm - remove images
  • tag - retag an image
  • label - set and clear image labels
  • convert - convert an image

The images subcommand can be abbreviated; for example, "ctr i -h" prints the same help. Note also that ctr is namespace-aware: images pulled by Kubernetes live in the k8s.io namespace (ctr -n k8s.io images ls).

Pulling images

Containerd supports OCI-standard images, so images from Docker Hub or images built from a Dockerfile both work.

ctr i pull <image>

[root@host ~]# ctr images pull docker.io/library/nginx:alpine
docker.io/library/nginx:alpine:                                                   resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6:    exists         |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:c1b9fe3c0c015486cf1e4a0ecabe78d05864475e279638e9713eb55f013f907f: exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:c7a81ce22aacea2d1c67cfd6d3c335e4e14256b4ffb80bc052c3977193ba59ba:    done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16:   exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:83e90619bc2e4993eafde3a1f5caf5172010f30ba87bbc5af3d06ed5ed93a9e9:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:d52adec6f48bc3fe2c544a2003a277d91d194b4589bb88d47f4cfa72eb16015d:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:10eb2ce358fad29dd5edb0d9faa50ff455c915138fdba94ffe9dd88dbe855fbe:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:a1be370d6a525bc0ae6cf9840a642705ae1b163baad16647fd44543102c08581:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:689b9959905b6f507f527ce377d7c742a553d2cda8d3529c3915fb4a95ad45bf:    exists         |++++++++++++++++++++++++++++++++++++++| 
elapsed: 11.2s                                                                    total:  15.7 M (1.4 MiB/s)                                       
unpacking linux/amd64 sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6...
done: 709.697156ms

Listing images

ctr images <ls|list>

[root@host ~]# ctr images ls
REF                            TYPE                                                      DIGEST                                                                  SIZE     PLATFORMS                                                                                LABELS 
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -

Mounting images

Inspect an image's filesystem:

ctr images mount <image> <local dir>

[root@host ~]# mkdir /mnt/nginx-alpine
[root@host ~]# ctr images mount docker.io/library/nginx:alpine /mnt/nginx-alpine/
sha256:a71c46316a83c0ac8c2122376a89b305936df99fa354c265f5ad2c1825e94167
/mnt/nginx-alpine/
[root@host ~]# cd /mnt/nginx-alpine/
[root@host nginx-alpine]# ls
bin  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Unmounting images

Unmount an image filesystem previously mounted locally:

ctr images unmount <local dir>

[root@host ~]# ctr images unmount /mnt/nginx-alpine/
/mnt/nginx-alpine/
[root@host ~]# ls /mnt/nginx-alpine/

Exporting images

ctr images export --platform <platform> <output file> <image>

[root@host ~]# ctr images export --platform linux/amd64 nginx.tar docker.io/library/nginx:alpine
[root@host ~]# ls
anaconda-ks.cfg  containerd  nginx.tar

Deleting images

ctr images delete|del|remove|rm <image>

[root@host ~]# ctr images del docker.io/library/nginx:alpine
docker.io/library/nginx:alpine
[root@host ~]# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS

Importing images

ctr images import <image file>

[root@host ~]# ctr images import  nginx.tar 
unpacking docker.io/library/nginx:alpine (sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6)...done
[root@host ~]# ctr images ls
REF                            TYPE                                                      DIGEST                                                                  SIZE     PLATFORMS                                                                                LABELS 
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -  

Tagging images

Give an image a new name:

ctr images tag <source image> <new name>

[root@host ~]# ctr images ls
REF                            TYPE                                                      DIGEST                                                                  SIZE     PLATFORMS                                                                                LABELS 
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -      
[root@host ~]# ctr images tag docker.io/library/nginx:alpine nginx:alpine
nginx:alpine
[root@host ~]# ctr images ls
REF                            TYPE                                                      DIGEST                                                                  SIZE     PLATFORMS                                                                                LABELS 
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -      
nginx:alpine                   application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -  

Container Management

Kubernetes - Container Orchestration (Installation: Kubeadm + Containerd, 1.24.0)
BoyChai · 2022-07-31 · https://blog.boychai.xyz/index.php/archives/23/

Before you begin
  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based distributions, as well as some distributions without a package manager.
  • 2 GB or more of RAM per machine (less will squeeze the memory available to your applications).
  • 2 or more CPU cores.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • No duplicate hostnames, MAC addresses or product_uuid values among the nodes.
  • Certain ports open on the machines.
  • Swap disabled. You MUST disable swap for kubelet to work properly.

Environment

Hostname         OS         Hardware          Notes
master.host.com  Rocky 8.5  2 CPUs, 2 GB RAM  SELinux and firewall disabled; hosts reachable by hostname
work1.host.com   Rocky 8.5  2 CPUs, 2 GB RAM  SELinux and firewall disabled; hosts reachable by hostname
work2.host.com   Rocky 8.5  2 CPUs, 2 GB RAM  SELinux and firewall disabled; hosts reachable by hostname

Host initialization

Perform the following steps on every host.

Install and configure Containerd

curl -o /etc/yum.repos.d/docker.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install containerd.io
containerd config default | sudo tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
systemctl enable --now containerd

Disable swap

sudo swapoff -a
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
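The sed rule comments out every fstab line that mentions swap. Its effect can be checked on a throwaway copy first; the file name and entries below are made up for illustration only.

```shell
# Demo on a scratch copy, not the real /etc/fstab
cat > /tmp/fstab.demo <<'EOF'
UUID=1234-abcd /                   xfs  defaults 0 0
/dev/mapper/rl-swap none           swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
grep swap /tmp/fstab.demo   # the swap line is now prefixed with '#'
```

The `&` in the replacement re-inserts the whole matched line, so the entry is preserved but disabled.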

Let iptables see bridged traffic and enable kernel IP forwarding

modprobe  br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

Configure IPVS

Services can be proxied with either an iptables-based or an ipvs-based model; ipvs performs better, but its kernel modules must be loaded manually:
yum install -y ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
/bin/bash /etc/sysconfig/modules/ipvs.modules
If the following error appears, run the commands below it:
modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/4.18.0-348.el8.0.2.x86_64
sed -i 's/nf_conntrack_ipv4/nf_conntrack/g' /etc/sysconfig/modules/ipvs.modules
/bin/bash /etc/sysconfig/modules/ipvs.modules
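The rename is needed because newer kernels ship nf_conntrack rather than nf_conntrack_ipv4. The sed fix can be sanity-checked on a scratch copy before touching the real module script (the /tmp path below is for demonstration only):

```shell
# Scratch-copy demo (real file: /etc/sysconfig/modules/ipvs.modules)
cat > /tmp/ipvs.modules.demo <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- nf_conntrack_ipv4
EOF
sed -i 's/nf_conntrack_ipv4/nf_conntrack/g' /tmp/ipvs.modules.demo
grep nf_conntrack /tmp/ipvs.modules.demo   # now loads nf_conntrack
```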

Install the Kubernetes packages

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.24.0 kubeadm-1.24.0 kubectl-1.24.0
systemctl enable --now kubelet

Installing Kubernetes

MASTER node

Generate a kubeadm configuration file: sudo kubeadm config print init-defaults > kubeadm.yaml
Edit kubeadm.yaml and change the following:

advertiseAddress: your own IP
the name field under nodeRegistration: your own hostname
imageRepository: registry.aliyuncs.com/google_containers

Then add the pod subnet to the networking section: podSubnet: 10.244.0.0/16
The edited file looks like this:

$ cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.109
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master.host.com
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}

Pull the images Kubernetes needs:

$ kubeadm config --config kubeadm.yaml images pull
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

Note the version of the pause image printed above; here it is registry.aliyuncs.com/google_containers/pause:3.7.
Edit containerd's configuration file /etc/containerd/config.toml and set sandbox_image to that image's full name and tag:

$ cat /etc/containerd/config.toml |grep sandbox
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"

Restart Containerd: systemctl restart containerd
Initialize the master node: kubeadm init --config kubeadm.yaml
Note: the sandbox_image change must be made on every host, not only the master.
On success, output like the following is printed:


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.4:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:91b1d4502e8950ece37fbc591160007f5e2a3311ff0ebe05112d24851ca082a9

Of that output, the following part must be run by hand:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

The final part of the output is the join command; worker nodes join the cluster by running it. (The token is only valid for 24 hours by default; a fresh join command can be printed later on the master with kubeadm token create --print-join-command.)

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.4:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:91b1d4502e8950ece37fbc591160007f5e2a3311ff0ebe05112d24851ca082a9

WORK nodes

On each worker node, run the join command returned by the master. Output like the following means the node joined successfully:

kubeadm join 192.168.0.4:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:91b1d4502e8950ece37fbc591160007f5e2a3311ff0ebe05112d24851ca082a9

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Network plugin: Calico

See the official documentation when choosing a network plugin; this article uses Calico.
On the master node, download the Calico manifest:

curl https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calico.yaml -O

Find the following two lines, uncomment them, and change the value:

# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"

The value should be the pod subnet chosen when creating the master node, 10.244.0.0/16. After editing:

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

Then apply it as follows:

$ sudo kubectl apply -f calico.yaml
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

Once this completes without errors, run kubectl get node to check node connectivity; when every STATUS shows Ready, the deployment has succeeded.

$ kubectl get node
NAME             STATUS   ROLES           AGE   VERSION
master.host.com  Ready    control-plane   43m   v1.24.3
work1.host.com   Ready    <none>          39m   v1.24.3
work2.host.com   Ready    <none>          39m   v1.24.3

Problems

If you run into errors or other issues, feel free to discuss them in the comments.