The ngx_lua module, developed at Taobao, embeds a Lua interpreter into Nginx so that business logic can be written in Lua scripts. Because Lua is compact and fast and has built-in coroutines, it greatly reduces the cost of implementing business logic while preserving high-concurrency serving capacity.
LuaJIT is a Just-In-Time compiler for the Lua language, written in C.
Official site: http://luajit.org
Download the matching release from: https://github.com/LuaJIT/LuaJIT/tags
[root@work env]# wget https://github.com/LuaJIT/LuaJIT/archive/refs/tags/v2.0.5.tar.gz
[root@work env]# tar xvf v2.0.5.tar.gz
[root@work env]# cd LuaJIT-2.0.5/
[root@work LuaJIT-2.0.5]# make && make install
make[1]: Leaving directory '/opt/env/LuaJIT-2.0.5/src'
==== Successfully built LuaJIT 2.0.5 ====
==== Installing LuaJIT 2.0.5 to /usr/local ====
mkdir -p /usr/local/bin /usr/local/lib /usr/local/include/luajit-2.0 /usr/local/share/man/man1 /usr/local/lib/pkgconfig /usr/local/share/luajit-2.0.5/jit /usr/local/share/lua/5.1 /usr/local/lib/lua/5.1
cd src && install -m 0755 luajit /usr/local/bin/luajit-2.0.5
cd src && test -f libluajit.a && install -m 0644 libluajit.a /usr/local/lib/libluajit-5.1.a || :
rm -f /usr/local/bin/luajit /usr/local/lib/libluajit-5.1.so.2.0.5 /usr/local/lib/libluajit-5.1.so /usr/local/lib/libluajit-5.1.so.2
cd src && test -f libluajit.so && \
install -m 0755 libluajit.so /usr/local/lib/libluajit-5.1.so.2.0.5 && \
ldconfig -n /usr/local/lib && \
ln -sf libluajit-5.1.so.2.0.5 /usr/local/lib/libluajit-5.1.so && \
ln -sf libluajit-5.1.so.2.0.5 /usr/local/lib/libluajit-5.1.so.2 || :
cd etc && install -m 0644 luajit.1 /usr/local/share/man/man1
cd etc && sed -e "s|^prefix=.*|prefix=/usr/local|" -e "s|^multilib=.*|multilib=lib|" luajit.pc > luajit.pc.tmp && \
install -m 0644 luajit.pc.tmp /usr/local/lib/pkgconfig/luajit.pc && \
rm -f luajit.pc.tmp
cd src && install -m 0644 lua.h lualib.h lauxlib.h luaconf.h lua.hpp luajit.h /usr/local/include/luajit-2.0
cd src/jit && install -m 0644 bc.lua v.lua dump.lua dis_x86.lua dis_x64.lua dis_arm.lua dis_ppc.lua dis_mips.lua dis_mipsel.lua bcsave.lua vmdef.lua /usr/local/share/luajit-2.0.5/jit
ln -sf luajit-2.0.5 /usr/local/bin/luajit
==== Successfully installed LuaJIT 2.0.5 to /usr/local ====
The third-party Nginx module lua-nginx-module
Project page: https://github.com/openresty/lua-nginx-module
[root@work env]# wget https://github.com/openresty/lua-nginx-module/archive/refs/tags/v0.10.26.tar.gz
[root@work env]# tar xvf v0.10.26.tar.gz
[root@work env]# ln -s lua-nginx-module-0.10.26 lua-nginx-module
[root@work ~]# tail -n2 /etc/profile
export LUAJIT_LIB=/usr/local/lib
export LUAJIT_INC=/usr/local/include/luajit-2.0
[root@work ~]# source /etc/profile
Go back to the directory where Nginx was built from source and recompile and reinstall:
[root@work nginx-1.24.0]# ./configure --prefix=/usr/local/nginx --sbin-path=/usr/local/nginx/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --user=nginx --group=nginx --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client/ --http-proxy-temp-path=/var/tmp/nginx/proxy/ --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --http-scgi-temp-path=/var/tmp/nginx/scgi --with-pcre --add-module=/opt/package/nginx/lua-nginx-module
[root@work nginx-1.24.0]# make && make install
The key additions are --with-pcre --add-module=/opt/package/nginx/lua-nginx-module
This amounts to a fresh installation, so any modules added in previous builds must be specified here again.
If, after extending Nginx with the module, running an nginx command produces the following error:
[root@work ~]# nginx -V
nginx: error while loading shared libraries: libluajit-5.1.so.2: cannot open shared object file: No such file or directory
This error means that at startup Nginx cannot find the shared library libluajit-5.1.so.2: the module depends on LuaJIT, but the dynamic linker cannot locate the library. One fix is to symlink it into the linker's search path:
[root@work ~]# ln -s /usr/local/lib/libluajit-5.1.so.2 /lib64/libluajit-5.1.so.2
[root@work conf]# nginx
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
nginx: [alert] failed to load the 'resty.core' module (https://github.com/openresty/lua-resty-core); ensure you are using an OpenResty release from https://openresty.org/en/download.html (reason: module 'resty.core' not found:
no field package.preload['resty.core']
no file './resty/core.lua'
no file '/usr/local/share/luajit-2.0.5/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core/init.lua'
no file './resty/core.so'
no file '/usr/local/lib/lua/5.1/resty/core.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file './resty.so'
no file '/usr/local/lib/lua/5.1/resty.so'
no file '/usr/local/lib/lua/5.1/loadall.so') in /usr/local/nginx/conf/nginx.conf:117
The cause appears to be a missing lua-resty-core module; build and install it manually.
Project page: https://github.com/openresty/lua-resty-core
[root@work nginx]# tar xvf v0.1.28.tar.gz
[root@work nginx]# cd lua-resty-core-0.1.28
[root@work lua-resty-core-0.1.28]# make install
Note: lua-resty-core in turn depends on lua-resty-lrucache, which can be installed the same way if it is missing.
Alternatively, just use OpenResty. OpenResty, created by engineers at Taobao, is a high-performance web platform based on Nginx and Lua. It bundles a large collection of well-crafted Lua libraries, third-party modules, and most of their dependencies, making it easy to build dynamic web applications, web services, and gateways that handle high concurrency and scale well. Because OpenResty already integrates Nginx and Lua, it is much more convenient to use.
Reference: https://openresty.org/cn/linux-packages.html
Configuration: /usr/local/openresty/nginx/conf
PS: this article covers only the use of ngx_lua; everything else is basically the same as ordinary Nginx configuration.
The basic building block for scripting Nginx with Lua is the directive. Directives specify when user Lua code runs and how its result is used. The figure below shows the order in which directives execute.
First, the meaning of the * suffix:
*: (none), i.e. xxx_by_lua — the directive is followed by inline Lua code in a quoted string
*: _file, i.e. xxx_by_lua_file — the directive is followed by the path of a Lua file
*: _block, i.e. xxx_by_lua_block — introduced in v0.9.17 to replace the quoted-string form; the Lua code goes inside a {} block, which avoids string-escaping issues
init_by_lua*: runs every time Nginx loads its configuration; useful for loading expensive modules or initializing global settings.
init_worker_by_lua*: used to start recurring tasks, such as heartbeat checks or periodically pulling server configuration.
set_by_lua*: mainly used for variable assignment; it returns only one value at a time, which is assigned to the specified Nginx variable.
rewrite_by_lua*: performs internal URL rewrites or external redirects, e.g. pseudo-static URL rewriting; by default it runs at the end of the rewrite phase.
access_by_lua*: used for access control, for example allowing only intranet IPs.
content_by_lua*: the most heavily used directive; most of the work happens in this phase, and the other phases usually just prepare data for it.
header_filter_by_lua*: sets fields in the response headers.
body_filter_by_lua*: filters the response body, e.g. truncating or replacing content.
log_by_lua*: runs Lua code during the log phase to process logs, without replacing the original log handling.
balancer_by_lua*: mainly used to implement load-balancing algorithms for upstream servers.
ssl_certificate_by_lua*: runs the configured Lua code when Nginx starts an SSL handshake with a downstream client.
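As an illustrative sketch of how several of these phases combine (the location path, variable names, and values are hypothetical; both directives below belong inside the http {} block):

```nginx
init_by_lua_block {
    -- runs once per configuration load: initialize a shared global table
    app_conf = { greeting = "hello" }
}
server {
    location /demo {
        access_by_lua_block {
            -- access phase: allow only loopback clients
            if ngx.var.remote_addr ~= "127.0.0.1" then
                return ngx.exit(ngx.HTTP_FORBIDDEN)
            end
        }
        content_by_lua_block {
            -- content phase: produce the response body
            ngx.say(app_conf.greeting)
        }
    }
}
```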
Output some content:
location /lua {
default_type 'text/html';
content_by_lua 'ngx.say("<h1>HELLO,OpenResty</h1>")';
}
http://xxx/?name=张三&gender=1
After receiving the request, Nginx inspects the gender value: if gender is 1 it shows 张三先生 (Mr.), if 0 it shows 张三女士 (Ms.), and otherwise just 张三.
location /getByGender {
default_type 'text/html';
set_by_lua $param "
-- get the arguments from the request query string
local uri_args = ngx.req.get_uri_args()
local name = uri_args['name']
local gender = uri_args['gender']
-- branch on gender: 1 = Mr. (先生), 0 = Ms. (女士)
if gender == '1' then
return name..'先生'
elseif gender == '0' then
return name..'女士'
else
return name
end
";
# avoid garbled Chinese in the response
charset utf-8;
# return the data
return 200 $param;
}
ngx.req.get_uri_args() returns a Lua table.
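Since the return value is a plain Lua table, it can be iterated with pairs(); a small sketch (the /args location is hypothetical):

```nginx
location /args {
    default_type 'text/plain';
    content_by_lua_block {
        local args = ngx.req.get_uri_args()
        for key, val in pairs(args) do
            -- a repeated query argument arrives as a nested table
            if type(val) == "table" then
                ngx.say(key, " = ", table.concat(val, ", "))
            else
                ngx.say(key, " = ", val)
            end
        end
    }
}
```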
Dynamically resolve a Docker container's IP and proxy to it:
server{
listen 80;
server_name code.boychai.xyz;
client_max_body_size 4096M;
set_by_lua $param '
local name = "gitea"
local port = "3000"
local command = string.format("echo -n `docker inspect --format=\'{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}\' %s`", name)
local handle = io.popen(command)
local result = handle:read("*a")
handle:close()
return "http://"..result..":"..port
';
location / {
if ( $param = 'http://:3000' ) {
return 500 "Error in obtaining site IP";
}
proxy_pass $param;
proxy_set_header Host $proxy_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
task-nodejs.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-node-project
spec:
workspaces:
- name: cache
mountPath: /root/.npm
- name: source
- name: output
params:
- name: imgTag
type: string
- name: run
type: string
- name: dir
type: string
steps:
- name: build
workingDir: "$(workspaces.source.path)/$(params.dir)"
image: "node:$(params.imgTag)"
script: |
rm -rf package-lock.json
npm install --registry=https://registry.npmmirror.com/
npm run $(params.run)
cp -r dist/* $(workspaces.output.path)/
taskrun.yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
generateName: build-node-project-run-
generation: 1
namespace: cicd-services
spec:
params:
- name: dir
value: frontend
- name: imgTag
value: 21.6.2
- name: run
value: build
serviceAccountName: default
taskRef:
kind: Task
name: build-node-project
workspaces:
- name: cache
persistentVolumeClaim:
claimName: node-cache-pvc
- name: source
persistentVolumeClaim:
claimName: test-tekton-vue-pvc
- name: output
persistentVolumeClaim:
claimName: test-tekton-vue-output-pvc
Running it produces the following error:
TaskRunValidationFailed [User error] more than one PersistentVolumeClaim is bound
In other words, a TaskRun may not bind more than one PersistentVolumeClaim. That is rather surprising: using several volumes during CI/CD is perfectly normal, yet Tekton does not support binding multiple PVCs by default.
The fix is to edit Tekton's feature flags and set the disable-affinity-assistant parameter to true:
kubectl -n tekton-pipelines edit cm feature-flags
The parameter works as follows: by default Tekton creates an Affinity Assistant pod for every TaskRun that shares a workspace, forcing those pods onto the same node so that a PVC is never accessed across nodes — and, as a side effect, limiting each TaskRun to a single PVC. Setting the flag to true stops Tekton from creating these Affinity Assistant pods and lifts the restriction.
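For reference, after the edit the relevant key in the ConfigMap looks roughly like this (other keys omitted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  disable-affinity-assistant: "true"
```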
Also, this behavior is slated to be deprecated around v0.60, so future releases will likely no longer raise this error.
ISSUE: https://github.com/tektoncd/pipeline/issues/6543
TektonDocs: https://github.com/tektoncd/pipeline/blob/main/docs/affinityassistants.md
Configuration reference: https://www.soulchild.cn/post/tekton-operator%E9%85%8D%E7%BD%AE%E5%8F%82%E6%95%B0%E8%AF%A6%E8%A7%A3/
Category | Full name | Description |
---|---|---|
DDL | Data Definition Language | Defines database objects (databases, tables, fields) |
DML | Data Manipulation Language | Inserts, updates, and deletes the data in tables |
DQL | Data Query Language | Queries the records in tables |
DCL | Data Control Language | Creates database users and controls access privileges |
SHOW DATABASES;
SELECT DATABASE();
CREATE DATABASE [ IF NOT EXISTS ] db_name [ DEFAULT CHARSET charset ] [ COLLATE collation ];
Adding "IF NOT EXISTS" before the database name means the database is created only if it does not already exist; otherwise nothing happens.
DROP DATABASE [ IF EXISTS ] db_name;
Adding "IF EXISTS" means the database is dropped only if it exists; otherwise nothing happens.
USE db_name;
PS: a database must be selected with USE before table operations can be performed.
SHOW TABLES;
DESC table_name;
SHOW CREATE TABLE table_name;
CREATE TABLE table_name(
    field1 type1 [ COMMENT 'comment for field1' ],
    field2 type2 [ COMMENT 'comment for field2' ],
    field3 type3 [ COMMENT 'comment for field3' ]
    ......
) [ COMMENT 'table comment' ];
Rename a table
ALTER TABLE table_name RENAME TO new_table_name;
DROP TABLE [ IF EXISTS ] table_name;
"IF EXISTS" means the table is dropped only if it exists; otherwise nothing happens.
TRUNCATE TABLE table_name;
ALTER TABLE table_name ADD field_name type(length) [ COMMENT 'comment' ] [ constraint ];
ALTER TABLE table_name MODIFY field_name new_type(length);
ALTER TABLE table_name CHANGE old_field_name new_field_name type(length) [ COMMENT 'comment' ] [ constraint ];
ALTER TABLE table_name DROP field_name;
MySQL data types fall into three main groups: numeric types, string types, and date/time types.
Type | Size | Signed range (SIGNED) | Unsigned range (UNSIGNED) | Description |
---|---|---|---|---|
TINYINT | 1 byte | (-128, 127) | (0, 255) | small integer |
SMALLINT | 2 bytes | (-32768, 32767) | (0, 65535) | large integer |
MEDIUMINT | 3 bytes | (-8388608, 8388607) | (0, 16777215) | large integer |
INT or INTEGER | 4 bytes | (-2147483648, 2147483647) | (0, 4294967295) | large integer |
BIGINT | 8 bytes | (-2^63, 2^63-1) | (0, 2^64-1) | very large integer |
FLOAT | 4 bytes | (-3.402823466 E+38, 3.402823466 E+38) | 0 and (1.175494351 E-38, 3.402823466 E+38) | single-precision floating point |
DOUBLE | 8 bytes | (-1.7976931348623157 E+308, 1.7976931348623157 E+308) | 0 and (2.2250738585072014 E-308, 1.7976931348623157 E+308) | double-precision floating point |
DECIMAL | depends on M (precision) and D (scale) | depends on M and D | depends on M and D | exact fixed-point decimal |
Type | Size | Description |
---|---|---|
CHAR | 0-255 bytes | fixed-length string (length must be specified) |
VARCHAR | 0-65535 bytes | variable-length string (length must be specified) |
TINYBLOB | 0-255 bytes | binary data of at most 255 bytes |
TINYTEXT | 0-255 bytes | short text string |
BLOB | 0-65 535 bytes | long binary data |
TEXT | 0-65 535 bytes | long text data |
MEDIUMBLOB | 0-16 777 215 bytes | medium-length binary data |
MEDIUMTEXT | 0-16 777 215 bytes | medium-length text data |
LONGBLOB | 0-4 294 967 295 bytes | very large binary data |
LONGTEXT | 0-4 294 967 295 bytes | very large text data |
Type | Size (bytes) | Range | Format | Description |
---|---|---|---|---|
DATE | 3 | 1000-01-01 to 9999-12-31 | YYYY-MM-DD | date value |
TIME | 3 | -838:59:59 to 838:59:59 | HH:MM:SS | time value or duration |
YEAR | 1 | 1901 to 2155 | YYYY | year value |
DATETIME | 8 | 1000-01-01 00:00:00 to 9999-12-31 23:59:59 | YYYY-MM-DD HH:MM:SS | combined date and time |
TIMESTAMP | 4 | 1970-01-01 00:00:01 to 2038-01-19 03:14:07 | YYYY-MM-DD HH:MM:SS | combined date and time, timestamp |
INSERT INTO table_name(field1, field2, ......) VALUES (value1, value2, ......);
INSERT INTO table_name VALUES (value1, value2, ......);
INSERT INTO table_name(field1, field2, ......) VALUES (value1, value2, ......), (value1, value2, ......), (value1, value2, ......);
INSERT INTO table_name VALUES (value1, value2, ......), (value1, value2, ......), (value1, value2, ......);
UPDATE table_name SET field1=value1, field2=value2, ...... [ WHERE condition ];
DELETE FROM table_name [ WHERE condition ];
When updating or deleting, omitting the WHERE clause affects every row in the table.
SELECT
    field_list
FROM
    table_list
WHERE
    conditions
GROUP BY
    grouping_fields
HAVING
    conditions_after_grouping
ORDER BY
    sort_fields
LIMIT
    pagination_parameters
SELECT field1, field2, .... FROM table_name;
SELECT * FROM table_name;
SELECT field1 [ AS 'alias1' ], field2 [ AS 'alias2' ] FROM table_name;
In the output the field names are replaced by their aliases; the AS keyword is optional, e.g.
SELECT field 'alias' FROM table_name;
SELECT DISTINCT field_list FROM table_name;
SELECT field_list FROM table_name WHERE conditions;
Operator | Meaning |
---|---|
> | greater than |
\>= | greater than or equal to |
< | less than |
<= | less than or equal to |
= | equal to |
<> or != | not equal to |
BETWEEN ... AND ... | within a range (inclusive of both bounds) |
IN(...) | equal to any value in the list |
LIKE placeholder | pattern match (_ matches exactly one character, % matches any number of characters) |
IS NULL | is NULL |
Operator | Meaning |
---|---|
AND or && | and (all conditions must hold) |
OR or \|\| | or (any one condition holds) |
NOT or ! | not |
SELECT aggregate_function(field) FROM table_name;
Function | Meaning |
---|---|
count | number of rows |
max | maximum value |
min | minimum value |
avg | average value |
sum | total |
SELECT field_list FROM table_name [ WHERE conditions ] GROUP BY grouping_field [ HAVING filter_after_grouping ];
SELECT field_list FROM table_name ORDER BY field1 sort_order1, field2 sort_order2;
ASC: ascending (the default)
DESC: descending
SELECT field_list FROM table_name LIMIT start_index, row_count;
DQL clauses execute in the order FROM (tables) > WHERE (row filter) > GROUP BY (grouping) > HAVING (group filter) > SELECT (fields) > ORDER BY (sorting) > LIMIT (pagination).
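The execution order can be seen in a small worked example. The sketch below uses Python's built-in sqlite3 for convenience (the table name and data are made up; the clause semantics shown are shared with MySQL):

```python
import sqlite3

# Illustrative in-memory table; names and data are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    ("a", "dev", 100), ("b", "dev", 200), ("c", "ops", 300), ("d", "ops", 50),
])

# Clauses run as: FROM -> WHERE -> GROUP BY -> HAVING -> SELECT -> ORDER BY -> LIMIT
rows = conn.execute(
    "SELECT dept, AVG(salary) AS avg_salary "
    "FROM emp "
    "WHERE salary > 60 "       # row d (50) is filtered out BEFORE grouping
    "GROUP BY dept "           # dev avg = 150, ops avg = 300
    "HAVING AVG(salary) >= 150 "
    "ORDER BY avg_salary DESC "
    "LIMIT 1"
).fetchall()
print(rows)  # [('ops', 300.0)]
```

Because WHERE runs before GROUP BY, the 50-salary row never enters the averages; because HAVING runs after grouping, it can filter on AVG(salary).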
USE mysql;
SELECT * FROM user;
CREATE USER `username`@`hostname` IDENTIFIED BY 'password';
ALTER USER `username`@`hostname` IDENTIFIED WITH mysql_native_password BY 'new_password';
DROP USER `username`@`hostname`;
The common privileges are listed below
Privilege | Description |
---|---|
ALL, ALL PRIVILEGES | all privileges |
SELECT | query data |
INSERT | insert data |
UPDATE | update data |
DELETE | delete data |
ALTER | alter tables |
DROP | drop databases, tables, and views |
CREATE | create databases and tables |
These are the common ones; the rest can be found in the official documentation
SHOW GRANTS FOR `username`@`hostname`;
GRANT privilege_list ON db_name.table_name TO `username`@`hostname`;
REVOKE privilege_list ON db_name.table_name FROM `username`@`hostname`;
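Putting the DCL statements together, a sketch with hypothetical names (user reader, database shop):

```sql
CREATE USER `reader`@`localhost` IDENTIFIED BY 'p@ssw0rd';
GRANT SELECT ON shop.* TO `reader`@`localhost`;   -- read-only access to one database
SHOW GRANTS FOR `reader`@`localhost`;
REVOKE SELECT ON shop.* FROM `reader`@`localhost`;
DROP USER `reader`@`localhost`;
```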
Documentation: https://helm.sh/zh/docs/topics/charts/
Create a chart named mychart with the following command:
[root@Kubernetes charts]# helm create mychart
Creating mychart
[root@Kubernetes charts]# tree mychart
mychart
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── NOTES.txt
│ ├── serviceaccount.yaml
│ ├── service.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml
3 directories, 10 files
Path | Description |
---|---|
charts | Directory for subcharts; holds every chart this chart depends on |
Chart.yaml | Basic information about the chart (name, description, version, etc.); these values can be referenced by the files under templates |
templates | Template directory; holds all the YAML templates used to deploy the application |
templates/deployment.yaml | Template that creates the Deployment object |
templates/_helpers.tpl | Template helpers that can be reused across the whole chart; snippets the YAML files under templates are likely to share |
templates/hpa.yaml | Template that creates a HorizontalPodAutoscaler |
templates/ingress.yaml | Template that creates an Ingress |
templates/NOTES.txt | Help text shown to the user after helm install; explains how to use the chart |
templates/serviceaccount.yaml | Template that creates a ServiceAccount |
templates/service.yaml | Template that creates a Service |
templates/tests | Test files; after deploying the chart (e.g. a web app), make a connection to verify the deployment succeeded |
templates/tests/test-connection.yaml | Test that connects to the released service to verify it works |
values.yaml | Stores the values of the variables used by the templates under templates; every variable defined here exists to be referenced by those YAML files |
Write a chart that does not reference any built-in object values (use Helm 3 to create a ConfigMap release in the K8s cluster; releasing other applications works the same way — we will move from simple to advanced)
Write the chart:
[root@Kubernetes charts]# helm create myConfigMapChart
Creating myConfigMapChart
[root@Kubernetes charts]# cd myConfigMapChart/templates/
[root@Kubernetes templates]# rm -rf ./*
[root@Kubernetes templates]# vim configmap.yaml
# file contents:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-cm-chart
data:
myValue: "Hello,World"
Create the release:
[root@Kubernetes charts]# ls
mychart myConfigMapChart
[root@Kubernetes charts]# helm install mycm ./myConfigMapChart/
NAME: mycm
LAST DEPLOYED: Thu May 11 00:26:16 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
List releases:
[root@Kubernetes charts]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
mycm default 1 2023-05-11 00:26:16.392817959 +0800 CST deployed myConfigMapChart-0.1.0 1.16.0
Inspect the rendered manifest of the release:
[root@Kubernetes charts]# helm get manifest mycm
---
# Source: myConfigMapChart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-cm-chart
data:
myValue: "Hello,World"
Delete the release:
[root@Kubernetes charts]# helm uninstall mycm
release "mycm" uninstalled
[root@Kubernetes charts]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
Case 1 merely created a ConfigMap, which in practice is no different from a plain kubectl apply. Case 2 introduces variables into the chart: the ConfigMap's name is now produced from a template variable.
Modify the chart:
[root@Kubernetes charts]# vim myConfigMapChart/templates/configmap.yaml
[root@Kubernetes charts]# cat myConfigMapChart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
# name: my-cm-chart
name: {{ .Release.Name }}-configmap
data:
# myValue: "Hello,World"
myValue: {{ .Values.MY_VALUE }}
[root@Kubernetes charts]# > myConfigMapChart/values.yaml
[root@Kubernetes charts]# vim myConfigMapChart/values.yaml
[root@Kubernetes charts]# cat !$
cat myConfigMapChart/values.yaml
MY_VALUE: "Hello,World"
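A value defined in values.yaml can also be overridden per release at install time with --set, without editing the file (the release name mycm-tmp below is hypothetical):

```shell
helm install mycm-tmp ./myConfigMapChart/ --set MY_VALUE="Hi,Helm"
```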
Create the release:
[root@Kubernetes charts]# helm install mycm2 ./myConfigMapChart/
NAME: mycm2
LAST DEPLOYED: Thu May 11 00:49:06 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Inspect the release:
[root@Kubernetes charts]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
mycm2 default 1 2023-05-11 00:49:06.855993522 +0800 CST deployed myConfigMapChart-0.1.0 1.16.0
[root@Kubernetes charts]# kubectl get cm
NAME DATA AGE
kube-root-ca.crt 1 77d
mycm2-configmap 1 87s
The YAML explained:
apiVersion: v1
kind: ConfigMap
metadata:
# name: my-cm-chart
name: {{ .Release.Name }}-configmap
data:
# myValue: "Hello,World"
myValue: {{ .Values.MY_VALUE }}
The benefit of referencing built-in objects or other variables:
If metadata.name were set to a fixed value, the template could not be deployed to the cluster more than once. Instead, each time the chart is installed, metadata.name is set from the release name: since every release has a distinct name, the resources rendered from the template also get distinct names, so the chart can be deployed repeatedly.
Helm provides a command that renders the templates without performing any installation; it can be used to check that the templates render as intended. The syntax is:
helm install [release-name] chart-directory --debug --dry-run
Example:
[root@Kubernetes charts]# helm install mycm3 ./myConfigMapChart/ --debug --dry-run
install.go:200: [debug] Original chart version: ""
install.go:217: [debug] CHART PATH: /root/charts/myConfigMapChart
NAME: mycm3
LAST DEPLOYED: Thu May 11 01:03:15 2023
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
MY_VALUE: Hello,World
HOOKS:
MANIFEST:
---
# Source: myConfigMapChart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
# name: my-cm-chart
name: mycm3-configmap
data:
# myValue: "Hello,World"
myValue: Hello,World
[root@Kubernetes charts]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
The traditional flow for deploying a service to a K8s cluster:
pull the code -----> compile and package -----> build the image -----> prepare the YAML files -----> deploy with kubectl apply
Problems caused by the traditional approach:
Helm is the package manager for K8s; it makes it easy to discover, share, and build K8s applications.
As K8s's package manager — the counterpart of yum on CentOS — Helm gathers all the resources of a service into one chart, lets the same set of resources be released into multiple environments, and combines an application's resources and deployment information into a single deployable package.
Like the rpm package managers on Linux (yum/apt, etc.), it makes deploying pre-packaged YAML files onto K8s convenient.
Helm 3 requires a release name; to have one generated, add the --generate-name option (Helm 2 generated a random name automatically when none was given).
helm install ./mychart --generate-name
Related links:
GitHub: https://github.com/helm/helm/releases
Official site: https://helm.sh/zh/docs/intro/install/
Download the appropriate release, upload it to the server, and unpack it
[root@Kubernetes helm]# ls
helm-v3.11.3-linux-amd64.tar.gz
[root@Kubernetes helm]# tar xvf helm-v3.11.3-linux-amd64.tar.gz
linux-amd64/
linux-amd64/LICENSE
linux-amd64/README.md
linux-amd64/helm
Then adjust the environment so the command can be invoked from anywhere on the system
[root@Kubernetes helm]# ls
helm-v3.11.3-linux-amd64.tar.gz linux-amd64
[root@Kubernetes helm]# cd linux-amd64/
[root@Kubernetes linux-amd64]# pwd
/opt/helm/linux-amd64
[root@Kubernetes linux-amd64]# vim /etc/profile
# append the following
export PATH=$PATH:/opt/helm/linux-amd64
[root@Kubernetes linux-amd64]# source /etc/profile
Verify the installation
[root@Kubernetes ~]# helm version
version.BuildInfo{Version:"v3.11.3", GitCommit:"323249351482b3bbfc9f5004f65d400aa70f9ae7", GitTreeState:"clean", GoVersion:"go1.20.3"}
fdisk is a tool for managing disk partitions, commonly used on Linux and other Unix-like operating systems. It can create, delete, and modify disk partitions and supports many filesystem types, such as FAT, ext2, and ext3.
fdisk can also display the partition information of every disk in the system, including the disk identifier and each partition's type and size. With fdisk, users can easily manage disk space and allocate separate storage for different operating systems or applications.
In addition, fdisk supports the MBR (Master Boot Record) partitioning scheme, a common scheme that lets an operating system boot under BIOS.
MBR (Master Boot Record) partitioning refers to partitioning a disk under the MBR scheme. The first 512 bytes of the disk (the MBR itself) hold the partition table and the boot loader; the partition table records each partition's type, starting position, size, and so on. MBR supports at most 4 primary partitions, or 3 primary partitions plus 1 extended partition, and the extended partition can be divided into multiple logical partitions. MBR has been around for a long time, but it has one drawback: it supports disks of at most 2 TB. Larger disks require the GPT (GUID Partition Table) scheme.
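The 512-byte layout can be illustrated by building and parsing a synthetic MBR in Python (the values are made up; a real disk's MBR would be read with something like open("/dev/sda", "rb").read(512), which requires root):

```python
import struct

# Synthetic 512-byte MBR for illustration only
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"  # boot signature occupies the last two bytes

# One 16-byte partition-table entry at offset 446: bootable flag, CHS start,
# type 0x83 (Linux), CHS end, starting LBA, sector count (5 GiB of 512-byte sectors)
mbr[446:462] = struct.pack("<B3sB3sII", 0x80, b"\x00\x00\x00", 0x83,
                           b"\x00\x00\x00", 2048, 10485760)

# Parse the entry back out of the partition table
boot, _, ptype, _, start_lba, nsectors = struct.unpack("<B3sB3sII", bytes(mbr[446:462]))
size_gib = nsectors * 512 / 2**30
print(hex(ptype), start_lba, size_gib)  # 0x83 2048 5.0
```

The table holds four such 16-byte entries (offsets 446, 462, 478, 494), which is exactly why MBR tops out at four primary/extended partitions.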
OS: Rocky Linux release 8.5
Tool: fdisk from util-linux 2.32.1
Disk: a 20 GB disk added to the virtual machine
[root@host ~]# lsblk -p // lsblk shows where the new disk appears (/dev/sdb)
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk
/dev/sr0 11:0 1 1024M 0 rom
[root@host ~]# fdisk /dev/sdb // partition the /dev/sdb disk with fdisk
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x178d8de5.
Command (m for help): n // n creates a new partition
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p // p creates a primary partition, e an extended partition
Partition number (1-4, default 1): 1 // partition number, default 1; any free number works (MBR allows at most 4 partitions)
First sector (2048-41943039, default 2048): // accept the default first sector
Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039): +5G // set the partition size; 5G here
Created a new partition 1 of type 'Linux' and of size 5 GiB. // created successfully
Command (m for help): p // p prints the partition table
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x178d8de5
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 10487807 10485760 5G 83 Linux // the newly created partition
Command (m for help): w // write the pending changes to disk
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
[root@host ~]# mkfs.xfs /dev/sdb1 // format the new partition
meta-data=/dev/sdb1 isize=512 agcount=4, agsize=327680 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@host ~]# mkdir /sdb1 // create a mount point
[root@host ~]# mount /dev/sdb1 /sdb1 // mount the formatted partition
[root@host ~]# lsblk -p // check the result
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk
└─/dev/sdb1 8:17 0 5G 0 part /sdb1 // mounted and ready for use
/dev/sr0 11:0 1 1024M 0 rom
[root@host ~]# fdisk /dev/sdb // partition /dev/sdb
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n // create a partition
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p): e // create an extended partition
Partition number (2-4, default 2): // partition number
First sector (10487808-41943039, default 10487808): // accept the default first sector
Last sector, +sectors or +size{K,M,G,T,P} (10487808-41943039, default 41943039): // set the size; an extended partition defaults to all remaining space, accepted here
Created a new partition 2 of type 'Extended' and of size 15 GiB.
Command (m for help): p // print the partition table
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x178d8de5
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 10487807 10485760 5G 83 Linux
/dev/sdb2 10487808 41943039 31455232 15G 5 Extended // a 15G extended partition has been created
Command (m for help): w // write the changes
The partition table has been altered.
Syncing disks.
[root@host ~]# lsblk -p
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk
├─/dev/sdb1 8:17 0 5G 0 part /sdb1
└─/dev/sdb2 8:18 0 15G 0 part
/dev/sr0 11:0 1 1024M 0 rom
With the extended partition in place, many logical partitions can be created inside it, escaping the four-partition limit of the MBR scheme.
[root@host ~]# fdisk /dev/sdb // partition /dev/sdb
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n // create a new partition
All space for primary partitions is in use.
Adding logical partition 5 // once an extended partition exists, logical partition numbers start from 5
First sector (10489856-41943039, default 10489856): // accept the default start
Last sector, +sectors or +size{K,M,G,T,P} (10489856-41943039, default 41943039): +1G // set the size
Created a new partition 5 of type 'Linux' and of size 1 GiB.
Command (m for help): n // create another
All space for primary partitions is in use.
Adding logical partition 6
First sector (12589056-41943039, default 12589056):
Last sector, +sectors or +size{K,M,G,T,P} (12589056-41943039, default 41943039): +1G
Created a new partition 6 of type 'Linux' and of size 1 GiB.
Command (m for help): n // and another
All space for primary partitions is in use.
Adding logical partition 7
First sector (14688256-41943039, default 14688256):
Last sector, +sectors or +size{K,M,G,T,P} (14688256-41943039, default 41943039): +1G
Created a new partition 7 of type 'Linux' and of size 1 GiB.
Command (m for help): w // write the changes
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
[root@host ~]# lsblk -p // list the partitions
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk
├─/dev/sdb1 8:17 0 5G 0 part /sdb1
├─/dev/sdb2 8:18 0 1K 0 part
├─/dev/sdb5 8:21 0 1G 0 part
├─/dev/sdb6 8:22 0 1G 0 part
└─/dev/sdb7 8:23 0 1G 0 part
/dev/sr0 11:0 1 1024M 0 rom
The sdb disk now holds five partitions (one primary, one extended, and three logical), with partition numbers running up to 7.
Take sdb1 as an example: it is mounted at /sdb1 and in use, so it must be unmounted before the partition can be deleted. The steps are as follows
[root@host ~]# lsblk -p // check the partition state
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk
├─/dev/sdb1 8:17 0 5G 0 part /sdb1 // /dev/sdb1 is mounted at /sdb1
├─/dev/sdb2 8:18 0 1K 0 part
├─/dev/sdb5 8:21 0 1G 0 part
├─/dev/sdb6 8:22 0 1G 0 part
└─/dev/sdb7 8:23 0 1G 0 part
/dev/sr0 11:0 1 1024M 0 rom
[root@host ~]# umount /sdb1 // unmount it
[root@host ~]# lsblk -p
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk
├─/dev/sdb1 8:17 0 5G 0 part // now unmounted
├─/dev/sdb2 8:18 0 1K 0 part
├─/dev/sdb5 8:21 0 1G 0 part
├─/dev/sdb6 8:22 0 1G 0 part
└─/dev/sdb7 8:23 0 1G 0 part
/dev/sr0 11:0 1 1024M 0 rom
[root@host ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): d // d (delete) removes a partition
Partition number (1,2,5-7, default 7): 1 // choose the partition to delete; 1 here
Partition 1 has been deleted.
Command (m for help): w // write the changes
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
[root@host ~]# lsblk -p // check the disk state
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk // sdb1 is gone
├─/dev/sdb2 8:18 0 1K 0 part
├─/dev/sdb5 8:21 0 1G 0 part
├─/dev/sdb6 8:22 0 1G 0 part
└─/dev/sdb7 8:23 0 1G 0 part
/dev/sr0 11:0 1 1024M 0 rom
A partition must be formatted before it can be mounted and used. Formatting is done with the mkfs family of tools, which are not covered in depth here; the available filesystem types can be seen by typing mkfs. and pressing Tab:
[root@host ~]# mkfs.
mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.minix mkfs.xfs
Here xfs is used to format the new partitions (only the logical partitions can be formatted; the extended partition itself cannot be formatted directly).
[root@host ~]# lsblk -p // check the partition state
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk
├─/dev/sdb2 8:18 0 1K 0 part
├─/dev/sdb5 8:21 0 1G 0 part
├─/dev/sdb6 8:22 0 1G 0 part
└─/dev/sdb7 8:23 0 1G 0 part
/dev/sr0 11:0 1 1024M 0 rom
[root@host ~]# mkfs.xfs /dev/sdb5 // format sdb5 as xfs
meta-data=/dev/sdb5 isize=512 agcount=4, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@host ~]# mkfs.xfs /dev/sdb6 // format sdb6 as xfs
meta-data=/dev/sdb6 isize=512 agcount=4, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@host ~]# mkfs.xfs /dev/sdb7 // format sdb7 as xfs
meta-data=/dev/sdb7 isize=512 agcount=4, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@host ~]# lsblk -p // check the partition state
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk
├─/dev/sdb2 8:18 0 1K 0 part
├─/dev/sdb5 8:21 0 1G 0 part
├─/dev/sdb6 8:22 0 1G 0 part
└─/dev/sdb7 8:23 0 1G 0 part
/dev/sr0 11:0 1 1024M 0 rom
After formatting, a partition can be mounted with mount; once mounted it is ready for use.
[root@host ~]# mkdir -p /disk/sdb{5..7} // create the mount points
[root@host ~]# mount /dev/sdb5 /disk/sdb5 // mount sdb5
[root@host ~]# mount /dev/sdb6 /disk/sdb6 // mount sdb6
[root@host ~]# mount /dev/sdb7 /disk/sdb7 // mount sdb7
[root@host ~]# lsblk -p // check the partition state
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 20G 0 disk
├─/dev/sda1 8:1 0 1G 0 part /boot
└─/dev/sda2 8:2 0 19G 0 part
├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP]
/dev/sdb 8:16 0 20G 0 disk
├─/dev/sdb2 8:18 0 1K 0 part
├─/dev/sdb5 8:21 0 1G 0 part /disk/sdb5
├─/dev/sdb6 8:22 0 1G 0 part /disk/sdb6
└─/dev/sdb7 8:23 0 1G 0 part /disk/sdb7
/dev/sr0 11:0 1 1024M 0 rom
Containerd is an industry-standard container runtime that emphasizes simplicity, robustness, and portability. It runs as a daemon on Linux and Windows and can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage, network attachment, and more.
Containerd started out as part of Docker, but Docker Inc. split it off and donated it to an open-source foundation (the CNCF) to develop and operate independently. Alibaba Cloud, AWS, Google, IBM, and Microsoft all take part in its development.
Kubernetes published the CRI (Container Runtime Interface) back in version 1.5, but Docker does not conform to that standard. Docker dominated the market at the time, so dropping it outright was impossible; instead Kubernetes maintained a dedicated adapter (dockershim) just for Docker.
Docker offers many features, but Kubernetes actually uses only a small subset — and the unused parts can themselves introduce security risks.
In version 1.20, Kubernetes announced that it would deprecate Docker and no longer support it as the default container runtime.
In version 1.24 Docker support was formally removed (dockershim was deleted). From 1.24 onward, using Docker as the underlying container manager requires installing a separate shim (cri-dockerd).
Containerd supports the CRI standard, so the container runtime naturally switched over to Containerd.
It can be installed straight from Docker's package repository.
[root@host ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
[root@host ~]# yum -y install containerd.io
......
[root@host ~]# rpm -qa containerd.io
containerd.io-1.6.15-3.1.el8.x86_64
Set containerd to start on boot and start it now with:
systemctl enable --now containerd
Containerd is distributed as two kinds of release archives, which differ as follows
This article installs from the second kind (the cri-containerd-cni bundle, which also includes runc and the CNI plugins).
Download: GitHub
The version used here is cri-containerd-cni-1.6.15-linux-amd64.tar.gz
Download it and upload it to the server
[root@host ~]# mkdir containerd
[root@host ~]# mv cri-containerd-cni-1.6.15-linux-amd64.tar.gz containerd/
[root@host ~]# cd containerd
[root@host containerd]# tar xvf cri-containerd-cni-1.6.15-linux-amd64.tar.gz
[root@host containerd]# ls
cri-containerd-cni-1.6.15-linux-amd64.tar.gz etc opt usr
[root@host containerd]# cp ./etc/systemd/system/containerd.service /etc/systemd/system/
[root@host containerd]# cp usr/local/sbin/runc /usr/sbin/
[root@host containerd]# cp usr/local/bin/ctr /usr/bin/
[root@host containerd]# cp ./usr/local/bin/containerd /usr/local/bin/
[root@host containerd]# mkdir /etc/containerd
[root@host containerd]# containerd config default > /etc/containerd/config.toml
[root@host containerd]# cat /etc/containerd/config.toml |grep sandbox
sandbox_image = "registry.k8s.io/pause:3.6"
This parameter points to an image registry address that is blocked in mainland China. Replace it with the command below; the substitute address is a copy I maintain on Docker Hub.
[root@host containerd]# sed -i 's/registry.k8s.io\/pause:3.6/docker.io\/boychai\/pause:3.6/g' /etc/containerd/config.toml
[root@test containerd]# cat /etc/containerd/config.toml |grep sandbox_image
sandbox_image = "docker.io/boychai/pause:3.6"
[root@host containerd]# systemctl enable --now containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
[root@host containerd]# ctr version
Client:
Version: v1.6.15
Revision: 5b842e528e99d4d4c1686467debf2bd4b88ecd86
Go version: go1.18.9
Server:
Version: v1.6.15
Revision: 5b842e528e99d4d4c1686467debf2bd4b88ecd86
UUID: ebf1fe8b-37f7-4d94-8277-788e9f2c2a17
[root@test containerd]# runc -v
runc version 1.1.4
commit: v1.1.4-0-g5fd4c4d1
spec: 1.0.2-dev
go: go1.18.9
libseccomp: 2.5.1
[root@host ~]# ctr images -h
NAME:
ctr images - manage images
USAGE:
ctr images command [command options] [arguments...]
COMMANDS:
check check existing images to ensure all content is available locally
export export images
import import images
list, ls list images known to containerd
mount mount an image to a target path
unmount unmount the image from the target
pull pull an image from a remote
push push an image to a remote
delete, del, remove, rm remove one or more images by reference
tag tag an image
label set and clear labels for an image
convert convert an image
OPTIONS:
--help, -h show help
Command | Description |
---|---|
check | check images |
export | export images |
import | import images |
list, ls | list images |
mount | mount an image |
unmount | unmount an image |
pull | pull an image |
push | push an image |
delete, del, remove, rm | delete images |
tag | tag an image |
label | set labels |
convert | convert images |
The images subcommand can be abbreviated; for example, "ctr i -h" lists its help.
containerd supports OCI-standard images, so images from Docker Hub or images built from a Dockerfile can be used.
ctr i pull <image name>
[root@host ~]# ctr images pull docker.io/library/nginx:alpine
docker.io/library/nginx:alpine: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6: exists |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:c1b9fe3c0c015486cf1e4a0ecabe78d05864475e279638e9713eb55f013f907f: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c7a81ce22aacea2d1c67cfd6d3c335e4e14256b4ffb80bc052c3977193ba59ba: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:83e90619bc2e4993eafde3a1f5caf5172010f30ba87bbc5af3d06ed5ed93a9e9: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:d52adec6f48bc3fe2c544a2003a277d91d194b4589bb88d47f4cfa72eb16015d: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:10eb2ce358fad29dd5edb0d9faa50ff455c915138fdba94ffe9dd88dbe855fbe: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:a1be370d6a525bc0ae6cf9840a642705ae1b163baad16647fd44543102c08581: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:689b9959905b6f507f527ce377d7c742a553d2cda8d3529c3915fb4a95ad45bf: exists |++++++++++++++++++++++++++++++++++++++|
elapsed: 11.2s total: 15.7 M (1.4 MiB/s)
unpacking linux/amd64 sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6...
done: 709.697156ms
ctr images <ls|list>
[root@host ~]# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -
View an image's filesystem:
ctr images mount <image name> <local directory>
[root@host ~]# mkdir /mnt/nginx-alpine
[root@host ~]# ctr images mount docker.io/library/nginx:alpine /mnt/nginx-alpine/
sha256:a71c46316a83c0ac8c2122376a89b305936df99fa354c265f5ad2c1825e94167
/mnt/nginx-alpine/
[root@host ~]# cd /mnt/nginx-alpine/
[root@host nginx-alpine]# ls
bin dev docker-entrypoint.d docker-entrypoint.sh etc home lib media mnt opt proc root run sbin srv sys tmp usr var
Unmount an image filesystem previously mounted locally:
ctr images unmount <local directory>
[root@host ~]# ctr images unmount /mnt/nginx-alpine/
/mnt/nginx-alpine/
[root@host ~]# ls /mnt/nginx-alpine/
ctr images export --platform <platform> <output file> <image name>
[root@host ~]# ctr images export --platform linux/amd64 nginx.tar docker.io/library/nginx:alpine
[root@host ~]# ls
anaconda-ks.cfg containerd nginx.tar
ctr images delete|del|remove|rm <image name>
[root@host ~]# ctr images del docker.io/library/nginx:alpine
docker.io/library/nginx:alpine
[root@host ~]# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
ctr images import <image file>
[root@host ~]# ctr images import nginx.tar
unpacking docker.io/library/nginx:alpine (sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6)...done
[root@host ~]# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -
Rename (re-tag) an image:
ctr images tag <source image> <new image name>
[root@host ~]# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -
[root@host ~]# ctr images tag docker.io/library/nginx:alpine nginx:alpine
nginx:alpine
[root@host ~]# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -
nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -
By default, containers in a Kubernetes cluster have no limits on compute resources, so a single container can consume so much that it disrupts the others. LimitRange can define default CPU and memory requests for containers, or cap their maximum limits.
Dimensions a LimitRange can restrict:
LimitRange is implemented by an admission-control plugin that is enabled by default. Check whether it is enabled as follows:
[root@master ~]# kubectl -n kube-system get pod|grep apiserver
kube-apiserver-master.host.com 1/1 Running 28 (95m ago) 62d
[root@master ~]# kubectl -n kube-system exec kube-apiserver-master.host.com -- kube-apiserver -h|grep enable-admission-plugins
--admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
--enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
Searching "--enable-admission-plugins" for "LimitRanger" shows that it is already enabled.
apiVersion: v1
kind: LimitRange
metadata:
name: cpu-memory-max-min
namespace: test
spec:
limits:
- max:
cpu: 1
memory: 1Gi
min:
cpu: 100m
memory: 100Mi
type: Container
"max" is the largest limit a container may set; "min" is the smallest request a container may set.
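Conceptually, the LimitRanger admission plugin compares each container's resource values against these bounds at creation time. A rough sketch of the CPU check in shell, using millicore integers (1 CPU = 1000m); this mimics the plugin's logic for illustration and is not the plugin's actual code:

```shell
# Return success (admit) only if a container's cpu request and limit fall
# inside the LimitRange min/max, all given in millicores.
check_cpu() {
  local request_m=$1 limit_m=$2 min_m=$3 max_m=$4
  [ "$request_m" -ge "$min_m" ] && [ "$limit_m" -le "$max_m" ]
}

# min=100m, max=1000m (1 CPU), matching the manifest above
check_cpu 200 800 100 1000 && echo admitted
check_cpu 200 1500 100 1000 || echo "rejected: limit above max"
```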
apiVersion: v1
kind: LimitRange
metadata:
name: cpu-memory-max-min
namespace: test
spec:
limits:
- max:
cpu: 1
memory: 1Gi
min:
cpu: 100m
memory: 100Mi
default:
cpu: 500m
memory: 500Mi
defaultRequest:
cpu: 100m
memory: 100Mi
type: Container
"default" sets the default limit; "defaultRequest" sets the default request.
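When a container omits its resources block, LimitRanger fills in these values. A hedged sketch of that defaulting rule in shell (illustrative only, not the plugin's real implementation):

```shell
# If the container specifies no cpu limit/request, substitute the
# LimitRange defaults (500m / 100m, as in the manifest above);
# otherwise keep the container's own values.
apply_defaults() {
  local limit=$1 request=$2   # empty string = unset
  echo "limit=${limit:-500m} request=${request:-100m}"
}

apply_defaults "" ""      # unset -> limit=500m request=100m
apply_defaults 800m 200m  # explicit values are kept
```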
apiVersion: v1
kind: LimitRange
metadata:
name: storage-max-min
namespace: test
spec:
limits:
- max:
storage: 10Gi
min:
storage: 1Gi
type: PersistentVolumeClaim
This bounds the storage a PVC may request.
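The same min/max check applies to a PVC's storage request. A sketch that parses whole-Gi sizes only (illustrative; real Kubernetes quantities also allow Mi, Ti, and other suffixes):

```shell
# Admit a PVC whose requested storage (whole Gi only) lies within the
# LimitRange bounds. Real Kubernetes quantity parsing is richer.
gi() { echo "${1%Gi}"; }   # strip the Gi suffix
check_storage() {
  local req min max
  req=$(gi "$1"); min=$(gi "$2"); max=$(gi "$3")
  [ "$req" -ge "$min" ] && [ "$req" -le "$max" ]
}

check_storage 5Gi 1Gi 10Gi && echo admitted
check_storage 20Gi 1Gi 10Gi || echo "rejected: above max"
```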
[root@master cks]# kubectl get limits -n test
NAME CREATED AT
cpu-memory-max-min 2022-12-17T11:01:11Z
storage-max-min 2022-12-17T10:59:56Z
[root@master cks]# kubectl describe limits -n test
Name: cpu-memory-max-min
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container memory 100Mi 1Gi 100Mi 500Mi -
Container cpu 100m 1 100m 500m -
Name: storage-max-min
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
PersistentVolumeClaim storage 1Gi 10Gi - - -
When multiple teams share one Kubernetes cluster, resource usage can become uneven: by default it is first come, first served. ResourceQuota solves this by capping the total resource usage of a namespace.
ResourceQuota is implemented by an admission-control plugin that is enabled by default. Check whether it is enabled as follows:
[root@master ~]# kubectl -n kube-system get pod|grep apiserver
kube-apiserver-master.host.com 1/1 Running 27 (17h ago) 61d
[root@master ~]# kubectl -n kube-system exec kube-apiserver-master.host.com -- kube-apiserver -h|grep enable-admission-plugins
--admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
--enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
Searching "--enable-admission-plugins" for "ResourceQuota" shows that it is already enabled.
Supported resource | Description |
---|---|
limits.cpu/memory | total resource limits of all Pods must not exceed this value (all non-terminated Pods) |
requests.cpu/memory | total resource requests of all Pods must not exceed this value (all non-terminated Pods) |
cpu/memory | equivalent to requests.cpu/requests.memory |
requests.storage | total capacity requested by all PVCs must not exceed this value |
persistentvolumeclaims | total number of PVCs must not exceed this value |
\<storage-class-name\>.storageclass.storage.k8s.io/requests.storage | total capacity requested by PVCs of \<storage-class-name\> must not exceed this value |
\<storage-class-name\>.storageclass.storage.k8s.io/persistentvolumeclaims | total number of PVCs of \<storage-class-name\> must not exceed this value |
pods, count/deployments.apps, count/statefulsets.apps, count/services (services.loadbalancers, services.nodeports), count/secrets, count/configmaps, count/jobs.batch, count/cronjobs.batch | number of created objects must not exceed this value |
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
namespace: test
spec:
hard:
requests.cpu: "1"
requests.memory: 10Gi
limits.cpu: "4"
limits.memory: 20Gi
apiVersion: v1
kind: ResourceQuota
metadata:
name: storage-resources
namespace: test
spec:
hard:
requests.storage: 10Gi
managed-nfs-storage.storageclass.storage.k8s.io/requests.storage: 10Gi
"managed-nfs-storage" is the name of the dynamic StorageClass.
apiVersion: v1
kind: ResourceQuota
metadata:
name: object-counts
namespace: test
spec:
hard:
pods: "10"
count/deployments.apps: "3"
count/services: "3"
These limit object counts: the namespace's total number of each object must not exceed the value.
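Object-count quotas can be pictured as: before admitting a new object, compare the current count plus one against the hard limit. A conceptual sketch (not the quota controller's actual code):

```shell
# Admit creation of one more object only if it keeps the namespace
# at or under the ResourceQuota hard limit.
check_count() {
  local current=$1 hard=$2
  [ $((current + 1)) -le "$hard" ]
}

# pods: "10" as in the manifest above
check_count 9 10 && echo "pod admitted (10/10 after creation)"
check_count 10 10 || echo "rejected: quota exceeded"
```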
[root@master ~]# kubectl get quota -n test
NAME AGE REQUEST LIMIT
compute-resources 41m requests.cpu: 0/4, requests.memory: 0/10Gi limits.cpu: 0/6, limits.memory: 0/12Gi
object-counts 4m6s count/deployments.apps: 0/3, count/services: 0/3, pods: 0/10
storage-resources 6m16s managed-nfs-storage.storageclass.storage.k8s.io/requests.storage: 0/10Gi, requests.storage: 0/10Gi
The command above shows how much of each quota has been consumed.