BoyChai's Blog - 工具 https://blog.boychai.xyz/index.php/category/%E5%B7%A5%E5%85%B7/ [Nginx]ngx_lua模块 https://blog.boychai.xyz/index.php/archives/71/ 2024-04-25T00:51:00+00:00 概述淘宝开发的ngx_lua模块通过将lua解释器集成进Nginx,可以采用lua脚本实现业务逻辑,由于lua的紧凑、快速以及内建协程,所以在保证高并发服务能力的同时极大降低了业务逻辑实现成本。安装方式1(已弃用)lua-nginx-moduleLuaJIT是采用C语言编写的Lua即时编译器。官网: http://luajit.org在官网找到对应下载地址: https://github.com/LuaJIT/LuaJIT/tags[root@work env]# wget https://github.com/LuaJIT/LuaJIT/archive/refs/tags/v2.0.5.tar.gz [root@work env]# tar xvf v2.0.5.tar.gz [root@work env]# cd LuaJIT-2.0.5/ [root@work LuaJIT-2.0.5]# make && make install make[1]: Leaving directory '/opt/env/LuaJIT-2.0.5/src' ==== Successfully built LuaJIT 2.0.5 ==== ==== Installing LuaJIT 2.0.5 to /usr/local ==== mkdir -p /usr/local/bin /usr/local/lib /usr/local/include/luajit-2.0 /usr/local/share/man/man1 /usr/local/lib/pkgconfig /usr/local/share/luajit-2.0.5/jit /usr/local/share/lua/5.1 /usr/local/lib/lua/5.1 cd src && install -m 0755 luajit /usr/local/bin/luajit-2.0.5 cd src && test -f libluajit.a && install -m 0644 libluajit.a /usr/local/lib/libluajit-5.1.a || : rm -f /usr/local/bin/luajit /usr/local/lib/libluajit-5.1.so.2.0.5 /usr/local/lib/libluajit-5.1.so /usr/local/lib/libluajit-5.1.so.2 cd src && test -f libluajit.so && \ install -m 0755 libluajit.so /usr/local/lib/libluajit-5.1.so.2.0.5 && \ ldconfig -n /usr/local/lib && \ ln -sf libluajit-5.1.so.2.0.5 /usr/local/lib/libluajit-5.1.so && \ ln -sf libluajit-5.1.so.2.0.5 /usr/local/lib/libluajit-5.1.so.2 || : cd etc && install -m 0644 luajit.1 /usr/local/share/man/man1 cd etc && sed -e "s|^prefix=.*|prefix=/usr/local|" -e "s|^multilib=.*|multilib=lib|" luajit.pc > luajit.pc.tmp && \ install -m 0644 luajit.pc.tmp /usr/local/lib/pkgconfig/luajit.pc && \ rm -f luajit.pc.tmp cd src && install -m 0644 lua.h lualib.h lauxlib.h luaconf.h lua.hpp luajit.h /usr/local/include/luajit-2.0 cd src/jit && install -m 0644 bc.lua v.lua dump.lua dis_x86.lua dis_x64.lua dis_arm.lua dis_ppc.lua
dis_mips.lua dis_mipsel.lua bcsave.lua vmdef.lua /usr/local/share/luajit-2.0.5/jit ln -sf luajit-2.0.5 /usr/local/bin/luajit ==== Successfully installed LuaJIT 2.0.5 to /usr/local ====lua-nginx-modulenginx第三方模块lua-nginx-module官网: https://github.com/openresty/lua-nginx-module[root@work env]# wget https://github.com/openresty/lua-nginx-module/archive/refs/tags/v0.10.26.tar.gz [root@work env]# tar xvf v0.10.26.tar.gz [root@work env]# ln -s lua-nginx-module-0.10.26 lua-nginx-module环境变量设置[root@work ~]# tail -n2 /etc/profile export LUAJIT_LIB=/usr/local/lib export LUAJIT_INC=/usr/local/include/luajit-2.0 [root@work ~]# source /etc/profile扩展nginx模块打开nginx编译安装的位置 进行重新编译安装[root@work nginx-1.24.0]# ./configure --prefix=/usr/local/nginx --sbin-path=/usr/local/nginx/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --user=nginx --group=nginx --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client/ --http-proxy-temp-path=/var/tmp/nginx/proxy/ --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --http-scgi-temp-path=/var/tmp/nginx/scgi --with-pcre --add-module=/opt/package/nginx/lua-nginx-module [root@work nginx-1.24.0]# make && make install扩展的重点是--with-pcre --add-module=/opt/package/nginx/lua-nginx-module这里就相当于重新安装了,之前安装的模块还需要在这里再添加一遍错误libluajit-5.1.so.2当扩展好nginx模块后执行nginx相关命令出现以下错误[root@work ~]# nginx -V nginx: error while loading shared libraries: libluajit-5.1.so.2: cannot open shared object file: No such file or directory这个错误表明 Nginx 在启动时无法找到名为 libluajit-5.1.so.2 的共享库文件。该库实际安装在 /usr/local/lib 下,只是这个目录不在系统的动态链接库搜索路径中。解决办法是为它建一个软链接:[root@work ~]# ln -s /usr/local/lib/libluajit-5.1.so.2 /lib64/libluajit-5.1.so.2 reason: module 'resty.core' not found[root@work conf]# nginx nginx: [alert] detected a LuaJIT version which is
not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html) nginx: [alert] failed to load the 'resty.core' module (https://github.com/openresty/lua-resty-core); ensure you are using an OpenResty release from https://openresty.org/en/download.html (reason: module 'resty.core' not found: no field package.preload['resty.core'] no file './resty/core.lua' no file '/usr/local/share/luajit-2.0.5/resty/core.lua' no file '/usr/local/share/lua/5.1/resty/core.lua' no file '/usr/local/share/lua/5.1/resty/core/init.lua' no file './resty/core.so' no file '/usr/local/lib/lua/5.1/resty/core.so' no file '/usr/local/lib/lua/5.1/loadall.so' no file './resty.so' no file '/usr/local/lib/lua/5.1/resty.so' no file '/usr/local/lib/lua/5.1/loadall.so') in /usr/local/nginx/conf/nginx.conf:117原因是缺少lua-resty-core模块,这里手动编译安装一下。项目地址: https://github.com/openresty/lua-resty-core[root@work nginx]# tar xvf v0.1.28.tar.gz [root@work nginx]# cd lua-resty-core-0.1.28 [root@work lua-resty-core-0.1.28]# make install安装方式2概述直接使用OpenResty。OpenResty是由淘宝工程师开发的、基于Nginx与Lua的高性能Web平台,其内部集成了大量精良的Lua库、第三方模块以及大多数的依赖项,用于方便搭建能够处理高并发、扩展性极高的动态Web应用、Web服务和动态网关。OpenResty内部已经集成了Nginx和Lua,我们用起来会更加方便。安装参考: https://openresty.org/cn/linux-packages.html配置:/usr/local/openresty/nginx/confPS:本文只讲ngx_lua的使用,其他的基本和nginx配置无区别。ngx_lua相关指令块使用Lua编写Nginx脚本的基本构建块是指令。指令用于指定何时运行用户Lua代码以及如何使用结果。下图显示了执行指令的顺序。先来解释一下*的作用 *:无,即 xxx_by_lua,指令后面跟的是 lua代码 *:_file,即 xxx_by_lua_file,指令后面跟的是 lua文件 *:_block,即 xxx_by_lua_block
,_block 形式在0.9.17版后陆续加入(如 init_by_lua_block),用于避免内联代码的转义问题。
init_by_lua*:该指令在每次Nginx重新加载配置时执行,可以用来完成一些耗时模块的加载,或者初始化一些全局配置。
init_worker_by_lua*:该指令用于启动一些定时任务,如心跳检查、定时拉取服务器配置等。
set_by_lua*:该指令主要用来做变量赋值,这个指令一次只能返回一个值,并将结果赋值给Nginx中指定的变量。
rewrite_by_lua*:该指令用于执行内部URL重写或者外部重定向,典型的如伪静态化URL重写,本阶段默认在rewrite处理阶段的最后执行。
access_by_lua*:该指令用于访问控制,例如只允许内网IP访问。
content_by_lua*:该指令是应用最多的指令,大部分任务是在这个阶段完成的,其他的过程往往为这个阶段准备数据,正式处理基本都在本阶段。
header_filter_by_lua*:该指令用于设置应答消息的头部信息。
body_filter_by_lua*:该指令是对响应数据进行过滤,如截断、替换。
log_by_lua*:该指令用于在log请求处理阶段用Lua代码处理日志,但并不替换原有log处理。
balancer_by_lua*:该指令主要用来实现上游服务器的负载均衡算法。
ssl_certificate_by_lua*:该指令在Nginx和下游服务开始SSL握手操作时运行本配置项的Lua代码。
案例1需求输出内容配置
location /lua {
    default_type 'text/html';
    content_by_lua 'ngx.say("<h1>HELLO,OpenResty</h1>")';
}
案例2需求http://xxx/?name=张三&gender=1 Nginx接收到请求后根据gender传入的值,如果gender传入的是1,则展示张三先生,如果是0则展示张三女士,如果都不是则展示张三。配置
location /getByGender {
    default_type 'text/html';
    set_by_lua $param "
        -- 获取请求URL上的参数对应的值
        local uri_args = ngx.req.get_uri_args()
        local name = uri_args['name']
        local gender = uri_args['gender']
        -- 条件判断: gender为1则展示先生,为0则展示女士
        if gender == '1' then
            return name..'先生'
        elseif gender == '0' then
            return name..'女士'
        else
            return name
        end
    ";
    # 解决中文乱码
    charset utf-8;
    # 返回数据
    return 200 $param;
}
ngx.req.get_uri_args()返回的是一个table类型。案例3需求动态获取docker容器ip,做代理配置
server {
    listen 80;
    server_name code.boychai.xyz;
    client_max_body_size 4096M;
    set_by_lua $param '
        local name = "gitea"
        local port = "3000"
        local command = string.format("echo -n `docker inspect --format=\'{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}\' %s`", name)
        local handle = io.popen(command)
        local result = handle:read("*a")
        handle:close()
        return "http://"..result..":"..port
    ';
    location / {
        if ( $param = 'http://:3000' ) {
            return 500 "Error in obtaining site IP";
        }
        proxy_pass $param;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
[Tekton] 报错: more than one PersistentVolumeClaim is bound
https://blog.boychai.xyz/index.php/archives/70/ 2024-04-24T11:45:00+00:00 复现task-nodejs.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-node-project
spec:
  workspaces:
    - name: cache
      mountPath: /root/.npm
    - name: source
    - name: output
  params:
    - name: imgTag
      type: string
    - name: run
      type: string
    - name: dir
      type: string
  steps:
    - name: build
      workingDir: "$(workspaces.source.path)/$(params.dir)"
      image: "node:$(params.imgTag)"
      script: |
        rm -rf package-lock.json
        npm install --registry=https://registry.npmmirror.com/
        npm run $(params.run)
        cp -r dist/* $(workspaces.output.path)/
taskrun.yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: build-node-project-run-
  generation: 1
  namespace: cicd-services
spec:
  params:
    - name: dir
      value: frontend
    - name: imgTag
      value: 21.6.2
    - name: run
      value: build
  serviceAccountName: default
  taskRef:
    kind: Task
    name: build-node-project
  workspaces:
    - name: cache
      persistentVolumeClaim:
        claimName: node-cache-pvc
    - name: source
      persistentVolumeClaim:
        claimName: test-tekton-vue-pvc
    - name: output
      persistentVolumeClaim:
        claimName: test-tekton-vue-output-pvc
运行之后会出现下面报错TaskRunValidationFailed [User error] more than one PersistentVolumeClaim is bound原因报错翻译:TaskRunValidationFailed[用户错误]绑定了多个PersistentVolumeClaim,很明确,它不允许一个TaskRun绑定多个pvc,这个蛮离谱的,cicd的过程中用到多个存储应该是很正常的事,tekton却默认不支持绑定多个pvc。解决修改tekton的配置,把参数disable-affinity-assistant修改为true即可kubectl -n tekton-pipelines edit cm feature-flags这个参数的作用如下:Affinity Assistant 用来保证共享同一 workspace 的 pod 调度到同一个节点上,避免跨节点访问 pvc 的问题,但它同时限制了一个 TaskRun 只能绑定一个 PVC;设置为 true 后 Tekton 不再为共享了 workspace 的 TaskRun 创建 Affinity Assistant Pod,也就解除了这个限制。还有就是这个行为在v0.60会被弃用,未来估计不会因为这个问题报这个错了。参考ISSUE: https://github.com/tektoncd/pipeline/issues/6543TektonDocs: https://github.com/tektoncd/pipeline/blob/main/docs/affinityassistants.md配置参考: https://www.soulchild.cn/post/tekton-operator%E9%85%8D%E7%BD%AE%E5%8F%82%E6%95%B0%E8%AF%A6%E8%A7%A3/ [资源]容器镜像代理站、加速站收集 https://blog.boychai.xyz/index.php/archives/63/ 2023-08-30T07:25:40+00:00 数据库通用语言-SQL
https://blog.boychai.xyz/index.php/archives/61/ 2023-08-10T05:36:00+00:00 SQL分类
类别 | 全称 | 概述
DDL | Data Definition Language | 数据定义语言,用来定义数据库对象(数据库,表,字段)
DML | Data Manipulation Language | 数据操作语言,用来对数据库表中的数据进行增删改查
DQL | Data Query Language | 数据查询语言,用来查询数据库中表的记录
DCL | Data Control Language | 数据控制语言,用来创建数据库用户,控制数据库的访问权限
DDL数据库操作查询查询所有数据库SHOW DATABASES;查询当前数据库SELECT DATABASE();创建CREATE DATABASE [ IF NOT EXISTS ] 数据库名称 [ DEFAULT CHARSET 字符集 ] [ COLLATE 排序规则 ];使用时可以在数据库名称前面加入"IF NOT EXISTS",意为当数据库不存在则创建否则不作为。删除DROP DATABASE [ IF EXISTS ] 数据库名称;使用时可以在数据库名称前面加入"IF EXISTS",意为当数据库存在则删除否则不作为。使用USE 数据库名称;表操作ps:表操作需要使用数据库之后才能操作查询查询当前数据库中所有的表SHOW TABLES;查询表字段结构DESC 表名称;查询指定表的建表语句SHOW CREATE TABLE 表名;创建
CREATE TABLE 表名(
  字段1 字段1数据类型 [ COMMENT 字段1注释 ],
  字段2 字段2数据类型 [ COMMENT 字段2注释 ],
  字段3 字段3数据类型 [ COMMENT 字段3注释 ]
  ......
) [ COMMENT 表格注释 ];
修改修改表名称ALTER TABLE 表名 RENAME TO 新表名;删除删除表DROP TABLE [ IF EXISTS ] 表名;"IF EXISTS" 意为有则删除否则不作为删除指定表,并且重新创建该表(一般用于格式化)TRUNCATE TABLE 表名;字段操作修改添加字段ALTER TABLE 表名 ADD 字段名 类型(长度) [ COMMENT 注释 ] [约束];修改数据类型ALTER TABLE 表名 MODIFY 字段名 新数据类型(长度);修改字段名和字段类型ALTER TABLE 表名 CHANGE 旧字段名 新字段名 类型(长度) [ COMMENT 注释 ] [约束];删除ALTER TABLE 表名 DROP 字段名数据类型概述MYSQL的数据类型主要分为三种:数值类型、字符串类型、日期时间类型。数值类型
类型 | 大小 | 有符号范围(SIGNED) | 无符号范围(UNSIGNED) | 概述
TINYINT | 1 byte | (-128,127) | (0,255) | 小整数值
SMALLINT | 2 byte | (-32768,32767) | (0,65535) | 大整数值
MEDIUMINT | 3 byte | (-8388608,8388607) | (0,16777215) | 大整数值
INT或INTEGER | 4 byte | (-2147483648,2147483647) | (0,4294967295) | 大整数值
BIGINT | 8 byte | (-2^63,2^63-1) | (0,2^64-1) | 极大整数值
FLOAT | 4 byte | (-3.402823466 E+38,3.402823466 E+38) | 0 和 (1.175494351 E-38,3.402823466 E+38) | 单精度浮点数值
DOUBLE | 8 byte | (-1.7976931348623157 E+308,1.7976931348623157 E+308) | 0 和 (2.2250738585072014 E-308,1.7976931348623157 E+308) | 双精度浮点数值
DECIMAL | 依赖于M(精度)和D(标度)的值 | 依赖于M(精度)和D(标度)的值 | 依赖于M(精度)和D(标度)的值 | 小数值(精确定点数)
字符串类型
类型 | 大小 | 概述
CHAR | 0-255 bytes | 定长字符串(需要指定长度)
VARCHAR | 0-65535 bytes | 变长字符串(需要指定长度)
TINYBLOB | 0-255 bytes | 不超过255个字符的二进制数据
TINYTEXT | 0-255 bytes | 短文本字符串
BLOB | 0-65 535 bytes | 二进制形式的长文本数据
TEXT | 0-65 535 bytes | 长文本数据
MEDIUMBLOB | 0-16 777 215 bytes | 二进制形式的中等长度文本数据
MEDIUMTEXT | 0-16 777 215 bytes | 中等长度文本数据
LONGBLOB | 0-4 294 967 295 bytes | 二进制形式的极大文本数据
LONGTEXT | 0-4 294 967 295 bytes | 极大文本数据
日期类型
类型 | 大小 | 范围 | 格式 | 概述
DATE | 3 | 1000-01-01 至 9999-12-31 | YYYY-MM-DD | 日期值
TIME | 3 | -838:59:59 至 838:59:59 | HH:MM:SS | 时间值或持续时间
YEAR | 1 | 1901 至 2155 | YYYY | 年份值
DATETIME | 8 | 1000-01-01 00:00:00 至 9999-12-31 23:59:59 | YYYY-MM-DD HH:MM:SS | 混合日期和时间值
TIMESTAMP | 4 | 1970-01-01 00:00:01 至 2038-01-19 03:14:07 | YYYY-MM-DD HH:MM:SS | 混合日期和时间值,时间戳
DML添加数据给指定字段添加数据INSERT INTO 表名(字段1,字段2,......) VALUES (值1,值2,......);给全部字段添加数据INSERT INTO 表名 VALUES (值1,值2,......);批量添加数据INSERT INTO 表名(字段1,字段2,......) VALUES (值1,值2,......),(值1,值2,......),(值1,值2,......);INSERT INTO 表名 VALUES (值1,值2,......),(值1,值2,......),(值1,值2,......);修改数据UPDATE 表名 SET 字段1=值1,字段2=值2,...... [ WHERE 条件 ]删除数据DELETE FROM 表名 [ WHERE 条件 ]注意修改删除数据的时候,如果不加where判断条件,则调整的是整张表的内容。DQL语法
SELECT
  字段列表
FROM
  表名列表
WHERE
  条件列表
GROUP BY
  分组字段列表
HAVING
  分组后条件列表
ORDER BY
  排序字段列表
LIMIT
  分页参数
基本查询查询多个字段SELECT 字段1,字段2,.... FROM 表名;SELECT * FROM 表名;设置别名SELECT 字段1 [ AS '别名' ],字段2 [ AS '别名2' ] FROM 表名;输出的时候字段名称会替换成别名,这段SQL中的AS可以不写,例如SELECT 字段 '别名' FROM 表名;去除重复记录SELECT DISTINCT 字段列表 FROM 表名;条件查询语法SELECT 字段列表 FROM 表名 WHERE 条件列表;条件比较运算符
运算符 | 功能
> | 大于
>= | 大于等于
< | 小于
<= | 小于等于
= | 等于
<> 或 != | 不等于
BETWEEN ... AND ... | 在某个范围之内(含最小,最大值)
IN(...) | 在in之后的列表中的值,多选一
LIKE 占位符 | 模糊匹配(_匹配单个字符,%匹配任意多个字符)
IS NULL | 是NULL
逻辑运算符
运算符 | 功能
AND 或 && | 并且(多个条件同时成立)
OR 或 || | 或者(多个条件任意一个成立)
NOT 或 ! | 非,不是
聚合函数语法SELECT 聚合函数(字段) FROM 表名;函数
函数 | 功能
count | 统计数量
max | 最大值
min | 最小值
avg | 平均值
sum | 求和
分组查询语法SELECT 字段列表 FROM 表名 [ WHERE 条件 ] GROUP BY 分组字段名 [ HAVING 分组后过滤条件 ]排序查询语法SELECT 字段列表 FROM 表名 ORDER BY 字段1 排序方式1, 字段2 排序方式2排序方式ASC:升序(默认)DESC:降序分页查询语法SELECT 字段列表 FROM 表名 LIMIT 起始索引,查询记录数;执行顺序DQL的执行顺序为FROM 表 > WHERE 条件查询 > GROUP BY 分组查询 > SELECT 字段查询 > ORDER BY 排序查询 > LIMIT 分页查询DCL用户管理查询用户USE mysql; SELECT * FROM user;创建用户CREATE USER `用户名`@`主机名` IDENTIFIED BY `密码`;修改用户密码ALTER USER `用户名`@`主机名` IDENTIFIED WITH mysql_native_password BY `新密码`;删除用户DROP USER `用户名`@`主机名`;权限管理权限常用权限如下表
权限 | 说明
ALL,ALL PRIVILEGES | 所有权限
SELECT | 数据查询
INSERT | 插入数据
UPDATE | 更新数据
DELETE | 删除数据
ALTER | 修改表
DROP | 删除数据库、表、视图
CREATE | 创建数据库、表
这是常用的,其他的可以去官网查看。管理查询权限SHOW GRANTS FOR `用户名`@`主机名`;授予权限GRANT 权限列表 ON 数据库名.表名 TO `用户名`@`主机名`;撤销权限REVOKE 权限列表 ON 数据库名.表名 FROM `用户名`@`主机名`; HELM-Kubernetes包管理工具(Chart包简单使用) https://blog.boychai.xyz/index.php/archives/57/ 2023-05-29T04:34:00+00:00 目录结构文档地址:https://helm.sh/zh/docs/topics/charts/通过下面命令创建一个chart,指定chart名:mychart[root@Kubernetes charts]# helm create mychart Creating mychart [root@Kubernetes charts]# tree mychart
mychart
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
3 directories, 10 files
目录介绍
charts | 存放子chart的目录,目录里存放这个chart依赖的所有子chart
Chart.yaml | 保存chart的基本信息,包括名字、描述信息及版本等,这个变量文件都可以被templates目录下文件所引用
templates | 模板文件目录,目录里面存放所有yaml模板文件,包含了所有部署应用的yaml文件
templates/deployment.yaml | 创建deployment对象的模板文件
templates/_helpers.tpl | 放置模板助手的文件,可以在整个chart中重复使用,是放一些templates目录下这些yaml都有可能会用的一些模板
templates/hpa.yaml templates/ingress.yaml
templates/NOTES.txt | 存放提示信息的文件,介绍chart帮助信息,helm install部署后展示给用户,如何使用chart等,是部署chart后给用户的提示信息
templates/serviceaccount.yaml templates/service.yaml
templates/tests | 用于测试的文件,部署完chart后用于测试,如web应用做一次连接,看看是否部署成功
templates/tests/test-connection.yaml | 测试连接用的模板文件
values.yaml | 用于渲染模板的文件(变量文件,定义变量的值),定义templates目录下的yaml文件可能引用到的变量,templates目录中模板文件用到的变量值都存在这里,这些变量定义都是为了让templates目录下yaml引用
案例1需求编写一个chart,不引用内置对象的变量值(用HELM3发布创建一个ConfigMap,创建到K8s集群中,发布其他应用也一样,我们由浅入深进行学习)创建实例编写Chart:[root@Kubernetes charts]# helm create myConfigMapChart Creating myConfigMapChart [root@Kubernetes charts]# cd myConfigMapChart/templates/ [root@Kubernetes templates]# rm -rf ./* [root@Kubernetes templates]# vim configmap.yaml # 文件内容如下 apiVersion: v1 kind: ConfigMap metadata: name: my-cm-chart data: myValue: "Hello,World"创建release实例:[root@Kubernetes charts]# ls mychart myConfigMapChart [root@Kubernetes charts]# helm install mycm ./myConfigMapChart/ NAME: mycm LAST DEPLOYED: Thu May 11 00:26:16 2023 NAMESPACE: default STATUS: deployed REVISION: 1 TEST SUITE: None查看release实例:[root@Kubernetes charts]# helm list NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mycm default 1 2023-05-11 00:26:16.392817959 +0800 CST deployed myConfigMapChart-0.1.0 1.16.0 查看release的详细信息:[root@Kubernetes charts]# helm get manifest mycm --- # Source: myConfigMapChart/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: my-cm-chart data: myValue: "Hello,World"删除release实例:[root@Kubernetes charts]# helm uninstall mycm release "mycm" uninstalled [root@Kubernetes charts]# helm list NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION案例2需求案例1只是简单地创建一个configmap,实际和直接apply没啥区别,案例2将引入变量进行创建chart,把configmap的名称改为变量的方式进行创建。创建实例修改Chart:[root@Kubernetes charts]# vim myConfigMapChart/templates/configmap.yaml [root@Kubernetes charts]# cat myConfigMapChart/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: # name: my-cm-chart name: {{ .Release.Name }}-configmap data: # myValue: "Hello,World" myValue: {{ .Values.MY_VALUE }} [root@Kubernetes charts]# > myConfigMapChart/values.yaml [root@Kubernetes charts]# vim myConfigMapChart/values.yaml [root@Kubernetes
charts]# cat !$ cat myConfigMapChart/values.yaml MY_VALUE: "Hello,World"创建实例:[root@Kubernetes charts]# helm install mycm2 ./myConfigMapChart/ NAME: mycm2 LAST DEPLOYED: Thu May 11 00:49:06 2023 NAMESPACE: default STATUS: deployed REVISION: 1 TEST SUITE: None查看实例:[root@Kubernetes charts]# helm list NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mycm2 default 1 2023-05-11 00:49:06.855993522 +0800 CST deployed myConfigMapChart-0.1.0 1.16.0 [root@Kubernetes charts]# kubectl get cm NAME DATA AGE kube-root-ca.crt 1 77d mycm2-configmap 1 87sYAML解释:apiVersion: v1 kind: ConfigMap metadata: # name: my-cm-chart name: {{ .Release.Name }}-configmap data: # myValue: "Hello,World" myValue: {{ .Values.MY_VALUE }}{{ .Release.Name }}-configmap 最前面的.从作用域最顶层命名空间开始,即:在顶层命名空间中开始寻找Release对象,再查找Name对象。这个就是通过内置对象获取内置对象的变量值(Release的名称)拼接成ConfigMap的名字。{{ .Values.MY_VALUE }} 这个是其他变量,会去values.yaml文件中寻找对应的变量。引用内置对象或其他变量的好处:如果metadata.name中设置的值就是一个固定值,这样的模板是无法在k8s中多次部署的,所以我们可以试着在每次安装chart时,都自动将metadata.name设置为release的名称,因为每次部署release的时候实例名称是不一样的,这样部署的时候里面的资源名也就可以作为一个区分,从而可以进行重复部署。测试渲染HELM提供了一个用来渲染模板的命令,该命令可以将模板内容渲染出来,但是不会进行任何安装的操作。可以用该命令来测试模板渲染的内容是否正确。语法如下:helm install [release实例名] chart目录 --debug --dry-run例:[root@Kubernetes charts]# helm install mycm3 ./myConfigMapChart/ --debug --dry-run install.go:200: [debug] Original chart version: "" install.go:217: [debug] CHART PATH: /root/charts/myConfigMapChart NAME: mycm3 LAST DEPLOYED: Thu May 11 01:03:15 2023 NAMESPACE: default STATUS: pending-install REVISION: 1 TEST SUITE: None USER-SUPPLIED VALUES: {} COMPUTED VALUES: MY_VALUE: Hello,World HOOKS: MANIFEST: --- # Source: myConfigMapChart/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: # name: my-cm-chart name: mycm3-configmap data: # myValue: "Hello,World" myValue: Hello,World [root@Kubernetes charts]# helm list NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION HELM-Kubernetes包管理工具(介绍与安装) https://blog.boychai.xyz/index.php/archives/56/ 2023-05-13T01:27:00+00:00
传统部署方式传统服务部署到K8s集群的流程:拉取代码-----》打包编译-----》构建镜像-----》准备相关的yaml文件-----》kubectl apply进行部署传统方式部署引发的问题:随着应用的增多,需要维护大量的yaml文件;不能根据一套yaml文件来创建多个环境,需要手动进行修改。PS: 一般环境都分为dev、预生产、生产环境,部署完了dev这套环境,后面再部署预生产和生产环境,还需要复制出两套,并手动修改才行。什么是HELMHelm是K8s的包管理工具,可以方便地发现、共享和构建K8s应用Helm是k8s的包管理器,相当于centos系统中的yum工具,可以将一个服务相关的所有资源信息整合到一个chart中,并且可以使用一套资源发布到多个环境中,可以将应用程序的所有资源和部署信息组合到单个部署包中。就像Linux下的yum/apt等包管理器,可以很方便的将之前打包好的yaml文件部署到k8s上。HELM的组件Chart:就是helm的一个整合后的chart包,包含一个应用所有的K8s声明模板,类似于yum的rpm包或者apt的deb包。helm将打包的应用程序部署到k8s并将它们构成chart。这些chart将所有预配置的应用程序资源以及所有版本都包含在一个易于管理的包中。HELM客户端:helm的客户端组件,负责和K8s apiserver通信Repository:用于发布和存储chart包的仓库,类似yum仓库和docker仓库。Release:用chart包部署的一个实例。通过chart在K8s中部署的应用都会产生一个唯一的Release,同一chart部署多次就会产生多个Release。HELM2和HELM3helm3移除了Tiller组件 helm2中helm客户端通过Tiller组件和K8s通信,helm3移除了Tiller组件,直接使用kubeconfig文件和K8s apiserver通信。删除release命令变更 helm delete release-name --purge --------》 helm uninstall release-name查看charts信息命令变更 helm inspect release-name -------》 helm show chart-name拉取charts包命令变更 helm fetch chart-name -------》 helm pull chart-namehelm3中必须指定release名称,如果需要生成一个随机名称,需要加选项--generate-name,helm2中如果不指定release名称,可以自动生成一个随机名称 helm install ./mychart --generate-name安装HELM相关连接:Github:https://github.com/helm/helm/releases官网:https://helm.sh/zh/docs/intro/install/下载好对应的版本上传到服务器进行解压缩[root@Kubernetes helm]# ls helm-v3.11.3-linux-amd64.tar.gz [root@Kubernetes helm]# tar xvf helm-v3.11.3-linux-amd64.tar.gz linux-amd64/ linux-amd64/LICENSE linux-amd64/README.md linux-amd64/helm之后配置一下系统环境,让命令可以在系统中随时调用[root@Kubernetes helm]# ls helm-v3.11.3-linux-amd64.tar.gz linux-amd64 [root@Kubernetes helm]# cd linux-amd64/ [root@Kubernetes linux-amd64]# pwd /opt/helm/linux-amd64 [root@Kubernetes linux-amd64]# vim /etc/profile #追加下面内容 export PATH=$PATH:/opt/helm/linux-amd64 [root@Kubernetes linux-amd64]# source /etc/profile检查一下是否安装成功[root@Kubernetes ~]# helm version version.BuildInfo{Version:"v3.11.3", GitCommit:"323249351482b3bbfc9f5004f65d400aa70f9ae7", GitTreeState:"clean",
GoVersion:"go1.20.3"} [分区工具]Fdisk(MBR分区工具) https://blog.boychai.xyz/index.php/archives/52/ 2023-03-30T12:22:00+00:00 概述fdisk是一种用于管理磁盘分区的工具,常用于Linux和其他Unix-like操作系统中。它可以用于创建、删除和修改磁盘分区,并支持多种文件系统类型,例如FAT、ext2、ext3等。fdisk还可以显示当前系统中所有磁盘的分区信息,包括磁盘标识符、分区类型、分区大小等。使用fdisk,用户可以轻松地管理磁盘空间,为不同的操作系统或应用程序分配不同的存储空间。除此之外,fdisk还支持MBR(Master Boot Record)分区方案,它是一种常见的磁盘分区方案,能够在BIOS引导下启动操作系统。MBRMBR(Master Boot Record)分区是指使用MBR分区方案的磁盘分区方式。MBR分区方案是一种常见的分区方案,能够在BIOS引导下启动操作系统。MBR分区方案将磁盘的前512个字节(即MBR)用于存储分区表和引导程序。其中分区表记录了磁盘分区的信息,包括分区类型、分区起始位置、分区大小等。MBR分区方案最多支持4个主分区或3个主分区和1个扩展分区,扩展分区可以划分为多个逻辑分区。MBR分区方案已经存在了很长时间,但是它有一个缺点,即它只支持最大2TB的磁盘容量。如果需要使用更大的磁盘,就需要使用GPT(GUID Partition Table)分区方案。环境系统:Rocky Linux release 8.5工具:fdisk from util-linux 2.32.1硬盘:虚拟机添加了一块20G的硬盘实践主分区[root@host ~]# lsblk -p // 通过lsblk来查看一下新增的硬盘位置(/dev/sdb) NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0 1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm / └─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk /dev/sr0 11:0 1 1024M 0 rom [root@host ~]# fdisk /dev/sdb // 使用fdisk工具对/dev/sdb这块硬盘进行分区 Welcome to fdisk (util-linux 2.32.1). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Device does not contain a recognized partition table. Created a new DOS disklabel with disk identifier 0x178d8de5. Command (m for help): n // 输入n进行新建分区操作 Partition type p primary (0 primary, 0 extended, 4 free) e extended (container for logical partitions) Select (default p): p // p是创建主分区,e是创建扩展分区 Partition number (1-4, default 1): 1 // 选择分区号,默认为1,这里可以改其他的(MBR分区最多有4个主分区) First sector (2048-41943039, default 2048): // 起始扇区选择默认即可 Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039): +5G //设置分区大小,我这里设置5G Created a new partition 1 of type 'Linux' and of size 5 GiB.
//提示创建成功 Command (m for help): p // p查看分区表 Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x178d8de5 Device Boot Start End Sectors Size Id Type /dev/sdb1 2048 10487807 10485760 5G 83 Linux // 创建好的分区 Command (m for help): w // 保存之前的分区操作 The partition table has been altered. Calling ioctl() to re-read partition table. Syncing disks. [root@host ~]# mkfs.xfs /dev/sdb1 // 格式化创建好的分区 meta-data=/dev/sdb1 isize=512 agcount=4, agsize=327680 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=0 = reflink=1 data = bsize=4096 blocks=1310720, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 [root@host ~]# mkdir /sdb1 // 创建挂载位置 [root@host ~]# mount /dev/sdb1 /sdb1 // 将格式化好的分区进行挂载 [root@host ~]# lsblk -p // 查看分区情况 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0 1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm / └─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk └─/dev/sdb1 8:17 0 5G 0 part /sdb1 // 已经挂载好可以使用了 /dev/sr0 11:0 1 1024M 0 rom逻辑分区[root@host ~]# fdisk /dev/sdb // 对/dev/sdb进行分区 Welcome to fdisk (util-linux 2.32.1). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. 
Command (m for help): n // 创建分区 Partition type p primary (1 primary, 0 extended, 3 free) e extended (container for logical partitions) Select (default p): e // 创建扩展分区 Partition number (2-4, default 2): // 分区号 First sector (10487808-41943039, default 10487808): // 设置起始扇区选择默认即可 Last sector, +sectors or +size{K,M,G,T,P} (10487808-41943039, default 41943039): // 设置大小,扩展分区的默认大小是剩余全部存储空间,这里我选择默认。 Created a new partition 2 of type 'Extended' and of size 15 GiB. Command (m for help): p // 查看分区表 Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x178d8de5 Device Boot Start End Sectors Size Id Type /dev/sdb1 2048 10487807 10485760 5G 83 Linux /dev/sdb2 10487808 41943039 31455232 15G 5 Extended // 创建好了一个扩展分区大小为15G Command (m for help): w // 保存分区操作 The partition table has been altered. Syncing disks. [root@host ~]# lsblk -p NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0 1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm / └─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk ├─/dev/sdb1 8:17 0 5G 0 part /sdb1 └─/dev/sdb2 8:18 0 15G 0 part /dev/sr0 11:0 1 1024M 0 rom逻辑分区创建好扩展分区之后就可以在扩展分区里面创建多个逻辑分区,这样就绕开了MBR分区方式最多4个主分区的限制。[root@host ~]# fdisk /dev/sdb // 对/dev/sdb进行分区 Welcome to fdisk (util-linux 2.32.1). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Command (m for help): n // 新建分区 All space for primary partitions is in use. Adding logical partition 5 // 当创建好扩展分区之后就会发现逻辑分区号会从5开始增加 First sector (10489856-41943039, default 10489856): // 磁盘起始位置,默认即可 Last sector, +sectors or +size{K,M,G,T,P} (10489856-41943039, default 41943039): +1G //设置大小 Created a new partition 5 of type 'Linux' and of size 1 GiB.
Command (m for help): n // 继续创建 All space for primary partitions is in use. Adding logical partition 6 First sector (12589056-41943039, default 12589056): Last sector, +sectors or +size{K,M,G,T,P} (12589056-41943039, default 41943039): +1G Created a new partition 6 of type 'Linux' and of size 1 GiB. Command (m for help): n // 继续创建 All space for primary partitions is in use. Adding logical partition 7 First sector (14688256-41943039, default 14688256): Last sector, +sectors or +size{K,M,G,T,P} (14688256-41943039, default 41943039): +1G Created a new partition 7 of type 'Linux' and of size 1 GiB. Command (m for help): w // 保存 The partition table has been altered. Calling ioctl() to re-read partition table. Syncing disks. [root@host ~]# lsblk -p // 查看分区表 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0 1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm / └─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk ├─/dev/sdb1 8:17 0 5G 0 part /sdb1 ├─/dev/sdb2 8:18 0 1K 0 part ├─/dev/sdb5 8:21 0 1G 0 part ├─/dev/sdb6 8:22 0 1G 0 part └─/dev/sdb7 8:23 0 1G 0 part /dev/sr0 11:0 1 1024M 0 rom这里会发现sdb磁盘的分区号已经用到了7(实际共5个分区:sdb1、扩展分区sdb2以及逻辑分区sdb5-7)。删除分区以sdb1为例,sdb1这个分区已经挂载到了/sdb1目录已经在使用了,在删除分区前需要取消挂载。具体操作方法如下[root@host ~]# lsblk -p // 查看分区状态 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0 1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm / └─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk ├─/dev/sdb1 8:17 0 5G 0 part /sdb1 // 发现/dev/sdb1已经挂载到/sdb1目录下 ├─/dev/sdb2 8:18 0 1K 0 part ├─/dev/sdb5 8:21 0 1G 0 part ├─/dev/sdb6 8:22 0 1G 0 part └─/dev/sdb7 8:23 0 1G 0 part /dev/sr0 11:0 1 1024M 0 rom [root@host ~]# umount /sdb1 // 取消挂载 [root@host ~]# lsblk -p NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0 1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm /
└─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk ├─/dev/sdb1 8:17 0 5G 0 part // 已经取消挂载了 ├─/dev/sdb2 8:18 0 1K 0 part ├─/dev/sdb5 8:21 0 1G 0 part ├─/dev/sdb6 8:22 0 1G 0 part └─/dev/sdb7 8:23 0 1G 0 part /dev/sr0 11:0 1 1024M 0 rom [root@host ~]# fdisk /dev/sdb Welcome to fdisk (util-linux 2.32.1). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Command (m for help): d // d(delete)删除分区 Partition number (1,2,5-7, default 7): 1 // 删除分区号,这里选择1 Partition 1 has been deleted. Command (m for help): w // 保存位置 The partition table has been altered. Calling ioctl() to re-read partition table. Syncing disks. [root@host ~]# lsblk -p // 查看磁盘状态 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0 1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm / └─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk // 发现sdb1已经消失 ├─/dev/sdb2 8:18 0 1K 0 part ├─/dev/sdb5 8:21 0 1G 0 part ├─/dev/sdb6 8:22 0 1G 0 part └─/dev/sdb7 8:23 0 1G 0 part /dev/sr0 11:0 1 1024M 0 rom使用分区格式化分区创建好分区之后需要格式化之后才可以挂载使用,格式化需要使用mkfs工具,这里不多讲。关于格式化的格式可以使用mkfs.来查看[root@host ~]# mkfs. 
mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.minix mkfs.xfs这里使用xfs来格式化创建的分区(扩展分区本身不能直接格式化,只能格式化其中的逻辑分区)。[root@host ~]# lsblk -p // 查看分区状态 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0 1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm / └─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk ├─/dev/sdb2 8:18 0 1K 0 part ├─/dev/sdb5 8:21 0 1G 0 part ├─/dev/sdb6 8:22 0 1G 0 part └─/dev/sdb7 8:23 0 1G 0 part /dev/sr0 11:0 1 1024M 0 rom [root@host ~]# mkfs.xfs /dev/sdb5 // 使用xfs类型格式化sdb5 meta-data=/dev/sdb5 isize=512 agcount=4, agsize=65536 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=0 = reflink=1 data = bsize=4096 blocks=262144, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 [root@host ~]# mkfs.xfs /dev/sdb6 // 使用xfs类型格式化sdb6 meta-data=/dev/sdb6 isize=512 agcount=4, agsize=65536 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=0 = reflink=1 data = bsize=4096 blocks=262144, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 [root@host ~]# mkfs.xfs /dev/sdb7 // 使用xfs类型格式化sdb7 meta-data=/dev/sdb7 isize=512 agcount=4, agsize=65536 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=0 = reflink=1 data = bsize=4096 blocks=262144, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 [root@host ~]# lsblk -p // 查看分区状态 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0
1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm / └─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk ├─/dev/sdb2 8:18 0 1K 0 part ├─/dev/sdb5 8:21 0 1G 0 part ├─/dev/sdb6 8:22 0 1G 0 part └─/dev/sdb7 8:23 0 1G 0 part /dev/sr0 11:0 1 1024M 0 rom挂载分区当格式化分区之后可以通过mount来进行挂载,挂载好的分区就可以使用了。[root@host ~]# mkdir -p /disk/sdb{5..7} // 创建挂载目录 [root@host ~]# mount /dev/sdb5 /disk/sdb5 // 挂载sdb5 [root@host ~]# mount /dev/sdb6 /disk/sdb6 // 挂载sdb6 [root@host ~]# mount /dev/sdb7 /disk/sdb7 // 挂载sdb7 [root@host ~]# lsblk -p // 查看分区状态 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT /dev/sda 8:0 0 20G 0 disk ├─/dev/sda1 8:1 0 1G 0 part /boot └─/dev/sda2 8:2 0 19G 0 part ├─/dev/mapper/rl-root 253:0 0 17G 0 lvm / └─/dev/mapper/rl-swap 253:1 0 2G 0 lvm [SWAP] /dev/sdb 8:16 0 20G 0 disk ├─/dev/sdb2 8:18 0 1K 0 part ├─/dev/sdb5 8:21 0 1G 0 part /disk/sdb5 ├─/dev/sdb6 8:22 0 1G 0 part /disk/sdb6 └─/dev/sdb7 8:23 0 1G 0 part /disk/sdb7 /dev/sr0 11:0 1 1024M 0 rom Containerd-容器管理 https://blog.boychai.xyz/index.php/archives/47/ 2023-01-12T14:30:00+00:00 Containerd概述什么是ContainerdContainerd是一个行业标准的容器运行时,强调简单性、健壮性和可移植性。它可以作为Linux和Windows的守护进程使用,它可以管理其主机系统的完整容器生命周期:镜像传输和存储、容器执行和监督、低级存储和网络附件等。Docker和Containerd的关系最开始Containerd是Docker的一部分,但是Docker公司把Containerd剥离出来并捐赠给了一个开源社区(CNCF)单独发展和运营。阿里云,AWS, Google,IBM和Microsoft将参与到Containerd的开发中。为什么要学习Containerdkubernetes在1.5版本就发布了CRI(Container Runtime Interface)容器运行时接口,但是Docker是不符合这个标准的,Docker在当时又占据了大部分市场,直接弃用Docker是不可能的,所以当时kubernetes单独维护了一个适配器(dockershim)给Docker用。Docker的功能有很多,实际kubernetes用到的功能只是一小部分,而那些用不到的功能本身就可能带来安全隐患。kubernetes在1.20版本就发布消息打算弃用Docker,不再默认支持Docker作为容器运行时,在1.24版本正式弃用(移除dockershim)。在1.24之后的版本如果还想使用Docker作为底层的容器管理工具则需要单独安装dockershim。Containerd是支持CRI标准的,所以自然也就将容器运行时切换到Containerd上面了。安装ContainerdYUM直接使用docker的镜像源安装即可。[root@host ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo [root@host ~]#
yum -y install containerd.io
......
[root@host ~]# rpm -qa containerd.io
containerd.io-1.6.15-3.1.el8.x86_64

Enable containerd at boot and start it:
systemctl enable --now containerd

Binary package
Containerd ships two kinds of archives:
- containerd-xxx: fine for single-machine testing; it does not include runc, which must be installed beforehand.
- cri-containerd-cni-xxxx: includes runc and the files needed by Kubernetes; a k8s cluster requires this package. Although it bundles runc, it depends on the system's seccomp (secure computing mode, used to restrict system resources).
This article installs the second kind.

Getting the package
Download from GitHub. The version used here is cri-containerd-cni-1.6.15-linux-amd64.tar.gz; upload it to the server:
[root@host ~]# mkdir containerd
[root@host ~]# mv cri-containerd-cni-1.6.15-linux-amd64.tar.gz containerd/
[root@host ~]# cd containerd
[root@host containerd]# tar xvf cri-containerd-cni-1.6.15-linux-amd64.tar.gz
[root@host containerd]# ls
cri-containerd-cni-1.6.15-linux-amd64.tar.gz  etc  opt  usr

Manual install
[root@host containerd]# cp ./etc/systemd/system/containerd.service /etc/systemd/system/
[root@host containerd]# cp usr/local/sbin/runc /usr/sbin/
[root@host containerd]# cp usr/local/bin/ctr /usr/bin/
[root@host containerd]# cp ./usr/local/bin/containerd /usr/local/bin/
[root@host containerd]# mkdir /etc/containerd
[root@host containerd]# containerd config default > /etc/containerd/config.toml

Adjust the configuration
[root@host containerd]# cat /etc/containerd/config.toml |grep sandbox
    sandbox_image = "registry.k8s.io/pause:3.6"
This parameter points to an image registry that is blocked in mainland China. Replace it with the command below; the replacement address is a mirror the author keeps on Docker Hub.
[root@host containerd]# sed -i 's/registry.k8s.io\/pause:3.6/docker.io\/boychai\/pause:3.6/g' /etc/containerd/config.toml
[root@test containerd]# cat /etc/containerd/config.toml |grep sandbox_image
    sandbox_image = "docker.io/boychai/pause:3.6"

Start the service
[root@host containerd]# systemctl enable --now containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
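The sandbox_image change above is a plain sed substitution. The sketch below rehearses it against a scratch file standing in for /etc/containerd/config.toml, so a typo cannot break the live config; the mirror address docker.io/boychai/pause:3.6 is the author's, and any other reachable pause mirror works the same way.

```shell
# Rehearse the sandbox_image rewrite on a scratch copy of the config.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
EOF
# Using '|' as the sed delimiter avoids escaping every '/' in the image path.
sed -i 's|registry.k8s.io/pause:3.6|docker.io/boychai/pause:3.6|g' "$tmp"
grep sandbox_image "$tmp"   # shows the rewritten sandbox_image line
```

Only after grep confirms the rewritten line would you apply the same sed to the real /etc/containerd/config.toml and restart containerd.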
[root@host containerd]# ctr version
Client:
  Version:  v1.6.15
  Revision: 5b842e528e99d4d4c1686467debf2bd4b88ecd86
  Go version: go1.18.9
Server:
  Version:  v1.6.15
  Revision: 5b842e528e99d4d4c1686467debf2bd4b88ecd86
  UUID: ebf1fe8b-37f7-4d94-8277-788e9f2c2a17
[root@test containerd]# runc -v
runc version 1.1.4
commit: v1.1.4-0-g5fd4c4d1
spec: 1.0.2-dev
go: go1.18.9
libseccomp: 2.5.1

Image Management
Help
[root@host ~]# ctr images -h
NAME:
   ctr images - manage images
USAGE:
   ctr images command [command options] [arguments...]
COMMANDS:
   check                    check existing images to ensure all content is available locally
   export                   export images
   import                   import images
   list, ls                 list images known to containerd
   mount                    mount an image to a target path
   unmount                  unmount the image from the target
   pull                     pull an image from a remote
   push                     push an image to a remote
   delete, del, remove, rm  remove one or more images by reference
   tag                      tag an image
   label                    set and clear labels for an image
   convert                  convert an image
OPTIONS:
   --help, -h  show help

Command overview
- check: verify images
- export: export an image
- import: import an image
- list, ls: list images
- mount: mount an image
- unmount: unmount an image
- pull: pull an image
- push: push an image
- delete, del, remove, rm: delete an image
- tag: retag an image
- label: change labels
- convert: convert an image
"images" can be abbreviated; for example, "ctr i -h" lists the help.

Pulling images
containerd supports OCI-standard images, so images from Docker Hub or images built from a Dockerfile can be used.
ctr i pull IMAGE_NAME
[root@host ~]# ctr images pull docker.io/library/nginx:alpine
docker.io/library/nginx:alpine: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6: exists |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:c1b9fe3c0c015486cf1e4a0ecabe78d05864475e279638e9713eb55f013f907f: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c7a81ce22aacea2d1c67cfd6d3c335e4e14256b4ffb80bc052c3977193ba59ba: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9: done
|++++++++++++++++++++++++++++++++++++++|
layer-sha256:83e90619bc2e4993eafde3a1f5caf5172010f30ba87bbc5af3d06ed5ed93a9e9: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:d52adec6f48bc3fe2c544a2003a277d91d194b4589bb88d47f4cfa72eb16015d: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:10eb2ce358fad29dd5edb0d9faa50ff455c915138fdba94ffe9dd88dbe855fbe: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:a1be370d6a525bc0ae6cf9840a642705ae1b163baad16647fd44543102c08581: exists |++++++++++++++++++++++++++++++++++++++|
layer-sha256:689b9959905b6f507f527ce377d7c742a553d2cda8d3529c3915fb4a95ad45bf: exists |++++++++++++++++++++++++++++++++++++++|
elapsed: 11.2s  total: 15.7 M (1.4 MiB/s)
unpacking linux/amd64 sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6...
done: 709.697156ms

Listing images
ctr images <ls|list>
[root@host ~]# ctr images ls
REF                             TYPE                                                        DIGEST                                                                    SIZE      PLATFORMS                                                                    LABELS
docker.io/library/nginx:alpine  application/vnd.docker.distribution.manifest.list.v2+json  sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6  15.9 MiB  linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x  -

Mounting an image
Inspect an image's filesystem:
ctr images mount IMAGE_NAME LOCAL_DIR
[root@host ~]# mkdir /mnt/nginx-alpine
[root@host ~]# ctr images mount docker.io/library/nginx:alpine /mnt/nginx-alpine/
sha256:a71c46316a83c0ac8c2122376a89b305936df99fa354c265f5ad2c1825e94167
/mnt/nginx-alpine/
[root@host ~]# cd /mnt/nginx-alpine/
[root@host nginx-alpine]# ls
bin  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Unmounting an image
Unmount an image filesystem that was mounted locally:
ctr images unmount LOCAL_DIR
[root@host ~]# ctr images unmount /mnt/nginx-alpine/
/mnt/nginx-alpine/
[root@host ~]# ls /mnt/nginx-alpine/

Exporting images
ctr images export --platform PLATFORM OUTPUT_FILE IMAGE_NAME
[root@host ~]# ctr images export --platform linux/amd64 nginx.tar docker.io/library/nginx:alpine
[root@host ~]# ls
anaconda-ks.cfg  containerd  nginx.tar

Deleting images
ctr images delete|del|remove|rm
IMAGE_NAME
[root@host ~]# ctr images del docker.io/library/nginx:alpine
docker.io/library/nginx:alpine
[root@host ~]# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS

Importing images
ctr images import IMAGE_FILE
[root@host ~]# ctr images import nginx.tar
unpacking docker.io/library/nginx:alpine (sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6)...done
[root@host ~]# ctr images ls
REF                             TYPE                                                        DIGEST                                                                    SIZE      PLATFORMS                                                                    LABELS
docker.io/library/nginx:alpine  application/vnd.docker.distribution.manifest.list.v2+json  sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6  15.9 MiB  linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x  -

Retagging images
Change an image's name:
ctr images tag OLD_IMAGE NEW_IMAGE
[root@host ~]# ctr images ls
REF                             TYPE                                                        DIGEST                                                                    SIZE      PLATFORMS                                                                    LABELS
docker.io/library/nginx:alpine  application/vnd.docker.distribution.manifest.list.v2+json  sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6  15.9 MiB  linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x  -
[root@host ~]# ctr images tag docker.io/library/nginx:alpine nginx:alpine
nginx:alpine
[root@host ~]# ctr images ls
REF                             TYPE                                                        DIGEST                                                                    SIZE      PLATFORMS                                                                    LABELS
docker.io/library/nginx:alpine  application/vnd.docker.distribution.manifest.list.v2+json  sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6  15.9 MiB  linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x  -
nginx:alpine                    application/vnd.docker.distribution.manifest.list.v2+json  sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6  15.9 MiB
linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x  -

Container Management

Kubernetes - Container Orchestration Engine (Resource Limits - LimitRange)
https://blog.boychai.xyz/index.php/archives/46/
2022-12-20T02:57:00+00:00

LimitRange Overview
By default, containers in a Kubernetes cluster have no limits along the compute-resource dimension, so an individual container may claim excessive resources and disturb the normal operation of other containers. LimitRange can define default CPU/memory requests for containers, or cap their maximums.
LimitRange can constrain:
- the minimum and maximum of a container's requests.cpu/memory and limits.cpu/memory
- the defaults of a container's requests.cpu/memory and limits.cpu/memory
- the minimum and maximum of a PVC's requests.storage

Prerequisites
LimitRange is an admission-control plugin and is enabled by default. To check whether it is enabled:
[root@master ~]# kubectl -n kube-system get pod|grep apiserver
kube-apiserver-master.host.com 1/1 Running 28 (95m ago) 62d
[root@master ~]# kubectl -n kube-system exec kube-apiserver-master.host.com -- kube-apiserver -h|grep enable-admission-plugins
--admission-control strings
    Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
--enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. 
The order of plugins in this flag does not matter.

Searching "--enable-admission-plugins" for "LimitRanger" shows it is already enabled.

Manifests
Compute-resource minimum and maximum:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-memory-max-min
  namespace: test
spec:
  limits:
  - max:
      cpu: 1
      memory: 1Gi
    min:
      cpu: 100m
      memory: 100Mi
    type: Container
"max" holds the largest limit a container may set; "min" holds the smallest request a container may set.

Compute-resource defaults:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-memory-max-min
  namespace: test
spec:
  limits:
  - max:
      cpu: 1
      memory: 1Gi
    min:
      cpu: 100m
      memory: 100Mi
    default:
      cpu: 500m
      memory: 500Mi
    defaultRequest:
      cpu: 100m
      memory: 100Mi
    type: Container
"default" sets the default limit; "defaultRequest" sets the default request.

Storage minimum and maximum:
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-max-min
  namespace: test
spec:
  limits:
  - max:
      storage: 10Gi
    min:
      storage: 1Gi
    type: PersistentVolumeClaim
This constrains the PVC usage dimension.

Status
[root@master cks]# kubectl get limits -n test
NAME                 CREATED AT
cpu-memory-max-min   2022-12-17T11:01:11Z
storage-max-min      2022-12-17T10:59:56Z
[root@master cks]# kubectl describe limits -n test
Name:       cpu-memory-max-min
Namespace:  test
Type       Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----       --------  ---    ---  ---------------  -------------  -----------------------
Container  memory    100Mi  1Gi  100Mi            500Mi          -
Container  cpu       100m   1    100m             500m           -

Name:       storage-max-min
Namespace:  test
Type                   Resource  Min  Max   Default Request  Default Limit  Max Limit/Request Ratio
----                   --------  ---  ---   ---------------  -------------  -----------------------
PersistentVolumeClaim  storage   1Gi  10Gi  -                -              -

Kubernetes - Container Orchestration Engine (Resource Quotas - ResourceQuota)
https://blog.boychai.xyz/index.php/archives/45/
2022-12-17T06:36:00+00:00

ResourceQuota Overview
When multiple teams share one Kubernetes cluster, resource usage can become uneven; by default resources go first come, first served. ResourceQuota caps the total resource usage of a namespace, solving this problem.

Prerequisites
ResourceQuota is an admission-control plugin and is enabled by default. To check whether it is enabled:
[root@master ~]# kubectl -n kube-system get pod|grep apiserver
kube-apiserver-master.host.com 1/1 Running 27 (17h ago) 61d
[root@master ~]# kubectl -n kube-system exec kube-apiserver-master.host.com --
kube-apiserver -h|grep enable-admission-plugins --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.) --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). 
Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

Searching "--enable-admission-plugins" for "ResourceQuota" shows it is already enabled.

Supported resources
- limits.cpu/memory: total limits configured across all Pods (in non-terminal states) must not exceed this value
- requests.cpu/memory: total requests configured across all Pods (in non-terminal states) must not exceed this value
- cpu/memory: equivalent to requests.cpu/requests.memory
- requests.storage: total capacity requested by all PVCs must not exceed this value
- persistentvolumeclaims: total number of PVCs must not exceed this value
- <storage-class-name>.storageclass.storage.k8s.io/requests.storage: total capacity requested by PVCs of <storage-class-name> must not exceed this value
- <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims: total number of PVCs of <storage-class-name> must not exceed this value
- pods, count/deployments.apps, count/statefulsets.apps, count/services (services.loadbalancers, services.nodeports), count/secrets, count/configmaps, count/jobs.batch, count/cronjobs.batch: number of objects created must not exceed this value

Manifests
Compute-resource quota:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: test
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 10Gi
    limits.cpu: "4"
    limits.memory: 20Gi

Storage-resource quota:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-resources
  namespace: test
spec:
  hard:
    requests.storage: 10Gi
    managed-nfs-storage.storageclass.storage.k8s.io/requests.storage: 10Gi
"managed-nfs-storage" is the name of the dynamic storage class.

Object-count quota:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace:
 test
spec:
  hard:
    pods: "10"
    count/deployments.apps: "3"
    count/services: "3"
These limit object counts; the namespace totals must not exceed these values.

Quota status
[root@master ~]# kubectl get quota -n test
NAME                AGE    REQUEST                                                                                                     LIMIT
compute-resources   41m    requests.cpu: 0/4, requests.memory: 0/10Gi                                                                  limits.cpu: 0/6, limits.memory: 0/12Gi
object-counts       4m6s   count/deployments.apps: 0/3, count/services: 0/3, pods: 0/10
storage-resources   6m16s  managed-nfs-storage.storageclass.storage.k8s.io/requests.storage: 0/10Gi, requests.storage: 0/10Gi
This command shows how much of each quota is currently in use.
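To see how the compute-resources quota is accounted against workloads, consider a hypothetical Deployment in the test namespace (the name quota-demo and the nginx image are illustrative, not from the original): admission sums requests and limits across all replicas and rejects any Pod that would push a total past the quota.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-demo        # hypothetical name for illustration
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: quota-demo
  template:
    metadata:
      labels:
        app: quota-demo
    spec:
      containers:
      - name: web
        image: nginx:alpine
        resources:
          requests:
            cpu: 200m      # 3 x 200m  = 600m  <= requests.cpu "1"
            memory: 512Mi  # 3 x 512Mi = 1.5Gi <= requests.memory 10Gi
          limits:
            cpu: 500m      # 3 x 500m  = 1.5   <= limits.cpu "4"
            memory: 1Gi    # 3 x 1Gi   = 3Gi   <= limits.memory 20Gi
```

Scaling this Deployment high enough to exceed any of those totals would leave the extra Pods unscheduled with a quota-exceeded event, and the usage can be watched climbing in the kubectl get quota output shown above.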