CentOS 7: Upgrading Kubernetes 1.23 to 1.24

Posted 2025-03-09 · Updated 2025-03-09
By Administrator

This document walks through upgrading a Kubernetes cluster from 1.23 to 1.24.x, covering the upgrade steps for both the control plane and the worker nodes. The upgrade involves updating kubeadm, kubelet, and kubectl, as well as cordoning/draining nodes and re-enabling scheduling afterwards. Starting with 1.24, Kubernetes no longer ships the dockershim, so it no longer depends on Docker; containerd is the recommended runtime. Pay attention to each node's scheduling state throughout the upgrade to keep the cluster stable.

1. Check the current version

[root@k8s-master1 ~]# kubectl get node
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   6m29s   v1.23.0
k8s-master2   Ready    control-plane,master   5m26s   v1.23.0
k8s-master3   Ready    control-plane,master   5m26s   v1.23.0
k8s-node1     Ready    <none>                 4m38s   v1.23.0

2. Check the current pod status

[root@k8s-master1 ~]# kubectl get pods -A
NAMESPACE       NAME                                            READY   STATUS      RESTARTS   AGE
ingress-nginx   ingress-nginx-admission-create-4scb5            0/1     Completed   0          4m30s
ingress-nginx   ingress-nginx-admission-patch-pn4x7             0/1     Completed   0          4m30s
kube-system     calico-kube-controllers-84985dc8d9-g48f4        1/1     Running     0          4m32s
kube-system     calico-node-9gr6s                               1/1     Running     0          4m32s
kube-system     calico-node-dh5mz                               1/1     Running     0          4m32s
kube-system     calico-node-dzqc7                               1/1     Running     0          4m32s
kube-system     calico-node-r8p2n                               1/1     Running     0          4m32s
kube-system     coredns-65c54cc984-2nmhc                        1/1     Running     0          6m24s
kube-system     coredns-65c54cc984-wkwsz                        1/1     Running     0          6m24s
kube-system     kube-apiserver-k8s-master1                      1/1     Running     0          6m30s
kube-system     kube-apiserver-k8s-master2                      1/1     Running     0          5m29s
kube-system     kube-apiserver-k8s-master3                      1/1     Running     0          5m29s
kube-system     kube-controller-manager-k8s-master1             1/1     Running     0          6m30s
kube-system     kube-controller-manager-k8s-master2             1/1     Running     0          5m29s
kube-system     kube-controller-manager-k8s-master3             1/1     Running     0          5m29s
kube-system     kube-proxy-7jwqh                                1/1     Running     0          6m25s
kube-system     kube-proxy-b2nd6                                1/1     Running     0          5m29s
kube-system     kube-proxy-fgn9h                                1/1     Running     0          5m29s
kube-system     kube-proxy-m6msm                                1/1     Running     0          4m41s
kube-system     kube-scheduler-k8s-master1                      1/1     Running     0          6m30s
kube-system     kube-scheduler-k8s-master2                      1/1     Running     0          5m28s
kube-system     kube-scheduler-k8s-master3                      1/1     Running     0          5m28s
openebs         openebs-localpv-provisioner-7c5c5fc45b-8qkq9    1/1     Running     0          4m29s
openebs         openebs-ndm-cluster-exporter-66b9b998f7-pzlsk   1/1     Running     0          4m29s
openebs         openebs-ndm-fph66                               1/1     Running     0          4m1s
openebs         openebs-ndm-node-exporter-sccwd                 1/1     Running     0          4m1s
openebs         openebs-ndm-operator-69f9f59c4f-lkzh5           1/1     Running     0          4m29s

3. Cordon the node to stop scheduling

[root@k8s-master1 ~]# kubectl cordon k8s-master1 
node/k8s-master1 cordoned
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS                     ROLES           AGE   VERSION
k8s-master1   Ready,SchedulingDisabled   control-plane   99m   v1.23.0
k8s-master2   Ready                      control-plane   98m   v1.23.0
k8s-master3   Ready                      control-plane   98m   v1.23.0
k8s-node1     Ready                      <none>          97m   v1.23.0
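
The cordon above only marks k8s-master1 unschedulable. The introduction also mentions draining; if you want to evict the running workloads before upgrading (an optional extra step, not run in the original transcript), a drain would look like this:

kubectl drain k8s-master1 --ignore-daemonsets --delete-emptydir-data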

4. Install the target version of kubeadm

[root@k8s-master1 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes |grep 1.24.0
kubeadm.x86_64                       1.24.0-0                        kubernetes
[root@k8s-master1 ~]# yum install -y kubeadm-1.24.0-0 --disableexcludes=kubernetes
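
The --disableexcludes=kubernetes flag implies the repo file pins the kube packages with an exclude line so a routine yum update cannot bump them. A typical /etc/yum.repos.d/kubernetes.repo for this setup (an assumed example using the Aliyun mirror, consistent with the image registry used later in this post) would be:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
exclude=kubelet kubeadm kubectl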

5. Validate the upgrade plan

[root@k8s-master1 ~]#  kubeadm upgrade plan

The output is as follows:

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0309 20:03:10.393145   35173 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.23.0
[upgrade/versions] kubeadm version: v1.24.0
I0309 20:03:16.033598   35173 version.go:255] remote version is much newer: v1.32.2; falling back to: stable-1.24
[upgrade/versions] Target version: v1.24.17
[upgrade/versions] Latest version in the v1.23 series: v1.23.17

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     4 x v1.23.0   v1.23.17

Upgrade to the latest version in the v1.23 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.23.0   v1.23.17
kube-controller-manager   v1.23.0   v1.23.17
kube-scheduler            v1.23.0   v1.23.17
kube-proxy                v1.23.0   v1.23.17
CoreDNS                   v1.8.6    v1.8.6

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.23.17

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     4 x v1.23.0   v1.24.17

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.23.0   v1.24.17
kube-controller-manager   v1.23.0   v1.24.17
kube-scheduler            v1.23.0   v1.24.17
kube-proxy                v1.23.0   v1.24.17
CoreDNS                   v1.8.6    v1.8.6

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.24.17

Note: Before you can perform this upgrade, you have to update kubeadm to v1.24.17.

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

6. Talk to containerd over its Unix socket

[root@k8s-master1 ~]# crictl config runtime-endpoint unix:///run/containerd/containerd.sock
[root@k8s-master1 ~]# crictl config image-endpoint unix:///run/containerd/containerd.sock
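
Both commands persist their setting to /etc/crictl.yaml, so after running them the file should contain roughly the following (any extra defaults may vary by crictl version):

runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock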

7. Make kubelet use containerd as the container runtime

With the default Docker runtime, the file looks like this:

[root@k8s-node1 ~]# cat /var/lib/kubelet/kubeadm-flags.env 
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"

Change it to the containerd runtime:

[root@k8s-master1 ~]# vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"

8. Generate the containerd configuration

[root@k8s-master1 ~]# containerd config default > /etc/containerd/config.toml

[root@k8s-master1 ~]# vim /etc/containerd/config.toml
63     sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
125             SystemdCgroup = true
147       config_path = "/etc/containerd/certs.d"
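
The three vim edits above (the line numbers refer to positions in the default config) can also be applied non-interactively; a sketch assuming the stock defaults produced by containerd config default:

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml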

[root@k8s-master1 ~]# mkdir -p /etc/containerd/certs.d/{docker.io,registry.k8s.io}

[root@k8s-master1 ~]# cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
  capabilities = ["pull", "resolve", "push"]
  insecure_skip_verify = true
[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
  insecure_skip_verify = true
[host."https://reg-mirror.qiniu.com"]
  capabilities = ["pull", "resolve", "push"]
  insecure_skip_verify = true
[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "resolve", "push"]
  insecure_skip_verify = true
[host."http://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve", "push"]
  insecure_skip_verify = true
EOF
[root@k8s-master1 ~]# cat > /etc/containerd/certs.d/registry.k8s.io/hosts.toml << EOF
server = "https://registry.k8s.io"

[host."https://k8s.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
  insecure_skip_verify = true
EOF

[root@k8s-master1 ~]# systemctl restart containerd
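
To confirm the restart picked up the new configuration (a quick sanity check, not part of the original transcript):

systemctl is-active containerd
crictl info | grep -i systemdcgroup   # should report true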

9. Run the upgrade

[root@k8s-master1 ~]# kubeadm upgrade apply v1.24.0

The output is as follows:

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0309 20:17:57.016533   68419 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.24.0"
[upgrade/versions] Cluster version: v1.23.0
[upgrade/versions] kubeadm version: v1.24.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.24.0" (timeout: 5m0s)...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2250779979"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-03-09-20-18-22/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-03-09-20-18-22/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-03-09-20-18-22/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Removing the deprecated label node-role.kubernetes.io/master='' from all control plane Nodes. After this step only the label node-role.kubernetes.io/control-plane='' will be present on control plane Nodes.
[upgrade/postupgrade] Adding the new taint &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} to all control plane Nodes. After this step both taints &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} and &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} should be present on control plane Nodes.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
W0309 20:19:51.720046   68419 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.24.0". Enjoy!

10. Upgrade kubectl and kubelet

[root@k8s-master1 ~]# yum install -y kubelet-1.24.0-0 kubectl-1.24.0-0 --disableexcludes=kubernetes

11. Update the node's runtime configuration

Switch the node's recorded CRI socket to containerd:

[root@k8s-master1 ~]# kubectl get nodes k8s-master1 -o yaml | sed 's/unix:\/\/\/var\/run\/dockershim.sock/unix:\/\/\/run\/containerd\/containerd.sock/g' | kubectl replace -f -

[root@k8s-master1 ~]# systemctl daemon-reload

[root@k8s-master1 ~]# systemctl restart kubelet
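
The sed-and-replace pipeline above rewrites the node object's kubeadm.alpha.kubernetes.io/cri-socket annotation. An equivalent and arguably clearer way to make the same change (an alternative form, not what the original post ran):

kubectl annotate node k8s-master1 --overwrite \
  kubeadm.alpha.kubernetes.io/cri-socket=unix:///run/containerd/containerd.sock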

12. Verify the upgrade

[root@k8s-master1 ~]# kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:38:19Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:44:24Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master1 ~]# kubelet --version
Kubernetes v1.24.0

[root@k8s-master1 ~]# kubectl get node
NAME          STATUS   ROLES           AGE   VERSION
k8s-master1   Ready    control-plane   44m   v1.24.0
k8s-master2   Ready    control-plane   43m   v1.23.0
k8s-master3   Ready    control-plane   43m   v1.23.0
k8s-node1     Ready    <none>          42m   v1.23.0
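
At this point only k8s-master1 reports v1.24.0. The remaining nodes follow the standard kubeadm flow, one node at a time; a sketch of the per-node commands (assumed from the official upgrade procedure, not shown in the original transcript):

# On each of k8s-master2, k8s-master3 and k8s-node1:
yum install -y kubeadm-1.24.0-0 --disableexcludes=kubernetes
kubeadm upgrade node   # 'upgrade apply' is only needed on the first control-plane node
yum install -y kubelet-1.24.0-0 kubectl-1.24.0-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet

# Back on k8s-master1, re-enable scheduling once each node is done:
kubectl uncordon k8s-master1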

13. Run a test workload

[root@k8s-master1 ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo Hello Kubernetes1.24.0! && sleep 10; done"]
  restartPolicy: OnFailure
  tolerations:  # tolerate the control-plane taints (both keys exist after the 1.24 upgrade)
  - key: "node-role.kubernetes.io/master"  # the taint carries no value, so match with Exists
    operator: "Exists"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"  # taint added by kubeadm during the 1.24 upgrade
    operator: "Exists"
    effect: "NoSchedule"
  nodeName: k8s-master1  # pin the pod directly to k8s-master1 (bypasses the scheduler)
[root@k8s-master1 ~]# kubectl apply -f pod.yaml 
pod/busybox created
[root@k8s-master1 ~]# kubectl get pod -o wide 
NAME      READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          2m    192.18.159.129   k8s-master1   <none>           <none>
[root@k8s-master1 ~]# kubectl logs busybox -f
Hello Kubernetes1.24.0!
Hello Kubernetes1.24.0!

14. Manually test an image pull

[root@k8s-master1 ~]# ctr -n k8s.io images pull docker.io/library/nginx:latest --hosts-dir=/etc/containerd/certs.d
INFO[0000] trying next host                              error="failed to do request: Head \"https://dockerproxy.com/v2/library/nginx/manifests/latest?ns=docker.io\": read tcp 10.0.0.11:45515->144.24.81.189:443: read: connection reset by peer" host=dockerproxy.com
docker.io/library/nginx:latest:                                                   resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:103f50cb3e9f200431b555078cce5e8df3db6ddc2e54d714a10b994e430e98a3:    done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:b52e0b094bc0e26c9eddc9e4ab7a64ce0033c3360d8b7ad4ff4132c4e03e8f7b:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7cf63256a31a4cc44f6defe8e1af95363aee5fa75f30a248d95cae684f87c53c:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:bf9acace214a6c23630803d90911f1fd7d1ba06a3083f0a62fd036a6d1d8e274:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:513c3649bb1480ca9a04c73f320b6b5a909e24e4ac18ae72fd56b818241d6730:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:d014f92d532d416c7b9eadb244f14f73fdb3d2ead120264b749e342700824f3c:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:9dd21ad5a4a6a856d82bb6bb6147c30ad90a9768c3651c55775354e7649bc74d:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:943ea0f0c2e42ccacc72ac65701347eadb2b0cb22828fac30f1400bba3d37088:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 0.5 s                                                                    total:   0.0 B (0.0 B/s)                                         
unpacking linux/amd64 sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496...
done: 12.047049ms
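
ctr talks to containerd directly and bypasses the CRI layer. To exercise the same path the kubelet uses (including the registry mirrors configured under config_path), a pull through crictl can also be tried:

crictl pull docker.io/library/nginx:latest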

Category: Cloud Native & Container Technology
Tags: Containerd, Docker, Kubernetes
License: CC BY 4.0
