Deploying GlusterFS Distributed Storage on Kubernetes

Posted 2025-04-05 · Updated 2025-04-06
By Administrator

I. Preface

Environment: CentOS 7.9, k8s 1.23.0

GlusterFS documentation: https://docs.gluster.org/en/latest

In-tree GlusterFS support was deprecated in Kubernetes v1.25; see https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes

Important note: Kubernetes can provision GlusterFS storage either statically or dynamically, and the two approaches are configured differently. For static provisioning you attach the disks yourself, create LVM volumes and filesystems, mount them, install a GlusterFS cluster, create the volumes, and then consume them from Kubernetes. For dynamic provisioning you combine the heketi service with GlusterFS: heketi manages and provisions the GlusterFS cluster, and Kubernetes only needs a StorageClass that points at heketi. Heketi requires the GlusterFS servers' disks to be raw, unformatted block devices.
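For completeness, here is a minimal sketch of what the static path looks like (it is not demonstrated further in this article). It assumes a GlusterFS cluster already exists with a volume named gv0, and uses the in-tree glusterfs volume plugin together with an Endpoints object listing the Gluster servers:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 11.0.1.21
  - ip: 11.0.1.22
  ports:
  - port: 1          # required by the API; the value itself is not used
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs-static-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: glusterfs-cluster   # the Endpoints object above
    path: gv0                      # name of an existing GlusterFS volume
    readOnly: false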

Heketi is a RESTful storage management tool for GlusterFS volumes, mainly used to dynamically provision GlusterFS storage for container platforms such as Kubernetes or OpenShift.

Heketi's main functions

  1. Automated GlusterFS storage management

    • Heketi can create, expand, and delete GlusterFS volumes automatically, with no manual steps.

    • It manages bricks, volumes, and storage pools.

  2. Dynamic provisioning

    • In Kubernetes, Heketi can serve as the backend of a StorageClass and create Persistent Volumes (PVs) on demand.

  3. High availability (HA) support

    • Heketi can manage multiple GlusterFS clusters and keep data highly available.

  4. REST API

    • It exposes an API for other systems (such as a Kubernetes StorageClass) to call, enabling automated storage management (see the curl sketch after this list).

Typical Heketi use cases

  • Persistent storage for Kubernetes/OpenShift:

    • Dynamically create GlusterFS volumes through a StorageClass for Pods to use.

  • Cloud-native storage management:

    • Provide scalable distributed storage in private or hybrid clouds.

  • Replacing traditional NFS/iSCSI:

    • GlusterFS provides distributed, scalable storage, and Heketi makes it easier to manage.
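As a quick illustration of that REST API (a minimal sketch, not part of the deployment steps below): once the deploy-heketi Service from section 1 is running, you can call it directly. The ClusterIP used here is the one that Service ends up with later in this article, and with use_auth set to false in heketi.json no token is required.

# health-check endpoint (the same one the readiness/liveness probes use later)
[root@k8s-master1 ~]# curl http://10.96.92.195:8080/hello
# list the clusters Heketi knows about (empty until a topology is loaded)
[root@k8s-master1 ~]# curl http://10.96.92.195:8080/clusters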

II. Deployment

I will go straight to dynamic provisioning here; static provisioning is not demonstrated.

My lab environment:

Companion deployment tooling: https://github.com/zhentianxiang/Centos7-ansible-k8s-kubeadm-on-line-deploy

[root@k8s-master1 ~]# kubectl get node -o wide
NAME            STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master1     Ready    control-plane,master   2d20h   v1.23.0   11.0.1.11     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-master2     Ready    control-plane,master   2d20h   v1.23.0   11.0.1.12     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-master3     Ready    control-plane,master   2d20h   v1.23.0   11.0.1.13     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-node1       Ready    <none>                 2d20h   v1.23.0   11.0.1.17     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-node2       Ready    <none>                 2d20h   v1.23.0   11.0.1.18     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-node3       Ready    <none>                 2d20h   v1.23.0   11.0.1.19     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-1   Ready    <none>                 171m    v1.23.0   11.0.1.21     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-2   Ready    <none>                 171m    v1.23.0   11.0.1.22     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-3   Ready    <none>                 171m    v1.23.0   11.0.1.23     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-4   Ready    <none>                 171m    v1.23.0   11.0.1.24     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-5   Ready    <none>                 82m     v1.23.0   11.0.1.25     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9

hosts entries

[root@k8s-master1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# k8s HA VIP
11.0.1.20   apiserver.cluster.local

# k8s group hosts
11.0.1.11 k8s-master1
11.0.1.12 k8s-master2
11.0.1.13 k8s-master3
11.0.1.17 k8s-node1
11.0.1.18 k8s-node2
11.0.1.19 k8s-node3
11.0.1.21 k8s-storage-1
11.0.1.22 k8s-storage-2
11.0.1.23 k8s-storage-3
11.0.1.24 k8s-storage-4
11.0.1.25 k8s-storage-5

# etcd group hosts
11.0.1.14 etcd1
11.0.1.15 etcd2
11.0.1.16 etcd3

# custom hosts entries
127.0.0.1   registry.example.com

1. Install Heketi

1.1 Configure user permissions

[root@k8s-master1 ~]# vim heketi-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
  namespace: default  # explicitly set the namespace (defaults to "default" if omitted)
[root@k8s-master1 ~]# vim heketi-gluster-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heketi-gluster-admin
subjects:
- kind: ServiceAccount
  name: heketi-service-account
  namespace: default  # must match the ServiceAccount's namespace
roleRef:
  kind: ClusterRole
  name: edit  # use the built-in "edit" ClusterRole (allows managing Pods, PVCs, etc.)
  apiGroup: rbac.authorization.k8s.io
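After applying the two manifests, the binding can be sanity-checked by impersonating the ServiceAccount (a minimal check; Heketi mainly needs exec access into the GlusterFS pods):

[root@k8s-master1 ~]# kubectl apply -f heketi-service-account.yaml -f heketi-gluster-admin.yaml
[root@k8s-master1 ~]# kubectl auth can-i create pods --subresource=exec --as=system:serviceaccount:default:heketi-service-account
# expect: yes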

1.2 Configure storage for the Heketi database (PV/PVC)

[root@k8s-master1 ~]# vim heketi-db-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: heketi-db-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/heketi-db"
  persistentVolumeReclaimPolicy: Retain
[root@k8s-master1 ~]# vim heketi-db-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heketi-db-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
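The PV above uses a hostPath, and the Deployment in section 1.5 pins Heketi to k8s-storage-1 via nodeName, so that directory has to exist on that node (adjust the host if you schedule Heketi elsewhere):

[root@k8s-master1 ~]# ssh k8s-storage-1 "mkdir -p /mnt/heketi-db"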

1.3 Configure the Service

[root@k8s-master1 ~]# vim heketi-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    name: deploy-heketi
  ports:
    - name: deploy-heketi
      port: 8080
      targetPort: 8080

1.4 Create the configuration file

[root@k8s-master1 ~]# vim heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "MySecret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "MySecret"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "kubernetes",

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "kubeexec": {
      "rebalance_on_expansion": true
    },

    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "fstab": "/etc/fstab",
      "port": "22",
      "user": "root",
      "sudo": false
    }
  },

  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false
}
[root@k8s-master1 ~]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
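Optionally verify that the secret was created and carries the config file:

[root@k8s-master1 ~]# kubectl describe secret heketi-config-secret
# the Data section should show a heketi.json entry with its size in bytes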

1.5 Configure the main Deployment

[root@k8s-master1 ~]# vim heketi-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-deployment
    deploy-heketi: deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  selector:
    matchLabels:
      name: deploy-heketi
      glusterfs: heketi-pod
      deploy-heketi: pod
  replicas: 1
  template:
    metadata:
      name: deploy-heketi
      labels:
        name: deploy-heketi
        glusterfs: heketi-pod
        deploy-heketi: pod
    spec:
      serviceAccountName: heketi-service-account
      nodeName: k8s-storage-1
      containers:
        - image: harbor.linuxtian.com/glusterfs/heketi:dev
          imagePullPolicy: Always
          name: deploy-heketi
          env:
            - name: HEKETI_EXECUTOR
              value: kubernetes
            - name: HEKETI_DB_PATH
              value: /var/lib/heketi/heketi.db
            - name: HEKETI_FSTAB
              value: /var/lib/heketi/fstab
            - name: HEKETI_SNAPSHOT_LIMIT
              value: '14'
            - name: HEKETI_KUBE_GLUSTER_DAEMONSET
              value: 'y'
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: db
              mountPath: /var/lib/heketi
            - name: config
              mountPath: /etc/heketi
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 3
            httpGet:
              path: /hello
              port: 8080
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 30
            httpGet:
              path: /hello
              port: 8080
      volumes:
        - name: db
          persistentVolumeClaim:
            claimName: heketi-db-pvc
        - name: config
          secret:
            secretName: heketi-config-secret

1.6 Start it up

[root@k8s-master1 ~]# kubectl apply -f .
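Before continuing, it is worth confirming that Heketi answers on its Service (a minimal check against the /hello endpoint that the probes above also use):

[root@k8s-master1 ~]# kubectl get pods,svc -l deploy-heketi
[root@k8s-master1 ~]# curl http://$(kubectl get svc deploy-heketi -o jsonpath='{.spec.clusterIP}'):8080/hello
# should return a short greeting from Heketi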

2. Install GlusterFS

Pick the machines to deploy on. Note that each machine must have a raw, unused disk (/dev/sdb here).

2.1 Label the nodes

[root@k8s-master1 ~]# kubectl label nodes {k8s-storage-1,k8s-storage-2,k8s-storage-3,k8s-storage-4,k8s-storage-5} storagenode=glusterfs

2.2 Configure the DaemonSet

[root@k8s-master1 ~]# cat glusterfs-daemonset.yaml 
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: deployment
  annotations:
    description: GlusterFS Daemon Set
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs-node: daemonset
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: daemonset
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
        - image: harbor.linuxtian.com/glusterfs/gluster-centos:gluster3u12_centos7
          imagePullPolicy: Always
          name: glusterfs
          volumeMounts:
            - name: glusterfs-heketi
              mountPath: /var/lib/heketi
            - name: glusterfs-run
              mountPath: /run
            - name: glusterfs-lvm
              mountPath: /run/lvm
            - name: glusterfs-etc
              mountPath: /etc/glusterfs
            - name: glusterfs-logs
              mountPath: /var/log/glusterfs
            - name: glusterfs-config
              mountPath: /var/lib/glusterd
            - name: glusterfs-dev
              mountPath: /dev
            - name: glusterfs-cgroup
              mountPath: /sys/fs/cgroup
          securityContext:
            capabilities: {}
            privileged: true
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
                - /bin/bash
                - '-c'
                - systemctl status glusterd.service
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
                - /bin/bash
                - '-c'
                - systemctl status glusterd.service
      volumes:
        - name: glusterfs-heketi
          hostPath:
            path: /var/lib/heketi
        - name: glusterfs-run   # no volume source given; Kubernetes defaults this to an emptyDir
        - name: glusterfs-lvm
          hostPath:
            path: /run/lvm
        - name: glusterfs-etc
          hostPath:
            path: /etc/glusterfs
        - name: glusterfs-logs
          hostPath:
            path: /var/log/glusterfs
        - name: glusterfs-config
          hostPath:
            path: /var/lib/glusterd
        - name: glusterfs-dev
          hostPath:
            path: /dev
        - name: glusterfs-cgroup
          hostPath:
            path: /sys/fs/cgroup

2.3 Start the service

[root@k8s-master1 ~]# kubectl apply -f .
[root@k8s-master1 ~]# kubectl get pods  -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
deploy-heketi-76db74548-c6m2p      1/1     Running   0          34m   192.18.254.70   k8s-storage-1   <none>           <none>
glusterfs-6b544                    1/1     Running   0          37m   11.0.1.24       k8s-storage-4   <none>           <none>
glusterfs-7qtfl                    1/1     Running   0          37m   11.0.1.25       k8s-storage-5   <none>           <none>
glusterfs-dbzpz                    1/1     Running   0          37m   11.0.1.23       k8s-storage-3   <none>           <none>
glusterfs-pft5x                    1/1     Running   0          37m   11.0.1.22       k8s-storage-2   <none>           <none>
glusterfs-q62jn                    1/1     Running   0          37m   11.0.1.21       k8s-storage-1   <none>           <none>
[root@k8s-master1 ~]# kubectl get svc 
NAME                                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
deploy-heketi                                            ClusterIP   10.96.92.195    <none>        8080/TCP       35m
glusterfs-dynamic-4e5ee5ee-7d36-467d-b473-14a8f7c1c3b9   ClusterIP   10.96.21.153    <none>        1/TCP          24m
glusterfs-dynamic-4ea8afa9-d797-4614-83f8-b1909289faee   ClusterIP   10.96.235.192   <none>        1/TCP          25m
glusterfs-dynamic-4fe90a27-9b3e-4518-9568-a697223dc048   ClusterIP   10.96.60.252    <none>        1/TCP          80m
glusterfs-dynamic-dbc8b754-2c58-4c22-848c-acc2ad0f92f4   ClusterIP   10.96.167.127   <none>        1/TCP          27m
glusterfs-dynamic-f13a97d9-264b-461e-a4c2-181461778803   ClusterIP   10.96.37.155    <none>        1/TCP          69m

3. Load the GlusterFS nodes into Heketi

Note: I do this on master1, which is also the machine that has passwordless SSH access to the other nodes.

3.1 Configure environment variables

[root@k8s-master1 ~]# cat ~/.bash_profile 
export HEKETI_CLI_SERVER=http://10.96.92.195:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=MySecret
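The heketi-cli client itself is not provided by the manifests above. A minimal sketch of getting it onto master1, assuming the heketi-client RPM is reachable (for example via the centos-release-gluster / CentOS Storage SIG repository; adjust to however you normally install it):

[root@k8s-master1 ~]# yum -y install centos-release-gluster && yum -y install heketi-client
[root@k8s-master1 ~]# source ~/.bash_profile      # pick up the HEKETI_CLI_* variables above
[root@k8s-master1 ~]# heketi-cli cluster list     # should connect; the list stays empty until the topology is loaded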

3.2 Configure the topology file

[root@k8s-master1 join]# cat topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-storage-1"
              ],
              "storage": [
                "11.0.1.21"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-storage-2"
              ],
              "storage": [
                "11.0.1.22"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-storage-3"
              ],
              "storage": [
                "11.0.1.23"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-storage-4"
              ],
              "storage": [
                "11.0.1.24"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-storage-5"
              ],
              "storage": [
                "11.0.1.25"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}

3.3 Load the topology

[root@k8s-master1 ~]# heketi-cli topology load --json=topology.json --user=admin --secret=MySecret
        Found node k8s-storage-1 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Found device /dev/sdb ... OK
        Found node k8s-storage-2 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Found device /dev/sdb ... OK
        Found node k8s-storage-3 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Found device /dev/sdb ... OK
        Found node k8s-storage-4 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Found device /dev/sdb ... OK
        Found node k8s-storage-5 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Adding device /dev/sdb ... OK

If a disk turns out not to be clean, a script like the following can be used to wipe it.

[root@k8s-master1 ~]# cat clean-dev.sh
#!/bin/bash

# Define node list as an array
NODES=("k8s-storage-1" "k8s-storage-2" "k8s-storage-3" "k8s-storage-4" "k8s-storage-5" "k8s-storage-6")
DEVICE="/dev/sdb"

# Loop through each node
for NODE in "${NODES[@]}"; do
  echo "===== Processing node $NODE ====="

  # 1. Wipe filesystem signatures
  ssh "$NODE" "echo '>>> running wipefs'; wipefs -a $DEVICE"

  # 2. Overwrite the first 100 MB (clears the partition table and metadata)
  ssh "$NODE" "echo '>>> running dd'; dd if=/dev/zero of=$DEVICE bs=1M count=100"

  # 3. Install gdisk (if not already installed)
  ssh "$NODE" "echo '>>> installing gdisk'; yum -y install gdisk || apt-get install -y gdisk"

  # 4. Zap the GPT partition table
  ssh "$NODE" "echo '>>> running sgdisk'; sgdisk --zap-all $DEVICE"

  # 5. Verify the device state
  ssh "$NODE" "echo '>>> checking device state'; lsblk -f $DEVICE"

  echo "===== Node $NODE done ====="
  echo
done

echo "Disk cleanup finished on all nodes!"

3.4 View the topology status

[root@k8s-master1 ~]# heketi-cli topology info  --user admin --secret 'MySecret'

Cluster Id: 60bcdef21cf3b279e66d4d2ee7cf6abc

    File:  true
    Block: true

    Volumes:


    Nodes:

        Node Id: 3b94663709bb2ba3a81f640a80d6389e
        State: online
        Cluster Id: 60bcdef21cf3b279e66d4d2ee7cf6abc
        Zone: 1
        Management Hostnames: k8s-storage-4
        Storage Hostnames: 11.0.1.24
        Devices:
                Id:ffcf85d91f51c52a1b4b7237dbc56636   State:online    Size (GiB):2047    Used (GiB):0       Free (GiB):2047    
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: 45ff8f919539faeb80180bd92dd0d263
        State: online
        Cluster Id: 60bcdef21cf3b279e66d4d2ee7cf6abc
        Zone: 1
        Management Hostnames: k8s-storage-5
        Storage Hostnames: 11.0.1.25
        Devices:
                Id:c187279bc07cb3c1bd0ac03ed237822f   State:online    Size (GiB):2047    Used (GiB):0       Free (GiB):2047    
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: 8e4688921620da7f9080c5874a7b2ff0
        State: online
        Cluster Id: 60bcdef21cf3b279e66d4d2ee7cf6abc
        Zone: 1
        Management Hostnames: k8s-storage-2
        Storage Hostnames: 11.0.1.22
        Devices:
                Id:b3ada6db2696ffb6ba142a7e0a5b4c59   State:online    Size (GiB):2047    Used (GiB):0       Free (GiB):2047    
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: bc38a45da38cb9bacf3d65a5d141df58
        State: online
        Cluster Id: 60bcdef21cf3b279e66d4d2ee7cf6abc
        Zone: 1
        Management Hostnames: k8s-storage-3
        Storage Hostnames: 11.0.1.23
        Devices:
                Id:fc43c74c8c48dd2e8bc61e5f5c50cc02   State:online    Size (GiB):2047    Used (GiB):0       Free (GiB):2047    
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: e8dec71ec50cb20c5103674ce8005854
        State: online
        Cluster Id: 60bcdef21cf3b279e66d4d2ee7cf6abc
        Zone: 1
        Management Hostnames: k8s-storage-1
        Storage Hostnames: 11.0.1.21
        Devices:
                Id:6adeb2c5f5c014c40c42a16ea08a6b42   State:online    Size (GiB):2047    Used (GiB):0       Free (GiB):2047    
                        Known Paths: /dev/sdb

                        Bricks:
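At this point the GlusterFS daemons should also see each other as peers, because Heketi probes them while loading the topology. A quick check from inside any of the glusterfs pods (pod name taken from the earlier kubectl get pods output):

[root@k8s-master1 ~]# kubectl exec -it glusterfs-q62jn -- gluster peer status
# expect "Number of Peers: 4" with every peer in state "Peer in Cluster (Connected)"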

4. Verify the storage

4.1 Configure the StorageClass

[root@k8s-master1 ~]# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storageclass                          # StorageClass name
provisioner: kubernetes.io/glusterfs                    # provisioner for this StorageClass; this is the in-tree Kubernetes driver
reclaimPolicy: Retain                                   # what happens to the PV when the PVC is deleted; here it is retained
volumeBindingMode: WaitForFirstConsumer                 # binding mode; the volume is only created once a Pod requests it
allowVolumeExpansion: true                              # whether volume expansion is allowed; must be true to grow PVCs
parameters:                                             # GlusterFS parameters
  resturl: "http://10.96.92.195:8080"                   # address and port of the heketi service
  clusterid: "60bcdef21cf3b279e66d4d2ee7cf6abc"         # cluster id, shown by heketi-cli cluster list on the heketi server
  restauthenabled: "true"                               # enable authentication against the Gluster REST (heketi) server
  restuser: "admin"                                     # heketi admin user
  secretNamespace: "default"                            # namespace the secret lives in
  secretName: "heketi-secret"                           # secret holding the heketi admin user's password
  volumetype: "none"                                    # volume type; none means a plain distributed volume, see the official docs for other types

4.2 Configure the Secret

[root@k8s-master1 ~]# cat heketi-secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
type: kubernetes.io/glusterfs
data:
  #echo -n "MySecret" | base64
  key: TXlTZWNyZXQ=

4.3 Configure the PVC

[root@k8s-master1 ~]# cat nginx-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    pvc: glusterfs
  name: glusterfs-pvc
  namespace: default
spec:
  storageClassName: glusterfs-storageclass  # StorageClass to use
  accessModes:
  - ReadWriteMany  # shared storage
  resources:
    requests:
      storage: 600Mi

4.4 Configure the Nginx service

[root@k8s-master1 ~]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default  # must be in the same namespace as the PVC
spec:
  replicas: 3  # run 3 Pods to test the shared storage
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: shared-storage
          mountPath: /usr/share/nginx/html  # mount at Nginx's default static content directory
      volumes:
      - name: shared-storage
        persistentVolumeClaim:
          claimName: glusterfs-pvc  # the PVC created above
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
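After applying the PVC and the Deployment, a quick way to confirm the replicas really share the same volume (a minimal sketch; the node IP and the test file are just examples):

[root@k8s-master1 ~]# kubectl apply -f nginx-pvc.yaml -f nginx-deployment.yaml
# write a page through one replica
[root@k8s-master1 ~]# kubectl exec deploy/nginx-deployment -- sh -c 'echo "hello from glusterfs" > /usr/share/nginx/html/index.html'
# read it back through the NodePort Service (any node IP works)
[root@k8s-master1 ~]# NODEPORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
[root@k8s-master1 ~]# curl http://11.0.1.21:${NODEPORT}/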

4.5 Check the mount

[root@k8s-master1 test]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
deploy-heketi-76db74548-c6m2p       1/1     Running   0          58m   192.18.254.70   k8s-storage-1   <none>           <none>
glusterfs-6b544                     1/1     Running   0          60m   11.0.1.24       k8s-storage-4   <none>           <none>
glusterfs-7qtfl                     1/1     Running   0          60m   11.0.1.25       k8s-storage-5   <none>           <none>
glusterfs-dbzpz                     1/1     Running   0          60m   11.0.1.23       k8s-storage-3   <none>           <none>
glusterfs-pft5x                     1/1     Running   0          60m   11.0.1.22       k8s-storage-2   <none>           <none>
glusterfs-q62jn                     1/1     Running   0          60m   11.0.1.21       k8s-storage-1   <none>           <none>
nginx-deployment-55cd5bd96c-cd99l   1/1     Running   0          56s   192.18.247.6    k8s-storage-2   <none>           <none>
nginx-deployment-55cd5bd96c-lvnr2   1/1     Running   0          56s   192.18.247.5    k8s-storage-2   <none>           <none>
nginx-deployment-55cd5bd96c-sdwxp   1/1     Running   0          56s   192.18.247.7    k8s-storage-2   <none>           <none>

[root@k8s-master1 test]# kubectl exec -it nginx-deployment-55cd5bd96c-cd99l -- bash

root@nginx-deployment-55cd5bd96c-cd99l:/# df -h
Filesystem                                      Size  Used Avail Use% Mounted on
overlay                                          99G  6.7G   93G   7% /
tmpfs                                            64M     0   64M   0% /dev
tmpfs                                           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root                          99G  6.7G   93G   7% /etc/hosts
shm                                              64M     0   64M   0% /dev/shm
11.0.1.25:vol_70d0c30b91791648568a7906653075d4 1014M   33M  982M   4% /usr/share/nginx/html
tmpfs                                           7.7G   12K  7.7G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                           3.9G     0  3.9G   0% /proc/acpi
tmpfs                                           3.9G     0  3.9G   0% /proc/scsi
tmpfs                                           3.9G     0  3.9G   0% /sys/firmware

4.6 Check the GlusterFS volume status

Inside the container above you can see the volume is served from 11.0.1.25.

[root@k8s-master1 ~]# kubectl exec -it glusterfs-7qtfl -- bash
[root@k8s-storage-5 /]# gluster volume list
vol_70d0c30b91791648568a7906653075d4
[root@k8s-storage-5 /]# gluster volume info vol_70d0c30b91791648568a7906653075d4
 
Volume Name: vol_70d0c30b91791648568a7906653075d4
Type: Distribute
Volume ID: 03c2b3a5-4167-49ed-899d-7efd625c9d45
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 11.0.1.21:/var/lib/heketi/mounts/vg_6adeb2c5f5c014c40c42a16ea08a6b42/brick_80e6abd85df85f1b3805b11cfdfcabc2/brick
Options Reconfigured:
user.heketi.id: 70d0c30b91791648568a7906653075d4
transport.address-family: inet
nfs.disable: on

However, the volume's actual data is provided by 11.0.1.21. This is because the volume is a distributed (Distribute) volume: data is spread across bricks, and here there is only a single brick, so it is effectively a single point of failure.

5. volumetype: storage volume types

5.1 Replicate:5 (replicated volume)

  • Meaning: the data is kept in 5 copies across the cluster (each write is stored on 5 different nodes/bricks).

  • Storage usage:

    • Actual space: 1 GB of raw data occupies 5 GB (5 replicas).

    • Redundancy: up to 4 replicas can fail at the same time without losing data.

    • Typical use: core financial trading systems or other data that must survive multiple node/data-center failures.

  • Example:

    • With 5 nodes, writing file A places a copy of A on each of the 5 nodes.

    • If any one node fails, the data can still be read from the other four.


5.2 Disperse:4:2 (erasure-coded volume)

  • Meaning: uses erasure coding to split data into 4 data chunks plus 2 parity chunks (6 chunks in total). Up to 2 chunks (data or parity) can be lost without losing data.

  • Storage usage:

    • Actual space: 4 GB of raw data occupies 6 GB (4+2). Storage efficiency is 4/6 ≈ 66.7%.

    • Redundancy: any 2 node failures (or the loss of any 2 chunks) do not affect data integrity.

    • Typical use: scenarios that need to balance space efficiency and redundancy (e.g. cold-data archiving).

  • Example:

    • A file is split into data chunks D1, D2, D3, D4, and parity chunks P1, P2 are computed.

    • The chunks are spread over 6 different nodes. Even if D1 and P1 are lost, the original data can be reconstructed from the remaining chunks.


5.3 Other common volume types

  • Distribute (distributed volume):

    • No redundancy; files are spread across different nodes.

    • 100% storage efficiency, but no fault tolerance (any node failure loses part of the data).

  • Distributed-Replicate (distributed replicated volume):

    • Combines distribution and replication, e.g. replicate:3 across multiple node groups, with 3 copies kept within each group.

5.4 Key comparison

Type           Redundancy mechanism   Storage overhead   Failures tolerated   Storage efficiency
replicate:5    full replicas          5x raw data        4                    20%
disperse:4:2   erasure coding         1.5x raw data      2                    66.7%
distribute     none                   1x raw data        0                    100%
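For reference, the same volume types can also be requested directly with heketi-cli instead of through a StorageClass (a sketch using heketi-cli's --durability, --replica, --disperse-data and --redundancy options; the sizes are arbitrary examples):

# 3-way replicated 10 GB volume
[root@k8s-master1 ~]# heketi-cli volume create --size=10 --durability=replicate --replica=3
# 4+2 erasure-coded 10 GB volume
[root@k8s-master1 ~]# heketi-cli volume create --size=10 --durability=disperse --disperse-data=4 --redundancy=2
# plain distributed volume, no redundancy
[root@k8s-master1 ~]# heketi-cli volume create --size=10 --durability=none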

6. Using the erasure-coded (disperse) volume type

6.1 First delete the previously created volumes; leaving them around can cause problems (it did for me)

[root@k8s-master1 ~]# kubectl delete deployment nginx-deployment
[root@k8s-master1 ~]# kubectl delete pvc nginx-html

[root@k8s-master1 ~]# heketi-cli volume list
Id:291f741a641b68730bf7b473063b7f1c    Cluster:60bcdef21cf3b279e66d4d2ee7cf6abc    Name:vol_291f741a641b68730bf7b473063b7f1c
Id:59aaa9b9bb03e4c780e95f669629a423    Cluster:60bcdef21cf3b279e66d4d2ee7cf6abc    Name:vol_59aaa9b9bb03e4c780e95f669629a423
Id:70d0c30b91791648568a7906653075d4    Cluster:60bcdef21cf3b279e66d4d2ee7cf6abc    Name:vol_70d0c30b91791648568a7906653075d4
Id:f0fe3fe0d880433c8d359f4087f2078d    Cluster:60bcdef21cf3b279e66d4d2ee7cf6abc    Name:vol_f0fe3fe0d880433c8d359f4087f2078d

[root@k8s-master1 ~]# heketi-cli volume delete 291f741a641b68730bf7b473063b7f1c
[root@k8s-master1 ~]# heketi-cli volume delete 59aaa9b9bb03e4c780e95f669629a423
[root@k8s-master1 ~]# heketi-cli volume delete 70d0c30b91791648568a7906653075d4
[root@k8s-master1 ~]# heketi-cli volume delete f0fe3fe0d880433c8d359f4087f2078d

6.2 Add one more node: a disperse 4+2 volume needs 6 bricks (4 data + 2 redundancy) on separate nodes, so a sixth storage node is required

[root@k8s-master1 ~]# kubectl get node -o wide -l storagenode=glusterfs
NAME            STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-storage-1   Ready    <none>   4h47m   v1.23.0   11.0.1.21     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-2   Ready    <none>   4h47m   v1.23.0   11.0.1.22     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-3   Ready    <none>   4h47m   v1.23.0   11.0.1.23     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-4   Ready    <none>   4h47m   v1.23.0   11.0.1.24     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-5   Ready    <none>   3h19m   v1.23.0   11.0.1.25     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
k8s-storage-6   Ready    <none>   3m6s    v1.23.0   11.0.1.26     <none>        CentOS Linux 7 (Core)   5.4.160-1.el7.elrepo.x86_64   docker://20.10.9
[root@k8s-master1 ~]# kubectl label nodes k8s-storage-6 storagenode=glusterfs
node/k8s-storage-6 labeled
[root@k8s-master1 ~]# kubectl get pods 
NAME                            READY   STATUS              RESTARTS   AGE
deploy-heketi-76db74548-c6m2p   1/1     Running             0          136m
glusterfs-6b544                 1/1     Running             0          139m
glusterfs-7qtfl                 1/1     Running             0          139m
glusterfs-8fp42                 0/1     ContainerCreating   0          3s
glusterfs-dbzpz                 1/1     Running             0          139m
glusterfs-pft5x                 1/1     Running             0          139m
glusterfs-q62jn                 1/1     Running             0          139m

6.3 Add the new node to Heketi

# newly added node entry
[root@k8s-master1 ~]# vim topology.json
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-storage-6"
              ],
              "storage": [
                "11.0.1.26"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
[root@k8s-master1 ~]# heketi-cli topology load --json=topology.json --user=admin --secret=MySecret
        Found node k8s-storage-1 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Found device /dev/sdb
        Found node k8s-storage-2 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Found device /dev/sdb
        Found node k8s-storage-3 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Found device /dev/sdb
        Found node k8s-storage-4 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Found device /dev/sdb
        Found node k8s-storage-5 on cluster 60bcdef21cf3b279e66d4d2ee7cf6abc
                Found device /dev/sdb
        Creating node k8s-storage-6 ... ID: cedafe6c806ffd4d14b031e41826730d
                Adding device /dev/sdb ... OK

6.4 Change the StorageClass volume type and parameters

[root@k8s-master1 ~]# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storageclass                          # StorageClass name
provisioner: kubernetes.io/glusterfs                    # provisioner for this StorageClass; this is the in-tree Kubernetes driver
reclaimPolicy: Retain                                   # what happens to the PV when the PVC is deleted; here it is retained
volumeBindingMode: WaitForFirstConsumer                 # binding mode; the volume is only created once a Pod requests it
allowVolumeExpansion: true                              # whether volume expansion is allowed; must be true to grow PVCs
parameters:                                             # GlusterFS parameters
  resturl: "http://10.96.92.195:8080"                   # address and port of the heketi service
  clusterid: "60bcdef21cf3b279e66d4d2ee7cf6abc"         # cluster id, shown by heketi-cli cluster list on the heketi server
  restauthenabled: "true"                               # enable authentication against the Gluster REST (heketi) server
  restuser: "admin"                                     # heketi admin user
  secretNamespace: "default"                            # namespace the secret lives in
  secretName: "heketi-secret"                           # secret holding the heketi admin user's password
  volumetype: "disperse:4:2"                            # volume type
  volumeoptions: "performance.quick-read off, performance.read-ahead off" # disable quick-read and read-ahead

6.5 Error when creating the PVC

[root@k8s-master1 ~]# kubectl logs --tail=100 -f deploy-heketi-76db74548-c6m2p
[heketi] INFO 2025/04/05 10:47:52 Cleaned 0 nodes from health cache
[negroni] 2025-04-05T10:48:40Z | 400 |   742.817µs | 10.96.92.195:8080 | POST /volumes
[heketi] ERROR 2025/04/05 10:48:40 heketi/apps/glusterfs/app_volume.go:143:glusterfs.(*App).VolumeCreate: Requested volume size (1 GB) is smaller than the minimum supported volume size (4194304)

The reason is that the minimum volume size here is 4 GB, so the PVC request has to be increased.

[root@k8s-master1 ~]# cat nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    pvc: nginx
  name: nginx-html
  namespace: default
spec:
  storageClassName: glusterfs-storageclass              # StorageClass to use
  accessModes:
  - ReadWriteMany       # shared storage
  resources:
    requests:
      storage: 5Gi

It still failed for reasons I did not track down, so I gave up debugging and simply deleted everything and recreated it.

[root@k8s-master1 ~]# kubectl delete daemonsets.apps glusterfs
[root@k8s-master1 ~]# kubectl delete deployments.apps deploy-heketi
[root@k8s-master1 ~]# kubectl delete pvc heketi-db-pvc
[root@k8s-master1 ~]# kubectl delete pv heketi-db-pv
[root@k8s-master1 ~]# ssh k8s-storage-1 rm -rf /mnt/*
[root@k8s-master1 ~]# kubectl apply -f heketi-deployment.yaml
[root@k8s-master1 ~]# kubectl apply -f glusterfs-daemonset.yaml
[root@k8s-master1 ~]# heketi-cli node list
[root@k8s-master1 ~]# heketi-cli topology load --json=topology.json --user=admin --secret=MySecret
Creating cluster ... ID: 8a801840667d689a4b1ece3f5bc2a3bf
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node k8s-storage-1 ... ID: 949f4b85e7094aba556ad3988744a898
                Adding device /dev/sdb ... OK
        Creating node k8s-storage-2 ... ID: f5fad5aae2fde315dfa52348c74f7f28
                Adding device /dev/sdb ... OK
        Creating node k8s-storage-3 ... ID: 26d5272ff1e173d59d45223e5c297f64
                Adding device /dev/sdb ... OK
        Creating node k8s-storage-4 ... ID: a698c16b19fd6aad7f58ee1071923a2b
                Adding device /dev/sdb ... OK
        Creating node k8s-storage-5 ... ID: 837f5f4ed19f8b2862833f2c88651453
                Adding device /dev/sdb ... OK
        Creating node k8s-storage-6 ... ID: 5006297fd0b2b6b1e06a619b95b0460d
[root@k8s-master1 ~]# heketi-cli topology info  --user admin --secret 'MySecret'

6.6 Create the PVC again

# update the cluster ID to the new one
[root@k8s-master1 ~]# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storageclass                          # StorageClass name
provisioner: kubernetes.io/glusterfs                    # provisioner for this StorageClass; this is the in-tree Kubernetes driver
reclaimPolicy: Retain                                   # what happens to the PV when the PVC is deleted; here it is retained
volumeBindingMode: WaitForFirstConsumer                 # binding mode; the volume is only created once a Pod requests it
allowVolumeExpansion: true                              # whether volume expansion is allowed; must be true to grow PVCs
parameters:                                             # GlusterFS parameters
  resturl: "http://10.96.92.195:8080"                   # address and port of the heketi service
  clusterid: "8a801840667d689a4b1ece3f5bc2a3bf"         # cluster id, shown by heketi-cli cluster list on the heketi server
  restauthenabled: "true"                               # enable authentication against the Gluster REST (heketi) server
  restuser: "admin"                                     # heketi admin user
  secretNamespace: "default"                            # namespace the secret lives in
  secretName: "heketi-secret"                           # secret holding the heketi admin user's password
  volumetype: "disperse:4:2"
  volumeoptions: "performance.quick-read off, performance.read-ahead off" # disable quick-read and read-ahead
[root@k8s-master1 test]# kubectl apply -f .
[root@k8s-master1 ~]# kubectl logs --tail=100 -f deploy-heketi-76db74548-jqznx
[asynchttp] INFO 2025/04/05 11:21:42 Completed job 9b47dbb2b1452e13b45b710a75d02cd5 in 5.077856787s
[negroni] 2025-04-05T11:21:42Z | 303 |   112.788µs | 10.96.92.195:8080 | GET /queue/9b47dbb2b1452e13b45b710a75d02cd5
[negroni] 2025-04-05T11:21:42Z | 200 |   1.880966ms | 10.96.92.195:8080 | GET /volumes/486482aaefc98d7b4c2c09247545a5a4
[negroni] 2025-04-05T11:21:42Z | 200 |   237.843µs | 10.96.92.195:8080 | GET /clusters/8a801840667d689a4b1ece3f5bc2a3bf
[negroni] 2025-04-05T11:21:42Z | 200 |   428.679µs | 10.96.92.195:8080 | GET /nodes/26d5272ff1e173d59d45223e5c297f64
[negroni] 2025-04-05T11:21:42Z | 200 |   489.267µs | 10.96.92.195:8080 | GET /nodes/5006297fd0b2b6b1e06a619b95b0460d
[negroni] 2025-04-05T11:21:42Z | 200 |   549.412µs | 10.96.92.195:8080 | GET /nodes/837f5f4ed19f8b2862833f2c88651453
[negroni] 2025-04-05T11:21:42Z | 200 |   541.01µs | 10.96.92.195:8080 | GET /nodes/949f4b85e7094aba556ad3988744a898
[negroni] 2025-04-05T11:21:42Z | 200 |   477.147µs | 10.96.92.195:8080 | GET /nodes/a698c16b19fd6aad7f58ee1071923a2b
[negroni] 2025-04-05T11:21:42Z | 200 |   320.111µs | 10.96.92.195:8080 | GET /nodes/f5fad5aae2fde315dfa52348c74f7f28

No errors this time.

[root@k8s-master1 test]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
heketi-db-pvc   Bound    heketi-db-pv                               1Gi        RWO            manual                   6m37s
nginx-html      Bound    pvc-152ba2e9-fd25-4e89-8089-cc56c8011795   5Gi        RWX            glusterfs-storageclass   84s

[root@k8s-master1 test]# heketi-cli node list
Id:26d5272ff1e173d59d45223e5c297f64     Cluster:8a801840667d689a4b1ece3f5bc2a3bf
Id:5006297fd0b2b6b1e06a619b95b0460d     Cluster:8a801840667d689a4b1ece3f5bc2a3bf
Id:837f5f4ed19f8b2862833f2c88651453     Cluster:8a801840667d689a4b1ece3f5bc2a3bf
Id:949f4b85e7094aba556ad3988744a898     Cluster:8a801840667d689a4b1ece3f5bc2a3bf
Id:a698c16b19fd6aad7f58ee1071923a2b     Cluster:8a801840667d689a4b1ece3f5bc2a3bf
Id:f5fad5aae2fde315dfa52348c74f7f28     Cluster:8a801840667d689a4b1ece3f5bc2a3bf

[root@k8s-master1 test]# heketi-cli volume list
Id:486482aaefc98d7b4c2c09247545a5a4    Cluster:8a801840667d689a4b1ece3f5bc2a3bf    Name:vol_486482aaefc98d7b4c2c09247545a5a4

[root@k8s-master1 test]# heketi-cli volume info 486482aaefc98d7b4c2c09247545a5a4
Name: vol_486482aaefc98d7b4c2c09247545a5a4
Size: 5
Volume Id: 486482aaefc98d7b4c2c09247545a5a4
Cluster Id: 8a801840667d689a4b1ece3f5bc2a3bf
Mount: 11.0.1.23:vol_486482aaefc98d7b4c2c09247545a5a4
Mount Options: backup-volfile-servers=11.0.1.26,11.0.1.25,11.0.1.21,11.0.1.24,11.0.1.22
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: disperse
Distribute Count: 1
Disperse Data Count: 4
Disperse Redundancy Count: 2
Snapshot Factor: 1.00

6.7 Test write speed

[root@k8s-master1 test]# kubectl exec -it nginx-deployment-55cd5bd96c-pljrz -- bash
root@nginx-deployment-55cd5bd96c-pljrz:/# cd /usr/share/nginx/
root@nginx-deployment-55cd5bd96c-pljrz:/usr/share/nginx/html# dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.18966 s, 903 MB/s
root@nginx-deployment-55cd5bd96c-pljrz:/usr/share/nginx/html# dd if=/dev/zero of=/tmp/testfile bs=10M count=1024 oflag=direct
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 8.08307 s, 1.3 GB/s
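Note that both dd commands above write to /tmp/testfile, which lives on the container's overlay filesystem rather than on the GlusterFS mount, so those numbers do not reflect the volume itself. To benchmark the disperse volume, point the output file at the mounted directory, for example:

root@nginx-deployment-55cd5bd96c-pljrz:/usr/share/nginx/html# dd if=/dev/zero of=./testfile bs=1M count=1024 oflag=direct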

Tags: 云原生与容器技术, kubernetes
License: CC BY 4.0