Kubernetes Single-Master Cluster Installation

Posted 2025-02-19
By Administrator

This installation targets a virtual-machine test environment.

I. Base Environment Preparation

1. Disable the firewall and SELinux

[root@k8s-master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  && setenforce 0
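
A quick check that both changes took effect (getenforce should report Permissive now, and Disabled after a reboot):

[root@k8s-master ~]# getenforce
[root@k8s-master ~]# systemctl is-active firewalld   # should print "inactive"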

2. Disable the swap partition

When installing Kubernetes, the system's swap partition must be disabled, mainly for the following reasons:

1. Kubernetes scheduling and resource management

When scheduling Pods, Kubernetes decides their placement and count based on each node's available resources (CPU and memory). For this accounting to be accurate, Kubernetes assumes a node's memory is fixed and never silently "swapped" out to disk.

With swap enabled, the operating system may move part of the memory contents to disk (the swap space), which causes the following problems:

  • Inaccurate resource accounting: Kubernetes schedules based on a node's reported memory usage. With swap enabled, memory may effectively be virtualized or resident on disk, so Kubernetes cannot accurately assess the node's real memory when making scheduling decisions.

  • Unstable performance: swapping memory to disk degrades performance significantly, because disk I/O is far slower than memory access. If Kubernetes schedules a memory-hungry application onto a node whose memory has already been swapped out, performance can drop sharply and containers may even crash.

2. Kubernetes memory management

Kubernetes uses cgroups (control groups) to limit and manage container resources, including memory and CPU. If a node has swap enabled, Kubernetes cannot enforce container memory limits precisely; a container may exceed its configured limit and affect the node and other containers.

For example, when a container exceeds its memory limit, Kubernetes normally terminates it and reschedules it. With swap enabled, however, the container's memory may have been paged out to disk, so the container cannot be cleaned up effectively, memory may not be reclaimed, and the node can end up resource-starved or even crash.

3. Node stability

Disabling swap lets Kubernetes control and manage memory resources strictly, ensuring node resources are not oversubscribed and improving cluster stability. Kubernetes deployments typically set container memory limits (e.g. memory: 1Gi) and forbid swap on the node so that containers stay within their limits. With swap enabled, such limits cannot be enforced.

4. Avoiding node/container "deadlock"

If swap is enabled and system memory becomes very tight, the OS may push parts of memory into swap space. The node's responsiveness degrades, containers may be "swapped out" to disk and lose access to the resources they need, and the result can be a node crash or container downtime that hurts application availability.

# Temporarily
[root@k8s-master ~]# swapoff -a
# Permanently
[root@k8s-master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
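
Verify that no swap remains active (the first command should print nothing, and free should show 0 swap):

[root@k8s-master ~]# swapon --show
[root@k8s-master ~]# free -h | grep -i swap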

5. Modify the su-l PAM file

sudo vim /etc/pam.d/su-l

# Append the following line
session required pam_lastlog.so showfailed

After saving, running sudo su - will display:

  • last login

  • last failed login

  • failed login attempts

3. Set the hostname

[root@k8s-master ~]# hostnamectl set-hostname k8s-master

4. Configure hosts resolution

The hosts file contains entries for the master node and the nodes; the nodes' base environment is configured identically to the master.

[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 k8s-master
10.0.0.12 k8s-node01
10.0.0.13 k8s-node02

# Needed for k8s cluster initialization and cluster communication
10.0.0.11 apiserver.cluster.local

5. Kernel tuning: pass bridged IPv4 traffic to iptables chains

[root@k8s-master ~]# lsmod | grep br_netfilter  # confirm the module is loaded
[root@k8s-master ~]# sudo modprobe br_netfilter  # load it first if it is not
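
The modprobe above does not survive a reboot; to load br_netfilter automatically at boot, it can also be registered with systemd-modules-load (optional but recommended):

[root@k8s-master ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF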

[root@k8s-master ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

[root@k8s-master ~]# sysctl --system
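
Confirm the values were applied (both should print = 1):

[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward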

6. Configure system time synchronization

1. First, configure the master machine as the primary time-sync node

[root@k8s-master ~]# vim /etc/ntp.conf
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
 
driftfile /var/lib/ntp/drift
 
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
#restrict default nomodify notrap nopeer noquery
 
# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
#restrict 127.0.0.1
#restrict ::1
 
restrict 10.0.0.2 mask 255.255.255.0 nomodify notrap
 
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
server ntp4.aliyun.com iburst

#server 0.cn.pool.ntp.org
#server 1.asia.pool.ntp.org
#server 2.192.168.148.128
 
server time1.aliyun.com
 
restrict time1.aliyun.com nomodify notrap noquery
restrict ntp1.aliyun.com nomodify notrap noquery
 
server 127.0.0.1
fudge 127.0.0.1 stratum 10
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient   # broadcast client
#broadcast 224.0.1.1 autokey  # multicast server
#multicastclient 224.0.1.1  # multicast client
#manycastserver 239.255.254.254  # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
 
# Enable public key cryptography.
#crypto
 
includefile /etc/ntp/crypto/pw
 
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys
 
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42
 
# Specify the key identifier to use with the ntpdc utility.
#requestkey 8
 
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
 
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
 
# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor

[root@k8s-master ~]# systemctl restart ntpd
# Check the sync status (allow 5-10 minutes)
[root@k8s-master ~]# ntpstat
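
ntpq can also list the peers; the upstream server currently selected for synchronization is marked with an asterisk:

[root@k8s-master ~]# ntpq -p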

Notes on the changes

Hosts on the 10.0.0.0/24 network (netmask 255.255.255.0) are allowed to synchronize time from this NTP server, but may not modify or trap it:

restrict 10.0.0.2 mask 255.255.255.0 nomodify notrap

Define the upstream NTP servers to use, commenting out the originals:

server ntp1.aliyun.com

server time1.aliyun.com

Allow the upstream time servers to supply time to this machine, while preventing them from modifying or querying this ntpd instance:

restrict time1.aliyun.com nomodify notrap noquery

restrict ntp1.aliyun.com nomodify notrap noquery

When external time sources are unreachable, fall back to serving the local clock:

server 127.0.0.1

fudge 127.0.0.1 stratum 10

After the configuration changes are complete, restart the service:

systemctl restart ntpd

2. Node configuration

Modify the configuration on each of the two worker nodes, k8s-node01 and k8s-node02:

[root@k8s-node01~]# vim /etc/ntp.conf
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
 
driftfile /var/lib/ntp/drift
 
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
#restrict default nomodify notrap nopeer noquery
 
# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
#restrict 127.0.0.1
#restrict ::1
 
server 10.0.0.11
restrict 10.0.0.11 nomodify notrap noquery
 
server 127.0.0.1
fudge 127.0.0.1 stratum 10
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
 
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient   # broadcast client
#broadcast 224.0.1.1 autokey  # multicast server
#multicastclient 224.0.1.1  # multicast client
#manycastserver 239.255.254.254  # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
 
# Enable public key cryptography.
#crypto
 
includefile /etc/ntp/crypto/pw
 
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys
 
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42
 
# Specify the key identifier to use with the ntpdc utility.
#requestkey 8
 
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
 
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
 
# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor
[root@k8s-node01~]# systemctl restart ntpd

# Check the sync status
[root@k8s-node01~]# ntpstat

Point the time server at the NTP server built above:

server 10.0.0.11

Allow the NTP server to supply time to this machine, while restricting modification and queries:

restrict 10.0.0.11 nomodify notrap noquery

Likewise configure the local clock as a fallback:

server 127.0.0.1

fudge 127.0.0.1 stratum 10

II. Installing the Main Services

1. Install and configure Docker

Perform the same steps on the master node and all nodes.

[root@k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

[root@k8s-master ~]# yum list docker-ce --showduplicates | sort -r
[root@k8s-master ~]# yum -y install docker-ce-20.10.10 docker-ce-cli-20.10.10
# Modify the Docker configuration
[root@k8s-master ~]# vim /etc/docker/daemon.json
{
  "data-root": "/var/lib/docker",
  "registry-mirrors": [
      "https://6130e0dd.cf-workers-docker-io-upw.pages.dev",
      "https://docker-mirror-proxy.zhenmourener.workers.dev/"
  ],
  "insecure-registries": [
      "example.com:5000"
  ],
  "live-restore": true,
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "10"
  }
}
[root@k8s-master ~]# systemctl enable docker --now
[root@k8s-master ~]# docker --version
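
Before moving on, it is worth confirming that Docker picked up the systemd cgroup driver, since it must match the kubelet setting configured below:

[root@k8s-master ~]# docker info | grep -i "cgroup driver"   # should print: Cgroup Driver: systemd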

2. Install Kubernetes

Required on all hosts.

[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Versions change frequently, so pin a specific version here
[root@k8s-master ~]# yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
[root@k8s-master ~]# systemctl enable kubelet --now

# Set the kubelet cgroup driver to systemd as well
[root@k8s-master ~]# vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
# Add the following variable on a new line
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart kubelet
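
Note that until kubeadm init (or kubeadm join) has run, kubelet has no cluster configuration and will keep restarting in a loop; this is expected and can be observed with:

[root@k8s-master ~]# systemctl status kubelet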

3. Configure installation sources on Ubuntu 22.04

# 1. apt sources
root@k8s-master1:~# cat > /etc/apt/sources.list <<EOF
deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
EOF

root@k8s-master1:~# apt update && apt -y install ca-certificates gnupg lsb-release
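
On a fresh Ubuntu 22.04 system the /etc/apt/keyrings directory used below may not exist yet; creating it first is a harmless safeguard:

root@k8s-master1:~# install -m 0755 -d /etc/apt/keyrings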

# 2. Docker repository
root@k8s-master1:~# curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
root@k8s-master1:~# echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# 3. Kubernetes repository
root@k8s-master1:~# curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/k8s.gpg
root@k8s-master1:~# echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/k8s.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"  | sudo tee /etc/apt/sources.list.d/k8s.list > /dev/null

root@k8s-master1:~# apt update

4. Initialize the cluster

Run this only on the master node; the apiserver address here must be changed to your own master's address.

[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=10.0.0.11 \
--control-plane-endpoint=apiserver.cluster.local:6443 \
--apiserver-bind-port=6443 \
--kubernetes-version=v1.23.0 \
--pod-network-cidr=192.168.0.0/16 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--ignore-preflight-errors=swap

The following is the prompt output after running the initialization:

..........................................................

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

######## This command is used to join additional control-plane (master) nodes
  kubeadm join apiserver.cluster.local:6443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:23e4b3729d998e3a97d3dd72989080572a0e5ca9e9a2cd708b5a8cc7bfd09f36 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

####### This command is used to join worker nodes
kubeadm join apiserver.cluster.local:6443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:23e4b3729d998e3a97d3dd72989080572a0e5ca9e9a2cd708b5a8cc7bfd09f36

Follow the instructions in the output:

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Enable kubectl command completion
[root@k8s-master ~]# source /usr/share/bash-completion/bash_completion
[root@k8s-master ~]# source <(kubectl completion bash)
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

5. Join the nodes to the cluster

[root@k8s-node01 ~]# kubeadm join apiserver.cluster.local:6443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:23e4b3729d998e3a97d3dd72989080572a0e5ca9e9a2cd708b5a8cc7bfd09f36

[root@k8s-node02 ~]# kubeadm join apiserver.cluster.local:6443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:23e4b3729d998e3a97d3dd72989080572a0e5ca9e9a2cd708b5a8cc7bfd09f36

If the join information printed above is accidentally lost, it can be recovered with the following commands.

# List tokens (they are kept for 24 hours by default)
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
9037x2.tcaqnpaqkra9vsbw   22h       2019-09-08T20:58:34+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Get the SHA-256 hash of the CA certificate's public key:

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'

# The resulting certificate hash
23e4b3729d998e3a97d3dd72989080572a0e5ca9e9a2cd708b5a8cc7bfd09f36

The assembled join command:

[root@k8s-node02 ~]# kubeadm join apiserver.cluster.local:6443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:23e4b3729d998e3a97d3dd72989080572a0e5ca9e9a2cd708b5a8cc7bfd09f36

If the token has expired, simply create a new one with:

[root@k8s-master ~]# kubeadm token create

# Or use this command to print the complete join command directly
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join apiserver.cluster.local:6443 --token zlsg61.dmqsi74zgrspsm68 --discovery-token-ca-cert-hash sha256:1ebf8fd56529b55e7f02712643962bda685052b15858facbccbfb8f96d5b9714 

III. Installing the Network Plugin

1. Install Calico

[root@k8s-master ~]# wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate
[root@k8s-master ~]# kubectl apply -f calico.yaml
[root@k8s-master ~]# kubectl get pods -A
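
Once all the calico Pods are Running, the nodes should transition to Ready:

[root@k8s-master ~]# kubectl get nodes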

IV. Verifying Cluster Functionality

1. Create an Nginx service

[root@k8s-master ~]# vim nginx-ds.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    k8s-app: nginx
spec:
  type: NodePort
  selector:
    k8s-app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30019
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    k8s-app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: nginx
  template:
    metadata:
      labels:
        k8s-app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      restartPolicy: Always
[root@k8s-master ~]# kubectl apply -f nginx-ds.yaml
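
Once the Pods are Running, the Deployment and Service can be checked and the NodePort reached from any node IP (the addresses below are this cluster's; adjust to your own environment):

[root@k8s-master ~]# kubectl get deploy,svc,pods -l k8s-app=nginx
[root@k8s-master ~]# curl -I http://10.0.0.11:30019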

(Screenshot: the Nginx test page opened in a browser)
