A Foolproof Guide to Building a Kubernetes Cluster (Kubernetes v1.27.3 + Containerd + Calico v3.25)

1. Server Preparation

Prepare three hosts:

IP OS Hostname
172.16.1.101 Ubuntu 22.04 k8s-master
172.16.1.102 Ubuntu 22.04 k8s-worker1
172.16.1.103 Ubuntu 22.04 k8s-worker2
1.1 Configure /etc/hosts and the hostname of each host
cat << EOF | sudo tee -a /etc/hosts
172.16.1.101 k8s-master
172.16.1.102 k8s-worker1
172.16.1.103 k8s-worker2
EOF

# Set the hostname on the master node; on worker nodes, replace it with the matching name above
sudo hostnamectl hostname k8s-master
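
A quick sanity check, assuming the hostnames above: confirm the hostname and that the other nodes resolve by name.

# Run on each node; adjust the target hostnames for the node you are on
hostnamectl status
ping -c 1 k8s-worker1
ping -c 1 k8s-worker2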
1.2 Time synchronization across hosts
sudo apt install -y chrony
sudo systemctl start chrony
sudo systemctl enable chrony
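
To confirm that synchronization is actually working, chrony ships with the chronyc client:

# Both commands should show a reachable upstream time source
chronyc tracking
chronyc sources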
1.3 Firewall settings on each node
sudo ufw disable  && sudo ufw status
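
Disabling ufw is the simplest option. If you would rather keep the firewall on, the ports Kubernetes and Calico need must be opened instead; a sketch for the control-plane node (worker nodes mainly need 10250/tcp, 179/tcp and the NodePort range 30000:32767/tcp):

sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler
sudo ufw allow 179/tcp         # Calico BGP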
1.4 Disable swap
sudo swapoff -a
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
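
Verify that swap is now off: the first command should print nothing, and the Swap line in the second should show 0.

swapon --show
free -h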
1.5 Forwarding IPv4 and letting iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
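
As suggested in the Kubernetes container-runtime docs, verify that the modules are loaded and the sysctl values took effect:

lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward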

2. Install containerd

sudo apt update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y containerd.io

containerd config default | sudo tee /etc/containerd/config.toml > /dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo sed -i 's/registry.k8s.io/registry.aliyuncs.com\/google_containers/g' /etc/containerd/config.toml

sudo systemctl restart containerd
sudo systemctl enable containerd
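
A quick check that containerd is running and that the cgroup driver edit took effect:

sudo systemctl status containerd --no-pager
grep SystemdCgroup /etc/containerd/config.toml   # should print: SystemdCgroup = true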

References:
https://github.com/containerd/containerd/blob/main/docs/getting-started.md
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd

3. Install kubeadm, kubelet, and kubectl

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
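
Note that the plain install above pulls whatever version is newest in the mirror. To match the v1.27.3 used in this guide, you can pin the package versions when installing instead; the 1.27.3-00 revision string below is an assumption, so confirm it first with apt-cache madison.

# Optional: list available versions, then install a pinned one
apt-cache madison kubeadm
sudo apt install -y kubelet=1.27.3-00 kubeadm=1.27.3-00 kubectl=1.27.3-00
kubeadm version && kubectl version --client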

If your hosts are virtual machines, this is a good point to create the remaining hosts by cloning this one and then changing only the IP and hostname, which saves a lot of repetitive work.

4. Initialize the control-plane (master) node

sudo kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.27.3 \
--control-plane-endpoint=k8s-master \
--pod-network-cidr=10.10.0.0/16

When it finishes, the output looks like the following; follow the prompts to complete the remaining steps.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token y24d2k.3prnyxd9ltafe01b \
        --discovery-token-ca-cert-hash sha256:f056a04a1105b98929a005322971bb2060fcfa5c29a04a39bfc9d3d6a5a6523f \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token y24d2k.3prnyxd9ltafe01b \
        --discovery-token-ca-cert-hash sha256:f056a04a1105b98929a005322971bb2060fcfa5c29a04a39bfc9d3d6a5a6523f
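
The token in this output expires after 24 hours by default. If you need the worker join command again later, regenerate it on the master node:

kubeadm token create --print-join-command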

5. Join the worker nodes

$ sudo kubeadm join k8s-master:6443 --token y24d2k.3prnyxd9ltafe01b \
        --discovery-token-ca-cert-hash sha256:f056a04a1105b98929a005322971bb2060fcfa5c29a04a39bfc9d3d6a5a6523f
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Back on the master node, use kubectl to list the nodes in the cluster:

$ kubectl get nodes
NAME          STATUS      ROLES           AGE   VERSION
k8s-master    NotReady    control-plane   45m   v1.27.3
k8s-worker1   NotReady    <none>          44m   v1.27.3
k8s-worker2   NotReady    <none>          44m   v1.27.3

The STATUS is NotReady because no network plugin has been configured for the cluster yet.

6. Configure the network plugin

curl https://docs.tigera.io/archive/v3.25/manifests/calico.yaml -O
sed -i "s#192\.168\.0\.0/16#10\.10\.0\.0/16#" calico.yaml
kubectl apply -f calico.yaml

The CALICO_IPV4POOL_CIDR in calico.yaml must match the pod-network-cidr specified when the cluster was initialized. If your pod-network-cidr happens to be Calico's default of 192.168.0.0/16, no change to calico.yaml is needed.

Also note that if the following two lines are commented out in calico.yaml, you need to uncomment them manually:

- name: CALICO_IPV4POOL_CIDR
  value: "10.10.0.0/16"
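
Before applying the manifest, you can confirm that the CIDR edit (and the uncommenting, if needed) actually took effect:

grep -A1 CALICO_IPV4POOL_CIDR calico.yaml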

After the installation finishes, check the pods in the kube-system namespace; the result looks like this:

$ kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6c99c8747f-wmz9s   1/1     Running   0          17m
calico-node-65kc4                          1/1     Running   0          17m
calico-node-kk75c                          1/1     Running   0          17m
calico-node-qqm8b                          1/1     Running   0          17m
coredns-7bdc4cb885-8d9bq                   1/1     Running   0          48m
coredns-7bdc4cb885-dhxz2                   1/1     Running   0          48m
etcd-master                                1/1     Running   3          48m
kube-apiserver-master                      1/1     Running   3          49m
kube-controller-manager-master             1/1     Running   3          48m
kube-proxy-7pdx5                           1/1     Running   0          47m
kube-proxy-g7h9c                           1/1     Running   0          47m
kube-proxy-l2kqh                           1/1     Running   0          48m
kube-scheduler-master                      1/1     Running   3          48m

Check the cluster nodes again; they are all in the Ready state now:

$ kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master    Ready    control-plane   50m   v1.27.3
worker1   Ready    <none>          49m   v1.27.3
worker2   Ready    <none>          48m   v1.27.3
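
The worker nodes show <none> under ROLES; the role is just a node label, so you can optionally set it for readability (node names as defined earlier in this guide):

kubectl label node k8s-worker1 node-role.kubernetes.io/worker=
kubectl label node k8s-worker2 node-role.kubernetes.io/worker=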

One caveat: if you are using cloud instances, the provider may only open certain ports by default, in which case you may see something like this:

$ kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7bdbfc669-bxn9g   1/1     Running   0          7m53s
calico-node-5m5x8                         0/1     Running   0          7m53s
calico-node-bpphv                         0/1     Running   0          7m53s
calico-node-lbvq8                         0/1     Running   0          7m53s
coredns-5bbd96d687-8xvvq                  1/1     Running   0          16m
coredns-5bbd96d687-pjwrc                  1/1     Running   0          16m
etcd-master.test.com                      1/1     Running   0          16m
kube-apiserver-master.test.com            1/1     Running   0          16m
kube-controller-manager-master.test.com   1/1     Running   0          16m
kube-proxy-5qjvp                          1/1     Running   0          14m
kube-proxy-87bpn                          1/1     Running   0          16m
kube-proxy-bp6zz                          1/1     Running   0          14m
kube-scheduler-master.test.com            1/1     Running   0          16m

In that case, simply open port 179 (the BGP port Calico uses) on all nodes, e.g. in the cloud provider's security group.
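
If you want to confirm that blocked BGP traffic is really the cause, probe TCP port 179 between nodes (IP taken from the table at the top; calico-node stays 0/1 when its BGP sessions cannot be established):

# From a worker node, check whether the master's BGP port is reachable
nc -zv 172.16.1.101 179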


At this point, the Kubernetes cluster is essentially up and running; an Ingress Controller can still be added later.
