I. kubeasz project repository: https://github.com/easzlab/kubeasz
II. Quick deployment of a Kubernetes cluster
1. OS version
# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
2. Machine information
// 3 master nodes
k8s-1 192.168.1.111 etcd kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy docker
k8s-2 192.168.1.112 etcd kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy docker
k8s-3 192.168.1.113 etcd kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy docker
// 3 worker (node) nodes
k8s-4 192.168.1.114 kubelet kube-proxy docker
k8s-5 192.168.1.115 kubelet kube-proxy docker
k8s-6 192.168.1.116 kubelet kube-proxy docker
Here k8s-1 is reused as the deployment node, i.e. kubeasz is installed directly on it.
3. Install ansible on k8s-1 with pip
# curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py
# python get-pip.py
# python -m pip install --upgrade "pip < 21.0"
# pip install ansible -i https://mirrors.aliyun.com/pypi/simple/
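To confirm that ansible was installed correctly, a quick sanity check is to print its version (the exact version shown depends on what pip pulled in):
// Verify the ansible installation
# ansible --version
# pip show ansible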
4. Configure passwordless SSH login from k8s-1 to all 6 machines (including k8s-1 itself)
// Generate an SSH key pair on k8s-1
# ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa
// Copy the public key from k8s-1 to all 6 machines (including k8s-1 itself)
# ssh-copy-id root@192.168.1.111
# ssh-copy-id root@192.168.1.112
# ssh-copy-id root@192.168.1.113
# ssh-copy-id root@192.168.1.114
# ssh-copy-id root@192.168.1.115
# ssh-copy-id root@192.168.1.116
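A quick way to confirm that passwordless login actually works is to run a command on every machine over SSH; the loop below is just a sketch using the IPs listed above (BatchMode makes ssh fail instead of prompting for a password):
// Verify passwordless login from k8s-1 to all 6 machines
# for ip in 192.168.1.111 192.168.1.112 192.168.1.113 192.168.1.114 192.168.1.115 192.168.1.116; do ssh -o BatchMode=yes root@${ip} hostname; done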
5. Download the kubeasz deployment tool on k8s-1
# export release=3.1.0
# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
# chmod +x ./ezdown
// Download the kubeasz code, binaries, and offline images into /etc/kubeasz (this may take a while, depending on network conditions)
# ./ezdown -D
# ls /etc/kubeasz/
ansible.cfg bin clusters docs down example ezctl ezdown manifests pics playbooks README.md roles tools
# ls /opt/kube/bin/
containerd ctr docker-init etcd kube-controller-manager runc
containerd-shim docker docker-proxy etcdctl kubectl
containerd-shim-runc-v2 dockerd docker-tag kube-apiserver kube-scheduler
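Besides the binaries under /opt/kube/bin, ./ezdown -D also pulls the offline container images into the local Docker daemon; listing them is an easy way to confirm the download completed (the exact image names and tags depend on the kubeasz release, so the list will vary):
// Check the offline images pulled by ezdown
# docker images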
6. On k8s-1, generate the configuration files for cluster k8s-test, which will be used to create the cluster later
# cd /etc/kubeasz/
# ./ezctl new k8s-test
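./ezctl new copies the example inventory and config into a per-cluster directory; after this step you should see something like the following (a sketch, the file list may differ slightly between kubeasz versions):
# ls /etc/kubeasz/clusters/k8s-test/
config.yml hosts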
7. On k8s-1, adjust the k8s-test cluster configuration: edit /etc/kubeasz/clusters/k8s-test/hosts and /etc/kubeasz/clusters/k8s-test/config.yml as needed
// Edit /etc/kubeasz/clusters/k8s-test/hosts, mainly the [etcd], [kube_master], and [kube_node] sections
# egrep -v "^$|^#" /etc/kubeasz/clusters/k8s-test/hosts
[etcd]
192.168.1.111
192.168.1.112
192.168.1.113
[kube_master]
192.168.1.111
192.168.1.112
192.168.1.113
[kube_node]
192.168.1.114
192.168.1.115
192.168.1.116
[harbor]
[ex_lb]
[chrony]
[all:vars]
SECURE_PORT="6443"
CONTAINER_RUNTIME="docker"
CLUSTER_NETWORK="flannel"
PROXY_MODE="ipvs"
SERVICE_CIDR="10.68.0.0/16"
CLUSTER_CIDR="172.20.0.0/16"
NODE_PORT_RANGE="30000-32767"
CLUSTER_DNS_DOMAIN="cluster.local"
bin_dir="/opt/kube/bin"
base_dir="/etc/kubeasz"
cluster_dir="{{ base_dir }}/clusters/k8s-test"
ca_dir="/etc/kubernetes/ssl"
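Before running the actual installation, it can be useful to verify that ansible can reach every host in the new inventory (an optional check; run it from /etc/kubeasz):
// Optional: check ansible connectivity against the new inventory
# cd /etc/kubeasz
# ansible -i clusters/k8s-test/hosts all -m ping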
// Edit /etc/kubeasz/clusters/k8s-test/config.yml, mainly fields such as CLUSTER_NAME
# egrep -v "^$|^#" /etc/kubeasz/clusters/k8s-test/config.yml
INSTALL_SOURCE: "online"
OS_HARDEN: false
ntp_servers:
- "ntp1.aliyun.com"
- "time1.cloud.tencent.com"
- "0.cn.pool.ntp.org"
local_network: "0.0.0.0/0"
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
CLUSTER_NAME: "k8s-test"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
ENABLE_MIRROR_REGISTRY: true
SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
DOCKER_STORAGE_DIR: "/var/lib/docker"
ENABLE_REMOTE_API: false
INSECURE_REG: '["127.0.0.1/8"]'
MASTER_CERT_HOSTS:
- 192.168.1.111
NODE_CIDR_LEN: 24
KUBELET_ROOT_DIR: "/var/lib/kubelet"
MAX_PODS: 110
KUBE_RESERVED_ENABLED: "yes"
SYS_RESERVED_ENABLED: "no"
BALANCE_ALG: "roundrobin"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"
flannel_offline: "flannel_{{ flannelVer }}.tar"
CALICO_IPV4POOL_IPIP: "Always"
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
CALICO_NETWORKING_BACKEND: "brid"
calico_ver: "v3.15.3"
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
calico_offline: "calico_{{ calico_ver }}.tar"
ETCD_CLUSTER_SIZE: 1
cilium_ver: "v1.4.1"
cilium_offline: "cilium_{{ cilium_ver }}.tar"
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"
OVERLAY_TYPE: "full"
FIREWALL_ENABLE: "true"
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"
dns_install: "yes"
corednsVer: "1.8.0"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.17.0"
LOCAL_DNS_CACHE: "169.254.20.10"
metricsserver_install: "yes"
metricsVer: "v0.3.6"
dashboard_install: "yes"
dashboardVer: "v2.2.0"
dashboardMetricsScraperVer: "v1.0.6"
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443
HARBOR_SELF_SIGNED_CERT: true
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true
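If only a handful of fields were changed, a quick grep makes it easy to double-check the values that matter most before installing (purely a convenience, not a required step):
# grep -n -E "CLUSTER_NETWORK|SERVICE_CIDR|CLUSTER_CIDR" clusters/k8s-test/hosts
# grep -n -E "CLUSTER_NAME|SANDBOX_IMAGE" clusters/k8s-test/config.yml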
8. Run ./ezctl setup k8s-test all to install the Kubernetes cluster in one go
# ./ezctl setup k8s-test all
ansible-playbook -i clusters/k8s-test/hosts -e @clusters/k8s-test/config.yml playbooks/90.setup.yml
... installation output omitted ...
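Instead of the all-in-one command, kubeasz can also run the installation step by step, which makes troubleshooting easier. The step numbers below follow the usual 3.x layout of ./ezctl help setup; check the help output of your own version before relying on them:
// Optional: run the installation step by step instead of "all"
# ./ezctl help setup
# ./ezctl setup k8s-test 01    // system preparation
# ./ezctl setup k8s-test 02    // etcd cluster
# ./ezctl setup k8s-test 03    // container runtime
# ./ezctl setup k8s-test 04    // kube_master nodes
# ./ezctl setup k8s-test 05    // kube_node nodes
# ./ezctl setup k8s-test 06    // network plugin
# ./ezctl setup k8s-test 07    // cluster add-ons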
9. Check the status of the Kubernetes cluster after installation
// Information about the deployed cluster
# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
192.168.1.111 Ready,SchedulingDisabled master 40m v1.21.0 192.168.1.111 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.5
192.168.1.112 Ready,SchedulingDisabled master 41m v1.21.0 192.168.1.112 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.5
192.168.1.113 Ready,SchedulingDisabled master 41m v1.21.0 192.168.1.113 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.5
192.168.1.114 Ready node 33m v1.21.0 192.168.1.114 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.5
192.168.1.115 Ready node 32m v1.21.0 192.168.1.115 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.5
192.168.1.116 Ready node 32m v1.21.0 192.168.1.116 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.5
# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 45m <none>
# kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-74c56d8f8d-pkr6h 1/1 Running 0 24m 172.20.3.2 192.168.1.114 <none> <none>
kube-system dashboard-metrics-scraper-856586f554-pksh6 1/1 Running 0 20m 172.20.3.3 192.168.1.114 <none> <none>
kube-system kube-flannel-ds-amd64-8vpqr 1/1 Running 0 26m 192.168.1.112 192.168.1.112 <none> <none>
kube-system kube-flannel-ds-amd64-kxz8w 1/1 Running 0 26m 192.168.1.116 192.168.1.116 <none> <none>
kube-system kube-flannel-ds-amd64-qx7jh 1/1 Running 0 26m 192.168.1.115 192.168.1.115 <none> <none>
kube-system kube-flannel-ds-amd64-r57n8 1/1 Running 0 26m 192.168.1.111 192.168.1.111 <none> <none>
kube-system kube-flannel-ds-amd64-sxsl4 1/1 Running 0 26m 192.168.1.113 192.168.1.113 <none> <none>
kube-system kube-flannel-ds-amd64-wpq24 1/1 Running 0 26m 192.168.1.114 192.168.1.114 <none> <none>
kube-system kubernetes-dashboard-c4ff5556c-t2bhg 1/1 Running 0 20m 172.20.5.2 192.168.1.115 <none> <none>
kube-system metrics-server-8568cf894b-td4ch 1/1 Running 0 23m 172.20.4.2 192.168.1.116 <none> <none>
kube-system node-local-dns-5lslm 1/1 Running 0 24m 192.168.1.113 192.168.1.113 <none> <none>
kube-system node-local-dns-khp5f 1/1 Running 0 24m 192.168.1.112 192.168.1.112 <none> <none>
kube-system node-local-dns-l9pd9 1/1 Running 0 24m 192.168.1.116 192.168.1.116 <none> <none>
kube-system node-local-dns-ng2lh 1/1 Running 0 24m 192.168.1.115 192.168.1.115 <none> <none>
kube-system node-local-dns-qdrp9 1/1 Running 0 24m 192.168.1.114 192.168.1.114 <none> <none>
kube-system node-local-dns-v4pcc 1/1 Running 0 24m 192.168.1.111 192.168.1.111 <none> <none>
# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
KubeDNSUpstream is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns-upstream:dns/proxy
kubernetes-dashboard is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
// Check the status of the Kubernetes cluster components (almost all of them are managed via systemctl)
// On the master nodes
# systemctl status etcd
# systemctl status kube-apiserver
# systemctl status kube-scheduler
# systemctl status kube-controller-manager
// On both master and node nodes
# systemctl status kubelet
# systemctl status kube-proxy
# systemctl status docker
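As a final smoke test, you can start a throwaway pod and resolve a service name from inside the cluster (a minimal sketch; the pod names and images are arbitrary, busybox:1.28.4 simply matches busybox_ver above):
// Optional: smoke-test the cluster with temporary pods
# kubectl run test-nginx --image=nginx:alpine
# kubectl get pod -o wide
# kubectl run test-dns --rm -it --restart=Never --image=busybox:1.28.4 -- nslookup kubernetes.default
# kubectl delete pod test-nginx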