Quickly Deploying a Kubernetes Cluster with kubeasz

I. kubeasz project repository: https://github.com/easzlab/kubeasz

II. Kubernetes cluster deployment walkthrough

1. OS version

# cat /etc/redhat-release 
CentOS Linux release 7.9.2009 (Core)

2. Machine information

// 3 master nodes
k8s-1 192.168.1.111 etcd kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy docker
k8s-2 192.168.1.112 etcd kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy docker
k8s-3 192.168.1.113 etcd kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy docker
// 3 worker nodes
k8s-4 192.168.1.114 kubelet kube-proxy docker
k8s-5 192.168.1.115 kubelet kube-proxy docker
k8s-6 192.168.1.116 kubelet kube-proxy docker

Here k8s-1 is reused directly as the deployment node, where kubeasz will be installed.
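
Optionally, the hostnames above can be mapped to their IPs in /etc/hosts on each machine for convenience. kubeasz itself addresses nodes by IP, so this is not required; a minimal sketch:

// optional: append name-to-IP mappings on each machine (not required by kubeasz)
# cat >> /etc/hosts <<'EOF'
192.168.1.111 k8s-1
192.168.1.112 k8s-2
192.168.1.113 k8s-3
192.168.1.114 k8s-4
192.168.1.115 k8s-5
192.168.1.116 k8s-6
EOF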

3. Install ansible on k8s-1 using pip

# curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py
# python get-pip.py
# python -m pip install --upgrade "pip < 21.0"
# pip install ansible -i https://mirrors.aliyun.com/pypi/simple/
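
An optional sanity check that pip and ansible were installed correctly (the exact versions depend on the mirror and release used):

// confirm pip and ansible are available before continuing
# pip --version
# ansible --version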

4. Configure passwordless SSH login from k8s-1 to all 6 machines (including k8s-1 itself)

// Generate an SSH key pair on k8s-1
# ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa

// Copy the public key from k8s-1 to all 6 machines (including k8s-1 itself)
# ssh-copy-id root@192.168.1.111
# ssh-copy-id root@192.168.1.112
# ssh-copy-id root@192.168.1.113
# ssh-copy-id root@192.168.1.114
# ssh-copy-id root@192.168.1.115
# ssh-copy-id root@192.168.1.116
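
Passwordless login can be verified with a quick loop before running any playbooks; each command should print the remote hostname without prompting for a password (a minimal sketch):

// verify passwordless SSH to all 6 machines
# for ip in 192.168.1.11{1..6}; do ssh -o BatchMode=yes root@${ip} hostname; done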

5. Download the kubeasz deployment tool on k8s-1

# export release=3.1.0
# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
# chmod +x ./ezdown

// Download the kubeasz code, binaries, and offline images into /etc/kubeasz (this can take a while, depending on network conditions)
# ./ezdown -D

# ls /etc/kubeasz/
ansible.cfg  bin  clusters  docs  down  example  ezctl  ezdown  manifests  pics  playbooks  README.md  roles  tools


# ls /opt/kube/bin/
containerd               ctr      docker-init   etcd            kube-controller-manager  runc
containerd-shim          docker   docker-proxy  etcdctl         kubectl
containerd-shim-runc-v2  dockerd  docker-tag    kube-apiserver  kube-scheduler
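
./ezdown -D also pulls the offline container images into the local Docker engine on k8s-1; a quick way to confirm the download finished (the exact image list depends on the kubeasz release) is:

// confirm the offline images and downloads are present
# docker images
# ls /etc/kubeasz/down/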

6. Generate the configuration files for the k8s-test cluster on k8s-1, to be used when creating the cluster

# cd /etc/kubeasz/
# ./ezctl new k8s-test
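
The generated files are placed under /etc/kubeasz/clusters/k8s-test/: hosts is the ansible inventory and config.yml holds the cluster-wide variables edited in the next step.

# ls /etc/kubeasz/clusters/k8s-test/
config.yml  hosts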

7. Edit the k8s-test cluster configuration on k8s-1: adjust /etc/kubeasz/clusters/k8s-test/hosts and /etc/kubeasz/clusters/k8s-test/config.yml as needed

// Edit /etc/kubeasz/clusters/k8s-test/hosts, mainly the [etcd], [kube_master], and [kube_node] sections
# egrep -v "^$|^#" /etc/kubeasz/clusters/k8s-test/hosts
[etcd]
192.168.1.111
192.168.1.112
192.168.1.113
[kube_master]
192.168.1.111
192.168.1.112
192.168.1.113
[kube_node]
192.168.1.114
192.168.1.115
192.168.1.116
[harbor]
[ex_lb]
[chrony]
[all:vars]
SECURE_PORT="6443"
CONTAINER_RUNTIME="docker"
CLUSTER_NETWORK="flannel"
PROXY_MODE="ipvs"
SERVICE_CIDR="10.68.0.0/16"
CLUSTER_CIDR="172.20.0.0/16"
NODE_PORT_RANGE="30000-32767"
CLUSTER_DNS_DOMAIN="cluster.local"
bin_dir="/opt/kube/bin"
base_dir="/etc/kubeasz"
cluster_dir="{{ base_dir }}/clusters/k8s-test"
ca_dir="/etc/kubernetes/ssl"
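
Before moving on to config.yml, it can be useful to confirm that ansible on k8s-1 can reach every host in the new inventory (assuming the SSH keys from step 4 are in place):

// ad-hoc connectivity check against the generated inventory
# cd /etc/kubeasz/
# ansible -i clusters/k8s-test/hosts all -m ping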

// Edit /etc/kubeasz/clusters/k8s-test/config.yml, mainly settings such as CLUSTER_NAME
# egrep -v "^$|^#" /etc/kubeasz/clusters/k8s-test/config.yml
INSTALL_SOURCE: "online"
OS_HARDEN: false
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"
local_network: "0.0.0.0/0"
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
CLUSTER_NAME: "k8s-test"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
ENABLE_MIRROR_REGISTRY: true
SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
DOCKER_STORAGE_DIR: "/var/lib/docker"
ENABLE_REMOTE_API: false
INSECURE_REG: '["127.0.0.1/8"]'
MASTER_CERT_HOSTS:
  - 192.168.1.111
NODE_CIDR_LEN: 24
KUBELET_ROOT_DIR: "/var/lib/kubelet"
MAX_PODS: 110
KUBE_RESERVED_ENABLED: "yes"
SYS_RESERVED_ENABLED: "no"
BALANCE_ALG: "roundrobin"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"
flannel_offline: "flannel_{{ flannelVer }}.tar"
CALICO_IPV4POOL_IPIP: "Always"
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
CALICO_NETWORKING_BACKEND: "brid"
calico_ver: "v3.15.3"
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
calico_offline: "calico_{{ calico_ver }}.tar"
ETCD_CLUSTER_SIZE: 1
cilium_ver: "v1.4.1"
cilium_offline: "cilium_{{ cilium_ver }}.tar"
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"
OVERLAY_TYPE: "full"
FIREWALL_ENABLE: "true"
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"
dns_install: "yes"
corednsVer: "1.8.0"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.17.0"
LOCAL_DNS_CACHE: "169.254.20.10"
metricsserver_install: "yes"
metricsVer: "v0.3.6"
dashboard_install: "yes"
dashboardVer: "v2.2.0"
dashboardMetricsScraperVer: "v1.0.6"
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443
HARBOR_SELF_SIGNED_CERT: true
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

8. Run ./ezctl setup k8s-test all to install the Kubernetes cluster in one step

#  ./ezctl setup k8s-test all 
ansible-playbook -i clusters/k8s-test/hosts -e @clusters/k8s-test/config.yml  playbooks/90.setup.yml
Installation output omitted...
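
Instead of all, ezctl can also run the installation stage by stage, which makes it easier to troubleshoot a single phase; the available step numbers are listed by ./ezctl help setup, so check that output on your kubeasz version (the example below assumes the 3.1.0 step layout):

// list the supported setup steps, then run an individual stage
# ./ezctl help setup
// for example, run only the system preparation stage
# ./ezctl setup k8s-test 01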

9. Check the status of the Kubernetes cluster after installation

// Information about the deployed cluster
# kubectl get node -o wide
NAME            STATUS                     ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
192.168.1.111   Ready,SchedulingDisabled   master   40m   v1.21.0   192.168.1.111   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.5
192.168.1.112   Ready,SchedulingDisabled   master   41m   v1.21.0   192.168.1.112   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.5
192.168.1.113   Ready,SchedulingDisabled   master   41m   v1.21.0   192.168.1.113   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.5
192.168.1.114   Ready                      node     33m   v1.21.0   192.168.1.114   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.5
192.168.1.115   Ready                      node     32m   v1.21.0   192.168.1.115   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.5
192.168.1.116   Ready                      node     32m   v1.21.0   192.168.1.116   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.5

# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.68.0.1    <none>        443/TCP   45m   <none>

# kubectl get pod -A -o wide
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
kube-system   coredns-74c56d8f8d-pkr6h                     1/1     Running   0          24m   172.20.3.2      192.168.1.114   <none>           <none>
kube-system   dashboard-metrics-scraper-856586f554-pksh6   1/1     Running   0          20m   172.20.3.3      192.168.1.114   <none>           <none>
kube-system   kube-flannel-ds-amd64-8vpqr                  1/1     Running   0          26m   192.168.1.112   192.168.1.112   <none>           <none>
kube-system   kube-flannel-ds-amd64-kxz8w                  1/1     Running   0          26m   192.168.1.116   192.168.1.116   <none>           <none>
kube-system   kube-flannel-ds-amd64-qx7jh                  1/1     Running   0          26m   192.168.1.115   192.168.1.115   <none>           <none>
kube-system   kube-flannel-ds-amd64-r57n8                  1/1     Running   0          26m   192.168.1.111   192.168.1.111   <none>           <none>
kube-system   kube-flannel-ds-amd64-sxsl4                  1/1     Running   0          26m   192.168.1.113   192.168.1.113   <none>           <none>
kube-system   kube-flannel-ds-amd64-wpq24                  1/1     Running   0          26m   192.168.1.114   192.168.1.114   <none>           <none>
kube-system   kubernetes-dashboard-c4ff5556c-t2bhg         1/1     Running   0          20m   172.20.5.2      192.168.1.115   <none>           <none>
kube-system   metrics-server-8568cf894b-td4ch              1/1     Running   0          23m   172.20.4.2      192.168.1.116   <none>           <none>
kube-system   node-local-dns-5lslm                         1/1     Running   0          24m   192.168.1.113   192.168.1.113   <none>           <none>
kube-system   node-local-dns-khp5f                         1/1     Running   0          24m   192.168.1.112   192.168.1.112   <none>           <none>
kube-system   node-local-dns-l9pd9                         1/1     Running   0          24m   192.168.1.116   192.168.1.116   <none>           <none>
kube-system   node-local-dns-ng2lh                         1/1     Running   0          24m   192.168.1.115   192.168.1.115   <none>           <none>
kube-system   node-local-dns-qdrp9                         1/1     Running   0          24m   192.168.1.114   192.168.1.114   <none>           <none>
kube-system   node-local-dns-v4pcc                         1/1     Running   0          24m   192.168.1.111   192.168.1.111   <none>           <none>

# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
KubeDNSUpstream is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns-upstream:dns/proxy
kubernetes-dashboard is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy



// Check the status of the Kubernetes components (almost all of them are managed via systemctl)
// On the master nodes
# systemctl status etcd
# systemctl status kube-apiserver
# systemctl status kube-scheduler
# systemctl status kube-controller-manager
// On both master and node machines
# systemctl status kubelet 
# systemctl status kube-proxy 
# systemctl status docker
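
As a final smoke test, a throwaway Deployment can be created and exposed to verify that scheduling, pod networking, and NodePort access work end to end (nginx-test is just an arbitrary name for this check; remove it afterwards):

// deploy and expose a test workload
# kubectl create deployment nginx-test --image=nginx:1.20
# kubectl expose deployment nginx-test --port=80 --type=NodePort
# kubectl get pod,svc -o wide
// once verified, clean up the test resources
# kubectl delete svc,deployment nginx-test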