Quickly Set Up a k8s Cluster (with kubeadm)

2020-09-03


Environment (at least 3 machines):

  1. k8s-master  CentOS 7.6  192.168.191.133  4 GB RAM, 2 CPUs, 40 GB disk
  2. k8s-node1   CentOS 7.6  192.168.191.134  4 GB RAM, 2 CPUs, 40 GB disk
  3. k8s-node2   CentOS 7.6  192.168.191.135  4 GB RAM, 2 CPUs, 40 GB disk

Deploying Kubernetes with kubeadm

1. Run the following on the master and all node machines
#Set the hostname
[root@localhost ~]# hostnamectl set-hostname k8s-master        #192.168.191.133
[root@localhost ~]# hostnamectl set-hostname k8s-node1         #192.168.191.134
[root@localhost ~]# hostnamectl set-hostname k8s-node2         #192.168.191.135

#Disable the firewall
[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld

#Disable SELinux
[root@k8s-master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@k8s-master ~]# setenforce 0

#Disable swap
[root@k8s-master ~]# swapoff -a          # temporary
[root@k8s-master ~]# vim /etc/fstab        #permanent: comment out the line containing swap

#Map hostnames to IP addresses
[root@k8s-master ~]# vim /etc/hosts
192.168.191.133  k8s-master
192.168.191.134  k8s-node1
192.168.191.135  k8s-node2
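The preparation steps above have to be repeated on every machine. As a sketch, the hosts entries can be added idempotently so that re-running a script never duplicates lines (the `add_host_entry` helper is hypothetical, not part of the original steps; the demo writes to a temp file instead of `/etc/hosts` so it is safe to run anywhere):

```shell
#!/usr/bin/env bash
# Idempotently append "IP hostname" entries to a hosts file.
# Demo writes to a temp file; set HOSTS_FILE=/etc/hosts on the real machines.
HOSTS_FILE="$(mktemp)"

add_host_entry() {
    local ip="$1" name="$2"
    # append only if the hostname is not already listed
    grep -qw "$name" "$HOSTS_FILE" || printf '%s  %s\n' "$ip" "$name" >> "$HOSTS_FILE"
}

add_host_entry 192.168.191.133 k8s-master
add_host_entry 192.168.191.134 k8s-node1
add_host_entry 192.168.191.135 k8s-node2
add_host_entry 192.168.191.133 k8s-master   # re-running adds no duplicate
cat "$HOSTS_FILE"
```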

#Pass bridged IPv4 traffic to iptables chains   (for k8s 1.14)
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8s-master ~]# sysctl --system

Appendix: sysctl tuning and traffic settings for k8s 1.17
vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
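Whichever variant you use, it is worth verifying after `sysctl --system` that the kernel actually picked the values up. Below is a hypothetical checker that parses `key = value` output (the format `sysctl` prints); it is demonstrated against a sample string so it runs without root:

```shell
#!/usr/bin/env bash
# Check that required keys have the expected value in "key = value"
# output. In production, feed it the output of:
#   sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward ...
check_sysctl() {
    local output="$1" key="$2" want="$3"
    local got
    got=$(printf '%s\n' "$output" | awk -F' = ' -v k="$key" '$1==k {print $2}')
    [ "$got" = "$want" ]
}

sample='net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1'

for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward; do
    check_sysctl "$sample" "$key" 1 && echo "$key OK" || echo "$key WRONG"
done
```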

2. Install Docker, kubeadm and kubelet on all nodes

docker 18.06.1
kubeadm 1.14.0
kubelet 1.14.0
kubectl 1.14.0

Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version

Add the Alibaba Cloud Kubernetes yum repo

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet and kubectl

#The latest version is installed by default and versions change frequently, so pin the version number (all nodes must run the same version)
yum -y install kubeadm-1.14.0
yum -y install kubelet-1.14.0
yum -y install kubectl-1.14.0
systemctl enable kubelet        #enabling is enough for now; kubelet crash-loops until kubeadm init/join writes its config
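Since all nodes must run the same version, a quick consistency check over the collected version strings can catch mismatches before they bite. A sketch (the `versions` array stands in for output you would gather from each machine, e.g. via `kubelet --version` over ssh):

```shell
#!/usr/bin/env bash
# Count the distinct version strings among the arguments;
# a result of 1 means every node agrees.
all_same_version() {
    printf '%s\n' "$@" | sort -u | wc -l
}

# hypothetical values gathered from the three machines
versions=(v1.14.0 v1.14.0 v1.14.0)
if [ "$(all_same_version "${versions[@]}")" -eq 1 ]; then
    echo "versions consistent"
else
    echo "version mismatch" >&2
fi
```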

3. Deploy the k8s master

Run the following on the master

#The default image registry k8s.gcr.io is unreachable from mainland China, so point kubeadm at the Alibaba Cloud mirror
[root@k8s-master ~]# kubeadm init \
>   --apiserver-advertise-address=192.168.191.133 \
>   --image-repository registry.aliyuncs.com/google_containers \
>   --kubernetes-version v1.14.0 \
>   --service-cidr=10.1.0.0/16 \
>   --pod-network-cidr=10.244.0.0/16

When the command finishes it prints instructions like the screenshot below. Don't clear the screen yet; copy the output into a text file for safekeeping (it contains the kubeadm join command you will need later).

(screenshot bb1.png: kubeadm init output, including the kubeadm join command)

Follow those instructions:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
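If you saved the `kubeadm init` output to a file as suggested above, the two-line `kubeadm join` command can be recovered later with a one-line grep. A sketch, run here against a sample of the output (token and hash values are the ones from this walkthrough):

```shell
#!/usr/bin/env bash
# Pull the two-line "kubeadm join ..." command out of saved kubeadm init
# output: the line starting with "kubeadm join" plus its continuation line.
extract_join() {
    grep -A1 '^kubeadm join' "$1"
}

# sample of what kubeadm init prints at the end
log=$(mktemp)
cat > "$log" << 'EOF'
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0 \
    --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d
EOF

extract_join "$log"
```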

4. Install a Pod network add-on (CNI, flannel here)
#Pull via the official yaml (slow, and this URL appears to be dead now)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

#The following is faster
Open https://github.com/mrlxxx/kube-flannel.yml in a browser and clone it (downloading the zip also works; note the repository itself is named kube-flannel.yml)
[root@k8s-master ~]# cd kube-flannel.yml && ls        #the clone is a directory, despite the .yml name
[root@k8s-master ~]# grep image kube-flannel.yml      #list the images the yaml needs to pull
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-s390x
        image: quay.io/coreos/flannel:v0.12.0-s390x

#pull the images manually
docker pull quay.io/coreos/flannel:v0.12.0-amd64
docker pull quay.io/coreos/flannel:v0.12.0-arm64
docker pull quay.io/coreos/flannel:v0.12.0-arm
docker pull quay.io/coreos/flannel:v0.12.0-ppc64le
docker pull quay.io/coreos/flannel:v0.12.0-s390x

docker images |grep coreos      #verify the images were pulled

kubectl apply -f kube-flannel.yml        #apply the flannel configuration

kubectl get pods -n kube-system        #see which pods were created
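A practical readiness test is that every pod in `kube-system` reports `Running`. The checker below parses `kubectl get pods`-style output; it is shown against a sample string so it runs without a cluster, but in practice you would pipe the real command into it:

```shell
#!/usr/bin/env bash
# Count the pods NOT in Running state, given "kubectl get pods" output
# on stdin (the header row is skipped).
count_not_running() {
    awk 'NR>1 && $3!="Running"' | wc -l
}

sample='NAME                          READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-bcqsm      1/1     Running   0          5m
kube-flannel-ds-amd64-cc54r   1/1     Running   0          2m
kube-proxy-6dhq8              0/1     Pending   0          5m'

bad=$(printf '%s\n' "$sample" | count_not_running)
echo "pods not Running: $bad"
```

On the real cluster: `kubectl get pods -n kube-system | count_not_running`.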

5. Join the nodes to the cluster

Run the same steps on node1 and node2
(have ready the kubeadm join ... command printed by kubeadm init on the master; note that each cluster's token is unique)

kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0 \
    --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d
#node1 joins the cluster (same steps on node2)
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.14.0

docker pull quay.io/coreos/flannel:v0.12.0-amd64       #pull the flannel image; the version must match the master's

docker pull registry.aliyuncs.com/google_containers/pause:3.1

kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0 \
    --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d

Output like the following means the node joined successfully

[root@k8s-node1 ~]# kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0     --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster's nodes and their status on the master

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   24h   v1.14.0
k8s-node1    Ready    <none>   56m   v1.14.0
k8s-node2    Ready    <none>   16s   v1.14.0

If pulling the network add-on image on a node is slow, save the image on the master, copy it over, and load it on the node

#on the master
[root@k8s-master ~]# docker images        #list all images
[root@k8s-master ~]# docker save -o flannel-v0.12.0-amd64.tar quay.io/coreos/flannel:v0.12.0-amd64      #save the image to a tarball
[root@k8s-master ~]# scp flannel-v0.12.0-amd64.tar k8s-node2:/root/        #copy it to the node
#on the node
[root@k8s-node2 ~]# docker load < flannel-v0.12.0-amd64.tar      #load the image
[root@k8s-node2 ~]# docker images
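The save/copy/load round trip can be scripted for several images at once. The only fiddly part is deriving a tarball name from the image reference, sketched here with a hypothetical `tar_name` helper (the docker/scp commands are only echoed, not run):

```shell
#!/usr/bin/env bash
# Turn an image reference like quay.io/coreos/flannel:v0.12.0-amd64
# into a tarball name like flannel-v0.12.0-amd64.tar
tar_name() {
    local img="${1##*/}"             # drop the registry path -> flannel:v0.12.0-amd64
    printf '%s.tar\n' "${img/:/-}"   # replace the colon      -> flannel-v0.12.0-amd64.tar
}

for image in quay.io/coreos/flannel:v0.12.0-amd64 \
             registry.aliyuncs.com/google_containers/pause:3.1; do
    tarball=$(tar_name "$image")
    echo "would run: docker save -o $tarball $image && scp $tarball k8s-node2:/root/"
done
```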

If a node fails to join the cluster, see this separate write-up: https://www.jianshu.com/p/6a38c100e3d1

6. Test and use the cluster
[root@k8s-master ~]# kubectl create deployment nginx --image=daocloud.io/library/nginx       #create a deployment that pulls the nginx image
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort     #expose nginx via a NodePort service
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc
[root@k8s-master ~]# kubectl get pod,svc -o wide      #see which node nginx runs on and the exposed port
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
pod/nginx-5f965696dd-q5jt5   1/1     Running   0          10m   10.244.1.2   k8s-node1   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        25h     <none>
service/nginx        NodePort    10.1.128.102   <none>        80:31438/TCP   5m33s   app=nginx
#Browse to any node's IP on port 31438; if the nginx welcome page loads from every node, the cluster is working
[root@k8s-master ~]# kubectl get pod nginx-5f965696dd-q5jt5 -o yaml      #inspect the pod's yaml
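Note that the NodePort (31438 above) is allocated randomly by default, so it has to be read from the PORT(S) column of `kubectl get svc` before browsing. A small parsing sketch on a sample value:

```shell
#!/usr/bin/env bash
# Extract the node port from a PORT(S) value like "80:31438/TCP".
node_port() {
    local ports="$1"
    ports="${ports#*:}"           # drop "80:"  -> 31438/TCP
    printf '%s\n' "${ports%%/*}"  # drop "/TCP" -> 31438
}

# sample column value from "kubectl get svc nginx"
node_port "80:31438/TCP"
```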

7. Install the Dashboard (official)

Run the following on the master
1. Prepare the Dashboard image
[root@k8s-master ~]# docker pull tigerfive/kubernetes-dashboard-amd64:v1.10.1
(Save this image and copy it to the other nodes as well, so that if the master fails and the service moves to another machine, it doesn't have to re-pull the image slowly)
2. Prepare the kubernetes-dashboard.yaml file
Copy the file contents from
https://github.com/kubernetes/dashboard/blob/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@k8s-master ~]# vim kubernetes-dashboard.yaml        #run :set paste, press i, then paste the contents
Make the following changes

[root@k8s-master ~]# grep image kubernetes-dashboard.yaml 
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
#replace the default image line with: image: tigerfive/kubernetes-dashboard-amd64:v1.10.1

Then add type: NodePort and a fixed nodePort: 30001 to the Dashboard Service section, so it looks like this:

# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
[root@k8s-master ~]# ss -tunlp | grep 30001     #confirm the port is not already in use
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
[root@k8s-master ~]# kubectl get pod,deployment,svc -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
pod/coredns-8686dcc4fd-bcqsm               1/1     Running   0          26h
pod/coredns-8686dcc4fd-twm87               1/1     Running   0          26h
pod/etcd-k8s-master                        1/1     Running   0          26h
pod/kube-apiserver-k8s-master              1/1     Running   0          26h
pod/kube-controller-manager-k8s-master     1/1     Running   0          26h
pod/kube-flannel-ds-amd64-cc54r            1/1     Running   0          120m
pod/kube-flannel-ds-amd64-mpqv8            1/1     Running   0          176m
pod/kube-flannel-ds-amd64-whlnx            1/1     Running   0          7h43m
pod/kube-proxy-6dhq8                       1/1     Running   0          26h
pod/kube-proxy-cmkbm                       1/1     Running   0          176m
pod/kube-proxy-f9lk5                       1/1     Running   0          120m
pod/kube-scheduler-k8s-master              1/1     Running   0          26h
pod/kubernetes-dashboard-5bbc9b8dd-t7d96   1/1     Running   0          6m16s

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/coredns                2/2     2            2           26h
deployment.extensions/kubernetes-dashboard   1/1     1            1           6m16s

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns               ClusterIP   10.1.0.10      <none>        53/UDP,53/TCP,9153/TCP   26h
service/kubernetes-dashboard   NodePort    10.1.211.192   <none>        443:30001/TCP            6m16s

Open https://192.168.191.133:30001/ in Firefox; the login page appears

(screenshot bb5.png: Dashboard login page)

On the master, create a service account and bind it to the default cluster-admin cluster role

#get the login token with the following commands
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-7692f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: e6acd0af-ee8e-11ea-b326-000c29e3d07b

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNzY5MmYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTZhY2QwYWYtZWU4ZS0xMWVhLWIzMjYtMDAwYzI5ZTNkMDdiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.z65xe57EqDldRijPyux75RsW11oSotEMuH4SchFJt_FtyxmVZcr_WdBbzZd9GwIbOhAFj-Qd5UobcStGPNT1kBuGnfp7fWScFMNXsTTScS_1Oko4hDhqLDCuWdktpwEAAXmE7G5bptrk8GIEiQuj3KFNVh7Oknpl1tTnyeRfHNJO41RKHyV93y46wrpx0z9p8TdEECzNi0Sv73mAEyu1whQ0-btOmyvt1WcRSqbYQfVgRxrR2L0Ri7Cvba1DQDVkp0SZ8FF3ho5cY0whs2ADkNKF43Y-mWppp4l-tul5mh9pG4uSVLPEM9sApybQVlXY8q-6ZTBrU5oqRxRB1GX93g
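The token is a JWT, so its middle (payload) segment is base64url-encoded JSON and can be decoded locally to confirm which service account it belongs to. A sketch with a hypothetical `jwt_payload` helper, demonstrated on a hand-built sample token rather than the real one above (it re-adds the padding `base64 -d` expects):

```shell
#!/usr/bin/env bash
# Decode the payload (second dot-separated segment) of a JWT.
jwt_payload() {
    local seg
    seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
    # restore the base64 padding stripped by the JWT encoding
    while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
    printf '%s' "$seg" | base64 -d
}

# build a sample token: header.payload.signature (signature is fake)
payload='{"sub":"system:serviceaccount:kube-system:dashboard-admin"}'
token="$(printf '%s' '{"alg":"RS256"}' | base64 | tr -d '=\n').$(printf '%s' "$payload" | base64 | tr -d '=\n').sig"

jwt_payload "$token"
```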

Log in to the Dashboard with the token

(screenshot bb6.png: token login)

Done

(screenshot bb7.png: Dashboard overview)

Install Kuboard
安装Kuboard

Kuboard is another free, open-source graphical management UI for k8s, better suited to displaying resources in a microservice architecture.
See the official docs for the full installation procedure: https://kuboard.cn/install/install-dashboard.html#%E5%AE%89%E8%A3%85kuboard
