Kubernetes 1.9 HA Cluster Installation

Prerequisite Docker Images

The required Docker images are listed below:

Image                                                     Version
gcr.io/google_containers/kube-apiserver-amd64             v1.9.0
gcr.io/google_containers/kube-controller-manager-amd64    v1.9.0
gcr.io/google_containers/kube-scheduler-amd64             v1.9.0
gcr.io/google_containers/etcd-amd64                       3.1.10
gcr.io/google_containers/pause-amd64                      3.0
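
If the nodes cannot pull from gcr.io directly, the images can be pre-pulled on a host that has access and loaded on each kube node; a minimal sketch (the intermediate host and tarball name are assumptions, and the same pattern applies to each image in the table):

# on a host that can reach gcr.io
docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.9.0
docker save gcr.io/google_containers/kube-apiserver-amd64:v1.9.0 -o kube-apiserver-v1.9.0.tar
# copy the tarball to each kube node, then load it there
docker load -i kube-apiserver-v1.9.0.tar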

Initialize Kube Repo

Please make sure you can access the Kubernetes yum repo.

[root@master3 ~]# vi /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
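
A quick way to confirm the repo is reachable before installing anything:

yum makecache fast
yum repolist | grep -i kubernetes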

Install base services

Install Docker

On all of the kube nodes (masters and minions).
As the Kubernetes team suggests, we use Docker 1.12.6 as the container runtime.

[root@master1 kubernetes]# yum list docker --showduplicates |sort -r
 * updates: mirrors.163.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
Installed Packages
 * extras: mirrors.cn99.com
 * epel: mirrors.ustc.edu.cn
docker.x86_64             2:1.12.6-68.gitec8512b.el7.centos              extras 
docker.x86_64             2:1.12.6-68.gitec8512b.el7.centos              @extras
docker.x86_64             2:1.12.6-61.git85d7426.el7.centos              extras 
docker.x86_64             2:1.12.6-55.gitc4618fb.el7.centos              extras 
docker.x86_64             2:1.12.6-48.git0fdc778.el7.centos              extras 
 * base: mirrors.cn99.com
Available Packages
[root@master1 kubernetes]# yum install docker-1.12.6-68

Install Kube base services

On All of the nodes (Masters and Minions)

[root@master1 kubernetes]# yum install -y kubelet kubeadm kubectl

Enable the base services

On All of the nodes (Masters and Minions)

[root@master2 ~]# systemctl enable docker kubelet
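
Docker must be running before the etcd containers in the next step are started; start it and verify the installed version:

systemctl start docker
docker version --format '{{.Server.Version}}'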

Initialize the Etcd cluster

On master1: use Docker to start the independent etcd TLS cluster.

$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
--restart always \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/lib/etcd-cluster:/var/lib/etcd \
-p 4001:4001 \
-p 2380:2380 \
-p 2379:2379 \
--name etcd \
gcr.io/google_containers/etcd-amd64:3.1.10 \
etcd --name=etcd0 \
--advertise-client-urls=http://192.168.0.126:2379,http://192.168.0.126:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--initial-advertise-peer-urls=http://192.168.0.126:2380 \
--listen-peer-urls=http://0.0.0.0:2380 \
--initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
--initial-cluster=etcd0=http://192.168.0.126:2380,etcd1=http://192.168.0.115:2380,etcd2=http://192.168.0.120:2380 \
--initial-cluster-state=new \
--auto-tls \
--peer-auto-tls \
--data-dir=/var/lib/etcd

On master2: use Docker to start the independent etcd TLS cluster.

$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
--restart always \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/lib/etcd-cluster:/var/lib/etcd \
-p 4001:4001 \
-p 2380:2380 \
-p 2379:2379 \
--name etcd \
gcr.io/google_containers/etcd-amd64:3.1.10 \
etcd --name=etcd1 \
--advertise-client-urls=http://192.168.0.115:2379,http://192.168.0.115:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--initial-advertise-peer-urls=http://192.168.0.115:2380 \
--listen-peer-urls=http://0.0.0.0:2380 \
--initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
--initial-cluster=etcd0=http://192.168.0.126:2380,etcd1=http://192.168.0.115:2380,etcd2=http://192.168.0.120:2380 \
--initial-cluster-state=new \
--auto-tls \
--peer-auto-tls \
--data-dir=/var/lib/etcd

On master3: use Docker to start the independent etcd TLS cluster.

$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
--restart always \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/lib/etcd-cluster:/var/lib/etcd \
-p 4001:4001 \
-p 2380:2380 \
-p 2379:2379 \
--name etcd \
gcr.io/google_containers/etcd-amd64:3.1.10 \
etcd --name=etcd2 \
--advertise-client-urls=http://192.168.0.120:2379,http://192.168.0.120:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--initial-advertise-peer-urls=http://192.168.0.120:2380 \
--listen-peer-urls=http://0.0.0.0:2380 \
--initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
--initial-cluster=etcd0=http://192.168.0.126:2380,etcd1=http://192.168.0.115:2380,etcd2=http://192.168.0.120:2380 \
--initial-cluster-state=new \
--auto-tls \
--peer-auto-tls \
--data-dir=/var/lib/etcd

Check the Etcd Cluster Status
On master1, master2, and master3: check the etcd cluster health.
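
The checks below are run from inside the etcd container (hence the / # prompt); one way to get a shell in it, using the container name etcd from the run commands above:

docker exec -it etcd sh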

/ # etcdctl member list
297d1ff1dc29240c: name=etcd0 peerURLs=http://192.168.0.126:2380 clientURLs=http://192.168.0.126:2379,http://192.168.0.126:4001 isLeader=true
d48aba7028627b7f: name=etcd1 peerURLs=http://192.168.0.115:2380 clientURLs=http://192.168.0.115:2379,http://192.168.0.115:4001 isLeader=false
e59f962e7b521e05: name=etcd2 peerURLs=http://192.168.0.120:2380 clientURLs=http://192.168.0.120:2379,http://192.168.0.120:4001 isLeader=false
/ # 
/ # etcdctl cluster-health
member 297d1ff1dc29240c is healthy: got healthy result from http://192.168.0.126:2379
member d48aba7028627b7f is healthy: got healthy result from http://192.168.0.115:2379
member e59f962e7b521e05 is healthy: got healthy result from http://192.168.0.120:2379
cluster is healthy
/ # 

kubeadm init

Create Kube Init Config File

#vi kube-init-1.9.yml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.9.0
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- master1
- master2
- master3
- 192.168.0.126
- 192.168.0.115
- 192.168.0.120
- 192.168.0.254
- 192.168.0.137
etcd:
  endpoints:
  - http://192.168.0.126:2379
  - http://192.168.0.115:2379
  - http://192.168.0.120:2379

Please Pay Attention:

  • 192.168.0.126, 192.168.0.115, 192.168.0.120 are the IP addresses of Master Nodes.
  • 192.168.0.254 and 192.168.0.137 are the candidate HA (load balancer) IPs; 192.168.0.137 is used later as the Nginx proxy address.

On Master1

Switch off Swap

Since Kubernetes 1.8, swap must be turned off, otherwise the kubelet service will be unable to start.
Alternatively, we can work around this by adding the startup parameter --fail-swap-on=false to kubelet.
Here, we turn off swap.
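
For reference, a sketch of the alternative that keeps swap on; the drop-in path below is the one installed by the kubeadm RPMs, so verify it on your system before editing:

# add to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
# then reload and restart the kubelet
systemctl daemon-reload && systemctl restart kubelet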

swapoff -a

Modify /etc/fstab to comment out the swap auto-mount, then confirm the change with free -m.

[root@master1 kubernetes]# vi /etc/fstab 
...
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0
...

Modify /etc/sysctl.d/k8s.conf to set the swappiness:

vm.swappiness=0

Execute the command below to apply the setting.

sysctl -p /etc/sysctl.d/k8s.conf
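
To confirm swap is really off:

free -m      # the Swap row should show 0 total
swapon -s    # should list no swap devices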

Initialize the Master with kubeadm

#kubeadm init --config=kube-init-1.9.yml
[root@master1 cluster]# kubeadm init --config=kube-init-1.9.yml 
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3] and IPs [10.96.0.1 192.168.0.126 192.168.0.126 192.168.0.115 192.168.0.120 192.168.0.254 192.168.0.137]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 28.001202 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master1 as master by adding a label and a taint
[markmaster] Master master1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 27d64f.5ddc7dcb9c98cf62
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 27d64f.5ddc7dcb9c98cf62 192.168.0.126:6443 --discovery-token-ca-cert-hash sha256:8d8a0ae49e2d2ab9cfe0bf4596bbde894c6279e59f13e054333cb0c3e368027d

[root@master1 cluster]# 

Set the $KUBECONFIG environment variable so that kubectl can connect to the API server.

[root@master1 ~]# vi .bash_profile
......
export KUBECONFIG=/etc/kubernetes/admin.conf
......
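
Reload the profile and verify that kubectl can reach the API server:

source ~/.bash_profile
kubectl get nodes
kubectl get componentstatuses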

Change the Master's Admission Control
The NodeRestriction admission controller will prevent the other masters from joining the cluster, so we remove it from the admission-control list.

[root@master1 ~]#vi /etc/kubernetes/manifests/kube-apiserver.yaml
#    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota

Install the Flannel Network Addon

Install the network addon, otherwise the kube-dns pod will stay in ContainerCreating status. Here we choose Flannel as the addon.

[root@master1 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
[root@master1 ~]#

Check the Pods on Master1
It takes about 3 minutes to pull the Flannel image and start the pod.

[root@master1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   kube-apiserver-master1            1/1       Running   0          14h       192.168.0.126   master1
kube-system   kube-controller-manager-master1   1/1       Running   0          14h       192.168.0.126   master1
kube-system   kube-dns-6f4fd4bdf-p4lsg          3/3       Running   0          14h       10.244.0.2      master1
kube-system   kube-flannel-ds-qrblx             1/1       Running   0          27m       192.168.0.126   master1
kube-system   kube-proxy-qmnz2                  1/1       Running   0          14h       192.168.0.126   master1
kube-system   kube-scheduler-master1            1/1       Running   0          14h       192.168.0.126   master1

Install Kube Dashboard

Install Dashboard webUI
On master1, install the dashboard web UI addon.
The kubernetes-dashboard.yaml file comes from https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

[root@master1 kube-ui]# kubectl apply -f kubernetes-dashboard.yaml 
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@master1 kube-ui]# 

PAY ATTENTION PLEASE! The Service type needs to be changed to NodePort, and a nodePort must be assigned to the dashboard service, as below.

# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30070
  selector:
    k8s-app: kubernetes-dashboard
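
After editing the Service section, re-apply the file and confirm the NodePort:

kubectl apply -f kubernetes-dashboard.yaml
kubectl -n kube-system get svc kubernetes-dashboard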

Remove the Master Taint
To allow the master to schedule pods:

[root@master1 kube-ui]# kubectl taint nodes --all node-role.kubernetes.io/master-
node "master1" untainted
[root@master1 kube-ui]# 
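
Verify that the taint is gone:

kubectl describe node master1 | grep -i taints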

Start the Heapster/InfluxDB Addon

[root@master1 kube-ui]# kubectl apply -f influxdb/
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
[root@master1 kube-ui]# 
[root@master1 kube-ui]# kubectl apply -f heapster-rbac.yaml 
clusterrolebinding "heapster" created
[root@master1 kube-ui]# 

Create Kubernetes Dashboard Admin Account

Create Account Yaml File

[root@master1 kube-ui]# vi kube-dashboard-admin.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-ui-admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-ui-admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: kube-ui-admin
  namespace: kube-system

Apply this Account with Kubectl

[root@master1 kube-ui]# kubectl create -f kube-dashboard-admin.yaml 
serviceaccount "kube-ui-admin" created
clusterrolebinding "kube-ui-admin" created
[root@master1 kube-ui]# 

Check the Account's token info

[root@master1 kube-ui]# kubectl -n kube-system get secret|grep kube-ui-admin-token
kube-ui-admin-token-4mdqs                        kubernetes.io/service-account-token   3         32s
[root@master1 kube-ui]# 
[root@master1 kube-ui]# kubectl -n kube-system describe secret kube-ui-admin-token-4mdqs
Name:         kube-ui-admin-token-4mdqs
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kube-ui-admin
              kubernetes.io/service-account.uid=9a1eae79-e538-11e7-bbbd-000c291f00ea

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlLXVpLWFkbWluLXRva2VuLTRtZHFzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmUtdWktYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5YTFlYWU3OS1lNTM4LTExZTctYmJiZC0wMDBjMjkxZjAwZWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZS11aS1hZG1pbiJ9.mO-QgbhBiTw_n0Z2ypbobGE-MxXNC7E0RUT1mt50f1VkZ-JcdjAayFy4BLzQW7RtPC0I5H4x9uPv3WJLyIPYf_WbXdfbMiMCIo9OeLK3BmwPeqEyRWzv0X2FYuyVbCjsg-RM-mAtyu5TqX-IGZYyIBABZoSNZHDI3RsQvk9BWCkraz1vM640GRngLew8MYWmgzKjOON0Czl18i-6sEWTwlVGQqHIJWeT-RKFmORGd-yJTa9tN2C8mZWyZum1w0jCEdlryeUCL7FN4hjiKfURH6i6e1hB2mbb96sBVeN4DMcbLlhktzHYMbYdKYWj3jQ01vkdIt6BkLJMvSKS0wM9qg
ca.crt:     1025 bytes
[root@master1 kube-ui]#

Open The Dashboard in Browser

(Screenshot: Kubernetes dashboard login page)

Open https://<node-ip>:30070/ in a browser (the NodePort assigned above), input the token, and the dashboard page will load.


(Screenshot: Kubernetes dashboard overview page)

Launch Master2 & Master3

Turn off Swap

swapoff -a

Modify /etc/fstab to comment out the swap auto-mount, then confirm the change with free -m.

[root@master1 kubernetes]# vi /etc/fstab 
...
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0
...

Modify /etc/sysctl.d/k8s.conf to set the swappiness:

vm.swappiness=0

Execute the command below to apply the setting.

sysctl -p /etc/sysctl.d/k8s.conf

Copy the Certs & Manifests to master2 and master3

SCP the kubernetes files to master2 and master3

[root@master1 ~]# scp -r /etc/kubernetes/ master2:/etc/
root@master2's password: 
kube-controller-manager.yaml                                                                                                                       100% 2232     6.1MB/s   00:00    
kube-scheduler.yaml                                                                                                                                100%  991     3.6MB/s   00:00    
kube-apiserver.yaml                                                                                                                                100% 2662     8.1MB/s   00:00    
ca.key                                                                                                                                             100% 1675     5.7MB/s   00:00    
ca.crt                                                                                                                                             100% 1025     3.9MB/s   00:00    
apiserver.key                                                                                                                                      100% 1679     6.0MB/s   00:00    
apiserver.crt                                                                                                                                      100% 1302     4.4MB/s   00:00    
apiserver-kubelet-client.key                                                                                                                       100% 1679     5.9MB/s   00:00    
apiserver-kubelet-client.crt                                                                                                                       100% 1099     2.9MB/s   00:00    
sa.key                                                                                                                                             100% 1679     4.6MB/s   00:00    
sa.pub                                                                                                                                             100%  451     1.2MB/s   00:00    
front-proxy-ca.key                                                                                                                                 100% 1679     6.1MB/s   00:00    
front-proxy-ca.crt                                                                                                                                 100% 1025     3.9MB/s   00:00    
front-proxy-client.key                                                                                                                             100% 1679     6.3MB/s   00:00    
front-proxy-client.crt                                                                                                                             100% 1050     4.1MB/s   00:00    
admin.conf                                                                                                                                         100% 5453    13.5MB/s   00:00    
kubelet.conf                                                                                                                                       100% 5461    14.5MB/s   00:00    
controller-manager.conf                                                                                                                            100% 5485    15.6MB/s   00:00    
scheduler.conf                                                                                                                                     100% 5433    16.2MB/s   00:00    
api_pwd.csv                                                                                                                                        100%   19    76.7KB/s   00:00    
[root@master1 ~]# 
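
Repeat the copy for master3:

scp -r /etc/kubernetes/ master3:/etc/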

Tune the parameters

Do the same steps on master2 and master3, changing the server and advertise-address values to each node's own IP address.

[root@master3 ~]# cd /etc/kubernetes/manifests/
[root@master3 manifests]# vi kube-apiserver.yaml 
...
- --advertise-address=192.168.0.120
...
[root@master3 ~]# cd /etc/kubernetes
[root@master3 kubernetes]# vi admin.conf 
...
    server: https://192.168.0.120:6443
...
[root@master3 kubernetes]# vi controller-manager.conf 
...
    server: https://192.168.0.120:6443
...
[root@master3 kubernetes]# vi scheduler.conf 
...
    server: https://192.168.0.120:6443
...
[root@master3 kubernetes]# vi kubelet.conf 
...
    server: https://192.168.0.120:6443
...
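
On master2 the same files point to its own address (192.168.0.115); a sed sketch that changes all four kubeconfig files at once, assuming they still contain master1's address from the copy:

cd /etc/kubernetes
sed -i 's/192.168.0.126:6443/192.168.0.115:6443/g' admin.conf controller-manager.conf scheduler.conf kubelet.conf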

Enable the Manifests on Master2 and Master3

Reload the configuration and restart kubelet to bring up the kube services from the static pod manifests.

[root@master2 kubernetes]# systemctl daemon-reload && systemctl restart kubelet
[root@master3 kubernetes]# systemctl daemon-reload && systemctl restart kubelet
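
Once kubelet restarts, the static pod manifests are picked up. Check that each master now runs its own apiserver, controller-manager, and scheduler:

kubectl get pods -n kube-system -o wide | grep -E 'apiserver|controller|scheduler'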

Add the Kube Admin Conf to the Environment.

[root@master3 ~]# vi .bash_profile 
...
export KUBECONFIG=/etc/kubernetes/admin.conf
...

Install Nginx as HA Proxy for Master Nodes

Nginx Centos7 Installation (YUM)

[root@GitLab ~]# rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

[root@GitLab ~]# yum install nginx
[root@GitLab ~]# systemctl enable nginx
[root@GitLab ~]# systemctl start nginx

Change the Parameters

Edit nginx.conf, delete the http section, and add the stream settings as below:

   stream {
       upstream kube_apiserver {
            least_conn;
            server 192.168.0.126:6443;
            server 192.168.0.115:6443;
            server 192.168.0.120:6443;
        }
        upstream kube_server {
            least_conn;
           server 192.168.0.126:30070;
           server 192.168.0.115:30070;
           server 192.168.0.120:30070;
        }
        server {
            listen 0.0.0.0:6443;
            proxy_pass kube_apiserver;
            proxy_timeout 10m;
            proxy_connect_timeout 1s;
        }
        server {
            listen 0.0.0.0:30070;
            proxy_pass kube_server;
            proxy_timeout 10m;
            proxy_connect_timeout 1s;
        }
    }

Also change the worker_processes value to auto (the default is 1), and add the lines below to the events section.

multi_accept on;
use epoll;
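
Test the configuration and reload Nginx:

nginx -t
systemctl reload nginx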

At this point, our HA kube master nodes are fully set up.

Join Minion Nodes

Do the same preparation steps on all of the minion nodes. Please pay attention: we use Nginx's IP address (192.168.0.137) as the API server address.
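
The join token below differs from the one printed by kubeadm init; bootstrap tokens expire (24 hours by default), so a fresh one may be needed. A sketch of generating a new token and recomputing the CA cert hash on a master:

kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'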

[root@km1 ~]# 
[root@km1 ~]# kubeadm join --token e9e2fe.89d78d9abeb9eb6d 192.168.0.137:6443 --discovery-token-ca-cert-hash sha256:8d8a0ae49e2d2ab9cfe0bf4596bbde894c6279e59f13e054333cb0c3e368027d
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.0.137:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.137:6443"
[discovery] Requesting info from "https://192.168.0.137:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.137:6443"
[discovery] Successfully established connection with API Server "192.168.0.137:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@km1 ~]# 

Check the node status on one of the master nodes.

[root@master1 kuberepo]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
km1       Ready     <none>    13h       v1.9.0
km2       Ready     <none>    13h       v1.9.0
km3       Ready     <none>    13h       v1.9.0
km4       Ready     <none>    13h       v1.9.0
km5       Ready     <none>    13h       v1.9.0
km6       Ready     <none>    13h       v1.9.0
km7       Ready     <none>    12h       v1.9.0
km8       Ready     <none>    12h       v1.9.0
km9       Ready     <none>    12h       v1.9.0
master1   Ready     master    2d        v1.9.0
master2   Ready     <none>    16h       v1.9.0
master3   Ready     <none>    16h       v1.9.0
[root@master1 kuberepo]# 

Create Kube Secrets for the Private Docker Registry

Please pay attention: the docker-server, docker-username, and docker-password values should be replaced with the real server, username, and password for your private Docker registry.

[root@master1 ingress]# kubectl create secret docker-registry dev-sec --docker-server=hub.docker.gemii.cc --docker-username=admin --docker-password=****** --docker-email=xuejin.chen@gemii.cc --namespace=default
secret "dev-sec" created
[root@master1 ingress]# 
[root@master1 ingress]# kubectl create secret docker-registry test-sec --docker-server=hub.docker.gemii.cc --docker-username=admin --docker-password=****** --docker-email=xuejin.chen@gemii.cc --namespace=liz-test
secret "test-sec" created
[root@master1 ingress]# 
[root@master1 ingress]# kubectl create secret docker-registry kube-sec --docker-server=hub.docker.gemii.cc --docker-username=admin --docker-password=****** --docker-email=xuejin.chen@gemii.cc --namespace=kube-system
secret "kube-sec" created
[root@master1 ingress]# 
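
Optionally, instead of listing imagePullSecrets in every pod spec, a secret can be attached to a namespace's default service account; a sketch for the default namespace:

kubectl patch serviceaccount default -n default -p '{"imagePullSecrets": [{"name": "dev-sec"}]}'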