Deploying Kubernetes + Calico on the Loongson (龙芯) platform

I. Install Docker
Detailed steps are available at:
http://doc.loongnix.org/web/#/50?page_id=148
Install it from the command line:
yum install docker-ce -y
Start the service:
systemctl start docker.service
Check the version:
docker version
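It is also worth enabling the service at boot so Docker comes back after a reboot (the deployment script at the end of this document does the same):
systemctl enable docker.service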

II. Deploy Kubernetes
Detailed steps are available at:
http://doc.loongnix.org/web/#/71?page_id=232

1. Download the packages
The following packages are needed on both the master and the node:
kubeadm-1.18.3-0.lns7.mips64el.rpm
kubectl-1.18.3-0.lns7.mips64el.rpm
kubelet-1.18.3-0.lns7.mips64el.rpm
kubernetes-cni-0.8.6-0.lns7.mips64el.rpm
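
For example, the packages can be fetched with wget from the Loongnix repository (the same URLs are used by the script at the end of this document):
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubeadm-1.18.3-0.lns7.mips64el.rpm
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubectl-1.18.3-0.lns7.mips64el.rpm
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubelet-1.18.3-0.lns7.mips64el.rpm
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubernetes-cni-0.8.6-0.lns7.mips64el.rpm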

2. Pull the images

docker pull loongnixk8s/node:v3.13.2
docker pull loongnixk8s/cni:v3.13.2
docker pull loongnixk8s/pod2daemon-flexvol:v3.13.2
docker pull loongnixk8s/kube-controllers:v3.13.2
docker pull loongnixk8s/kube-apiserver-mips64le:v1.18.3
docker pull loongnixk8s/kube-controller-manager-mips64le:v1.18.3
docker pull loongnixk8s/kube-proxy-mips64le:v1.18.3
docker pull loongnixk8s/kube-scheduler-mips64le:v1.18.3
docker pull loongnixk8s/pause:3.2
docker pull loongnixk8s/coredns:1.6.5
docker pull loongnixk8s/etcd:3.3.12
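
Once the pulls finish, the presence of the images can be checked with, for example:
docker images | grep loongnixk8s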

3. In the /etc/hosts file, add the physical IP and hostname of the master and the node (example below):

10.130.0.125 master001
10.130.0.71 node001

Set the content of the /etc/hostname file on the master node to: master001
Set the content of the /etc/hostname file on the node to: node001
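
Alternatively (an equivalent approach, not part of the original steps), the hostname can be applied immediately with hostnamectl:
hostnamectl set-hostname master001    # run on the master node
hostnamectl set-hostname node001      # run on the node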

4. Install the packages

[root@master001 ~]# cd /etc/kubernetes 
[root@master001 kubernetes]# ls | grep rpm
kubeadm-1.18.3-0.mips64el.rpm
kubectl-1.18.3-0.mips64el.rpm
kubelet-1.18.3-0.mips64el.rpm
kubernetes-cni-0.8.6-0.mips64el.rpm
[root@master001 kubernetes]# rpm -ivh *.rpm

5. Disable the firewall, swap, and SELinux
Flush the firewall rules and check the result:
iptables -F && iptables -X && iptables -Z && iptables -L && systemctl stop iptables && systemctl status iptables
Turn off swap with the following two commands:
swapoff -a
sed -i -e /swap/d /etc/fstab
Disable SELinux with the following two commands:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
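
Optionally, the result can be verified before continuing, for example:
getenforce    # should print Permissive (or Disabled after a reboot)
free -m       # the Swap line should show 0 total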

The steps above must be executed on both the master and the node; the steps below are executed on the master only.

6. Prepare the kubeadm configuration file
(1) Generate a configuration template with the following command:
kubeadm config print init-defaults > init_default.yaml
Locate the corresponding fields in init_default.yaml and change them so that they match the current deployment environment and version, as shown below:

localAPIEndpoint:
  advertiseAddress: 10.130.0.125   # the master node's host IP
  bindPort: 6443
........
imageRepository: loongnixk8s       # the private registry / image repository prefix
kind: ClusterConfiguration
kubernetesVersion: v1.18.3         # the Kubernetes version being deployed
networking:
  dnsDomain: cluster.local

(2) Run the following command to list the images required by this kubeadm configuration:

[root@master001 kubernetes]# kubeadm config images list --config init_default.yaml
loongnixk8s/kube-apiserver:v1.18.3
loongnixk8s/kube-controller-manager:v1.18.3
loongnixk8s/kube-scheduler:v1.18.3
loongnixk8s/kube-proxy:v1.18.3
loongnixk8s/pause:3.2
loongnixk8s/etcd:3.4.3-0
loongnixk8s/coredns:1.6.7

(3) Retag the local images so that their names match what kubeadm expects:

docker tag loongnixk8s/kube-apiserver-mips64le:v1.18.3 loongnixk8s/kube-apiserver:v1.18.3
docker tag loongnixk8s/kube-controller-manager-mips64le:v1.18.3 loongnixk8s/kube-controller-manager:v1.18.3
docker tag loongnixk8s/kube-scheduler-mips64le:v1.18.3 loongnixk8s/kube-scheduler:v1.18.3
docker tag loongnixk8s/kube-proxy-mips64le:v1.18.3 loongnixk8s/kube-proxy:v1.18.3
docker tag loongnixk8s/pause:3.2 loongnixk8s/pause:3.2
docker tag loongnixk8s/etcd:3.3.12 loongnixk8s/etcd:3.4.3-0
docker tag loongnixk8s/coredns:1.6.5 loongnixk8s/coredns:1.6.7

7. Prepare the Calico configuration file
Fetch the official Calico manifest:
curl https://docs.projectcalico.org/archive/v3.13/manifests/calico.yaml -O
Edit the image fields in calico.yaml so that the image names match the local images:

        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: loongnixk8s/cni:v3.13.2   # keep consistent with the private registry prefix
--
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: loongnixk8s/cni:v3.13.2   # keep consistent with the private registry prefix
--
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: loongnixk8s/pod2daemon-flexvol:v3.13.2   # keep consistent with the private registry prefix
--
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: loongnixk8s/node:v3.13.2   # keep consistent with the private registry prefix
--
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: loongnixk8s/kube-controllers:v3.13.2   # keep consistent with the private registry prefix

Apply the manifest (note: run this only after the master node has been initialized in step 8 below, since kubectl needs a running API server):

kubectl apply -f calico.yaml
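
Once applied, the Calico pods can be watched until they reach the Running state, for example:
kubectl get pods -n kube-system -w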

8. Initialize the master node
(1) Run kubeadm init with the prepared configuration file:

[root@master001 kubernetes]#  kubeadm init --config=init_default.yaml

The terminal output looks like this:

W0702 10:54:50.953310   24907 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [bogon kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.130.0.125]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [bogon localhost] and IPs [10.130.0.125 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [bogon localhost] and IPs [10.130.0.125 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0702 10:56:52.414997   24907 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0702 10:56:52.418399   24907 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 43.010877 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node bogon as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node bogon as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.130.0.125:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975
Note: if initialization fails, you can run kubeadm reset to reset kubeadm (this deletes the files and node state that were created) and then try again.

(2) After initialization completes, run the following commands to copy the kubeconfig into place:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Note: if you run the initialization again, delete the $HOME/.kube directory first, otherwise these commands will report an error.
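For example, before repeating the initialization:
rm -rf $HOME/.kube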

(3) Check the current state of the master:

[root@master001 kubernetes]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
master001   Ready    master   8m45s   v1.18.3
[root@master001 kubernetes]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                READY   STATUS              RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
kube-system   coredns-889c78476-c5dd7             0/1     Pending             0          8m45s   <none>         master001   <none>           <none>
kube-system   coredns-889c78476-sd9gd             0/1     Pending             0          8m45s   <none>         master001   <none>           <none>
kube-system   etcd-master001                      1/1     Running             0          8m41s   10.130.0.125   master001   <none>           <none>
kube-system   kube-apiserver-master001            1/1     Running             0          8m41s   10.130.0.125   master001   <none>           <none>
kube-system   kube-controller-manager-master001   1/1     Running             0          8m41s   10.130.0.125   master001   <none>           <none>
kube-system   kube-proxy-dzzc9                    1/1     Running             0          8m45s   10.130.0.125   master001   <none>           <none>
kube-system   kube-scheduler-master001            1/1     Running             0          8m41s   10.130.0.125   master001   <none>           <none>
At this point the master node is deployed (the coredns pods stay Pending until the Calico network from step 7 is applied) and worker nodes can be added.

9. Join the node to the cluster
(1) On the node, run the following join command (the token below is the one generated by the master initialization in step 8(1)):

kubeadm join 10.130.0.125:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975

If the node cannot join, the token may have expired. On the master, run kubeadm token create --print-join-command to generate a new join command, then run the printed command on the worker node.
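For example, on the master (the printed token and hash will differ in each environment):
kubeadm token create --print-join-command
# output has the form:
# kubeadm join 10.130.0.125:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<new-hash>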

The terminal output on the node looks like this:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

(2) Check whether the node joined the cluster successfully.
On the master, run kubectl get nodes; the output is shown below:

[root@master001 kubernetes]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
master001   Ready    master   11m   v1.18.3
node001     Ready    <none>   12s   v1.18.3

(3) Check the pod information from the master terminal.
The command and its output are shown below:

[root@master001 kubernetes]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-66dc75b87-lgqvn   1/1     Running   0          3m59s   192.168.152.129   node001     <none>           <none>
kube-system   calico-node-lxr6t                         1/1     Running   0          3m59s   10.130.0.125      master001   <none>           <none>
kube-system   calico-node-sqhq8                         1/1     Running   0          3m59s   10.130.0.71       node001     <none>           <none>
kube-system   coredns-889c78476-c5dd7                   1/1     Running   0          16m     192.168.163.66    master001   <none>           <none>
kube-system   coredns-889c78476-sd9gd                   1/1     Running   0          16m     192.168.163.64    master001   <none>           <none>
kube-system   etcd-master001                            1/1     Running   0          15m     10.130.0.125      master001   <none>           <none>
kube-system   kube-apiserver-master001                  1/1     Running   0          15m     10.130.0.125      master001   <none>           <none>
kube-system   kube-controller-manager-master001         1/1     Running   0          15m     10.130.0.125      master001   <none>           <none>
kube-system   kube-proxy-dzzc9                          1/1     Running   0          16m     10.130.0.125      master001   <none>           <none>
kube-system   kube-proxy-hlv7s                          1/1     Running   0          4m59s   10.130.0.71       node001     <none>           <none>
kube-system   kube-scheduler-master001                  1/1     Running   0          15m     10.130.0.125      master001   <none>           <none>

If the READY and STATUS values of all pods are as shown above, the deployment has succeeded.

III. Testing with an nginx pod

Pull the nginx image on the node:

[root@node001 kubernetes]# docker pull loongnixk8s/nginx:1.17.7

Create the nginx pod on the master.

(1) Create a file nginx.yaml with the following content (adjust as needed):

# API version
apiVersion: apps/v1
# Resource kind, e.g. Pod/ReplicationController/Deployment/Service/Ingress
kind: Deployment
metadata:
  # Name of this resource
  name: nginx-app
spec:
  selector:
    matchLabels:
      # Pod label; a Service's selector must match this when the Service is published
      app: nginx
  # Number of replicas to deploy (set to 2 to match the pod listing shown below)
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # Container configuration; this is a list, so several containers may be defined
      containers:
      # Container name
      - name: nginx
        # Container image
        image: loongnixk8s/nginx:1.17.7
        # Pull the image only if it is not already present locally
        imagePullPolicy: IfNotPresent
        ports:
        # Container port of the pod
        - containerPort: 80

Run the following command in the terminal:

[root@master001 kubernetes]# kubectl apply -f nginx.yaml
deployment.apps/nginx-app created

(2) Check that the pods are running normally.
The command and its output are shown below:

[root@master001 kubernetes]# kubectl get po
NAME                         READY   STATUS    RESTARTS   AGE
nginx-app-74ddf9865c-8fmwb   1/1     Running   0          91s
nginx-app-74ddf9865c-vrgvv   1/1     Running   0          91s

(3) Deploy a Service.
The command and its output are shown below:

[root@master001 kubernetes]# kubectl expose deployment nginx-app --port=88  --target-port=80  --type=NodePort
service/nginx-app exposed

(4) Check the Service.
The command and its output are shown below:

[root@master001 kubernetes]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        116m
nginx-app    NodePort    10.101.225.240   <none>        88:31541/TCP   43s

(5) Access the nginx service.
Here the service is reached through its cluster IP and service port.
The command and its output are shown below:


[root@master001 kubernetes]# curl 10.101.225.240:88
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width:  35em;
margin:  0  auto;
font-family:  Tahoma,  Verdana,  Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working.  Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for  using nginx.</em></p>
</body>
</html>
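
Because the Service type is NodePort, it should also be reachable on any node's IP at the allocated node port (31541 in the output above), for example:
curl 10.130.0.125:31541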

Note: the two-node cluster is now deployed. To join additional nodes to the cluster, run the following command on each new node:

kubeadm join 10.130.0.125:6443  --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975

The script below completes the setup of a single node:

[root@master001 kubernetes]# cat k8s_dep.sh 
#!/bin/bash
#Kubernetes 1.18.3 environment setup (package and image download; applies to both master and node nodes)


#Download the packages (node and master)
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubeadm-1.18.3-0.lns7.mips64el.rpm 
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubectl-1.18.3-0.lns7.mips64el.rpm 
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubelet-1.18.3-0.lns7.mips64el.rpm 
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubernetes-cni-0.8.6-0.lns7.mips64el.rpm 

#Install the packages and dependencies
yum install conntrack socat -y
rpm -ivh kubeadm-1.18.3-0.lns7.mips64el.rpm
rpm -ivh kubectl-1.18.3-0.lns7.mips64el.rpm
rpm -ivh kubernetes-cni-0.8.6-0.lns7.mips64el.rpm
rpm -ivh kubelet-1.18.3-0.lns7.mips64el.rpm


#Install Docker, start it, and enable it at boot
yum install docker-ce -y
systemctl start docker.service
systemctl enable docker.service


#Flush iptables rules
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X


#Turn off swap
swapoff -a
sed -i -e /swap/d /etc/fstab


#Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

#Pull the required images
docker pull loongnixk8s/node:v3.13.2
docker pull loongnixk8s/cni:v3.13.2
docker pull loongnixk8s/pod2daemon-flexvol:v3.13.2
docker pull loongnixk8s/kube-controllers:v3.13.2
docker pull loongnixk8s/kube-apiserver-mips64le:v1.18.3
docker pull loongnixk8s/kube-controller-manager-mips64le:v1.18.3
docker pull loongnixk8s/kube-proxy-mips64le:v1.18.3
docker pull loongnixk8s/kube-scheduler-mips64le:v1.18.3
docker pull loongnixk8s/pause:3.2
docker pull loongnixk8s/coredns:1.6.5
docker pull loongnixk8s/etcd:3.3.12


#Retag the images to match the names kubeadm expects
docker tag loongnixk8s/kube-apiserver-mips64le:v1.18.3 loongnixk8s/kube-apiserver:v1.18.3
docker tag loongnixk8s/kube-controller-manager-mips64le:v1.18.3 loongnixk8s/kube-controller-manager:v1.18.3
docker tag loongnixk8s/kube-scheduler-mips64le:v1.18.3 loongnixk8s/kube-scheduler:v1.18.3
docker tag loongnixk8s/kube-proxy-mips64le:v1.18.3 loongnixk8s/kube-proxy:v1.18.3
docker tag loongnixk8s/pause:3.2 loongnixk8s/pause:3.2
docker tag loongnixk8s/etcd:3.3.12 loongnixk8s/etcd:3.4.3-0
docker tag loongnixk8s/coredns:1.6.5 loongnixk8s/coredns:1.6.7
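
The script only covers package installation, Docker setup, system preparation, and image pulls; the hosts/hostname configuration, kubeadm init/join, and Calico deployment described above still have to be done by hand. It can be run directly on each machine, for example:
bash k8s_dep.sh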