Setting up the full Kubernetes (k8s) stack from inside China

1. Install Docker

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo  
yum install epel-release 
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io
  • Change Docker's storage driver to devicemapper (restart docker afterwards for the change to take effect):
cat /etc/docker/daemon.json
{
    "storage-driver": "devicemapper"
}
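Since the whole point of this guide is working from inside China, it is also common to add a registry mirror to the same daemon.json. This is a sketch; the mirror URL below is an example, substitute whichever mirror you use:

```json
{
    "storage-driver": "devicemapper",
    "registry-mirrors": ["https://registry.docker-cn.com"]
}
```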

2. Download kubeadm, kubelet and kubectl

  • Use the Aliyun mirror repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • Install via yum
yum install -y kubelet kubeadm kubectl

3. Initialize the master with kubeadm

  • Adjust system parameters
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
swapoff -a
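Note that neither tweak survives a reboot. One way to persist the bridge setting (a sketch; the file name is arbitrary) is a drop-in under /etc/sysctl.d, applied with `sysctl --system`; for swap, comment the swap entry out of /etc/fstab:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```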
  • Initialize
kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers

(Note: you are on your own for troubleshooting whatever errors come up.)

If you want the master to be schedulable, or you are running a single-node deployment, run the following so that pods can be scheduled onto the master node:
kubectl taint node kvm-10-115-40-126 node-role.kubernetes.io/master-
To make the master unschedulable again, run:
kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule
  • Install flannel
mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
The image pull here may fail due to network restrictions, so swap in a mirrored image:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Edit the downloaded kubernetes-dashboard.yaml and change
image: gcr.io/kubernetes-dashboard-amd64:v1.10.1
to
image: loveone/kubernetes-dashboard-amd64:v1.10.1
Modify the Service to expose a NodePort:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32765
  • Access the dashboard
# After creating a user token, visit https://x.x.x.x:32765 and enter the token produced by the steps in the link below
https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
# yml to create the user
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

# yml granting the user the cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
  
# Command to fetch the token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

# Then log in at
https://10.115.40.126:32765
  • Install helm
Official docs (https://helm.sh/docs/using_helm/#installing-helm)
- Install
curl -L https://git.io/get_helm.sh | bash
or download a release package from https://github.com/helm/helm/releases

4. Install slave nodes and join them to the cluster

  • Get the join command from the master
kubeadm token create --print-join-command
  • Run the command returned above on the slave
kubeadm join 10.115.40.126:6443 --token yvfhlq.80ivjc67syz36msc     --discovery-token-ca-cert-hash sha256:670236660c8d4d01b1a4fd6fab178276a05bac550183a6987062fd49a4cd3854
  • Remove a slave
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

5. Day-to-day usage

  • Create a namespace
kind: Namespace
apiVersion: v1
metadata:
  name: ops-prod
  • Create a configmap
# Map the settings config to the opscd/ path: each key is a file name and each value is that file's contents
kubectl create configmap opscd-config --from-file=settings=opscd/ -n ops-prod
  • Use the configmap in a deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opscd
  namespace: ops-prod
  labels:
    app: opscd
spec:
  replicas: 2
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: opscd
  minReadySeconds: 0
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: opscd
    spec:
      containers:
        - name: opscd
          image: 127.0.0.1/ops/opscd:0.4
          # To consume the configmap as environment variables inside the pod:
          # envFrom:
          # - configMapRef:
          #     name: example-configmap
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "256Mi"
              cpu: "300m"
          readinessProbe:
            tcpSocket:
              port: 8001
            initialDelaySeconds: 60
            periodSeconds: 30
          livenessProbe:
            httpGet:
              path: /v1
              port: 8001
            initialDelaySeconds: 60
            periodSeconds: 60
          ports:
            - containerPort: 8001
          volumeMounts:
          - name: settings
            # This mounts the configmap over the whole directory: each key becomes
            # a file name and each value its contents, replacing the directory's
            # contents inside the container.
            mountPath: /app/opscd/config

      volumes:
        - name: settings
          configMap:
            name: opscd-config
  • On hot-reloading configmaps
First, note that after a configmap is updated, Env values are not refreshed; with a file mount, however, the config inside the pod does change shortly afterwards, so all that's left is to restart the service.
1. Option one: run a sidecar (such as configmap-reload) that restarts the service when the files change; combined with the application this is the more flexible approach.
2. Option two: have a controller brute-force a rolling upgrade of the pods; simple and easy (see https://github.com/stakater/Reloader).
Out of curiosity I read its source. Roughly: it uses the Kubernetes go-client to consume configmap events, walks the resources to find those referencing the changed configmap, issues an update on them, and leaves the rest to the k8s Deployment controller.
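For option two, Reloader drives everything off a single annotation (per its README). Applied to the opscd Deployment from earlier, it would look roughly like this, assuming Reloader itself is already installed in the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opscd
  namespace: ops-prod
  annotations:
    # Reloader watches the ConfigMaps/Secrets this Deployment references
    # and triggers a rolling update whenever any of them change.
    reloader.stakater.com/auto: "true"
```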

  • Use resources outside the k8s cluster
# Create an endpoint
apiVersion: v1
kind: Endpoints
metadata:
  name: opsdb-mysql
  namespace: istio-system
subsets:
  - addresses:
    - ip: 10.115.254.14
    ports:
    - port: 3306
      protocol: TCP
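By itself the Endpoints object is not addressable; it is normally paired with a selector-less Service of the same name in the same namespace, so in-cluster clients can reach the external MySQL through the service DNS name (opsdb-mysql.istio-system.svc):

```yaml
# Selector-less Service; kubernetes routes it to the manually created
# Endpoints object above because the names match.
apiVersion: v1
kind: Service
metadata:
  name: opsdb-mysql
  namespace: istio-system
spec:
  ports:
  - port: 3306
    protocol: TCP
```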

6. Create an ingress

  • Create the ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: ingress-nginx
  annotations:
    # The exact meaning and use of this annotation matters; see
    # (https://git.k8s.io/ingress-gce/examples/PREREQUISITES.md#ingress-class)
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.test.com
    http:
      paths:
      - path:
        backend:
          serviceName: myapp
          servicePort: 80
  • Create the ingress-nginx deployment
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
  • Create the ingress-nginx service exposing NodePorts
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
    - name: proxied-tcp-9000
      port: 9000
      targetPort: 9000
      protocol: TCP
      nodePort: 30900
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  • Verify
Add a local hosts entry resolving myapp.test.com, then browse to
http://myapp.test.com:30080/

7. Install the private registry Harbor

  • Offline install
# Installation docs: https://github.com/goharbor/harbor/blob/master/docs/installation_guide.md
  • Push an image to Harbor (run docker login harbor.xxx.xxx first)
docker pull nginx:latest
docker tag nginx:latest harbor.xxx.xxx/ops/nginx:latest
docker push harbor.xxx.xxx/ops/nginx:latest
  • A sample Dockerfile for building an image
cat Dockerfile
FROM python:3.7.3
ADD opscd opscd
ADD manage.py manage.py
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
CMD python3 manage.py runserver 0.0.0.0:8001
EXPOSE 8001

# Build
docker build . -t 127.0.0.1/ops/opscd:ae87986

# Push
docker push 127.0.0.1/ops/opscd:ae87986

# Run the container

docker run 127.0.0.1/ops/opscd:ae87986

# Enter a container
docker exec -it 0383eb8b45d6 /bin/sh

8. Install Rancher

  • Install
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /host/certs:/container/certs \
  -e SSL_CERT_DIR="/container/certs" \
  rancher/rancher:latest
  • Set up
After logging in, follow the guided steps one by one to import your existing cluster.
