5. Kubernetes v1.11 Binary Deployment (Part 1)

Because of firewall restrictions, some of the images here are borrowed from jicki; nedy.com is my own private registry and is not publicly available.

Part 1: Initialize the Environment

1. Configure hosts

10.180.160.115 kubernetes-115  node

10.180.160.114 kubernetes-114  node

10.180.160.113 kubernetes-113  node, etcd-3

10.180.160.112 kubernetes-112  master, etcd-2

10.180.160.110 kubernetes-110  master, etcd-1

Note: this deployment runs all components locally from binary files.

master: kube-apiserver, kube-controller-manager, kube-scheduler

nodes: kubelet, kube-proxy

2. Disable the swap partition, firewall, and SELinux

# swapoff -a
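`swapoff -a` only disables swap until the next reboot. A minimal sketch of making all three changes persistent (the firewalld/SELinux commands assume a CentOS 7 host, as used in this guide; the file argument on the function exists purely for illustration and dry-run testing):

```shell
#!/bin/sh
# Persistent versions of the three changes (CentOS 7 assumed):
#
#   systemctl stop firewalld && systemctl disable firewalld
#   setenforce 0
#   sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
#
# For swap, also comment the swap entry out of fstab so it stays off
# after a reboot:
disable_swap_in_fstab() {
    fstab=${1:-/etc/fstab}
    # prefix any uncommented line whose type column is "swap" with '#'
    sed -i '/^[^#].*[[:space:]]swap[[:space:]]/s/^/#/' "$fstab"
}
```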

3. Configure kernel parameters on each node so that traffic crossing the bridge also enters the iptables/netfilter framework. Add the following to /etc/sysctl.d/k8s.conf:

# cat <<EOF > /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

vm.swappiness=0

EOF

# sysctl --system

4. Install Docker

Install docker-ce on all servers in advance. The official k8s 1.9 notes state that the highest supported Docker versions are 1.11.2, 1.12.6, 1.13.1, and 17.03.1.

# Add the yum repository

# Install yum-config-manager

yum -y install yum-utils

# Add the repo

yum-config-manager \

    --add-repo \

    https://download.docker.com/linux/centos/docker-ce.repo

# Refresh the repo cache

yum makecache

# List the available versions

yum list docker-ce.x86_64  --showduplicates |sort -r

# Install a specific version; docker-ce 17.03 depends on docker-ce-selinux, which yum cannot resolve directly, so install it first

# wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

# yum install -y  docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

# yum -y install docker-ce-17.03.2.ce

# Verify the installation

docker version

Client:

Version:      17.03.2-ce

API version:  1.27

Go version:  go1.7.5

Git commit:  f5ec1e2

Built:        Tue Jun 27 02:21:36 2017

OS/Arch:      linux/amd64

Modify the Docker configuration

# Add this unit file (if you use this configuration, ignore /usr/lib/systemd/system/docker.service)

vi /etc/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine

Documentation=http://docs.docker.com

After=network.target docker-storage-setup.service

Wants=docker-storage-setup.service

[Service]

Type=notify

Environment=GOTRACEBACK=crash

ExecReload=/bin/kill -s HUP $MAINPID

Delegate=yes

KillMode=process

ExecStart=/usr/bin/dockerd \

          $DOCKER_OPTS \

          $DOCKER_STORAGE_OPTIONS \

          $DOCKER_NETWORK_OPTIONS \

          $DOCKER_DNS_OPTIONS \

          $INSECURE_REGISTRY

LimitNOFILE=1048576

LimitNPROC=1048576

LimitCORE=infinity

TimeoutStartSec=1min

Restart=on-abnormal

[Install]

WantedBy=multi-user.target

# Other configuration changes

# On an older kernel (3.10.x), configure the overlay2 storage driver as follows

vi /etc/docker/daemon.json

{

  "storage-driver": "overlay2",

  "storage-opts": [

    "overlay2.override_kernel_check=true"

  ]

}
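A malformed daemon.json keeps dockerd from starting at all, so it is worth validating the file before restarting the daemon. A small convenience sketch (the helper name is mine; it assumes python3 is available, whose standard library ships a JSON checker):

```shell
#!/bin/sh
# Validate a daemon.json before restarting docker; prints OK or INVALID.
check_daemon_json() {
    f=${1:-/etc/docker/daemon.json}
    if python3 -m json.tool "$f" > /dev/null; then
        echo "OK: $f"
    else
        echo "INVALID: $f"
    fi
}
```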

# mkdir -p /etc/systemd/system/docker.service.d/

# vi /etc/systemd/system/docker.service.d/docker-options.conf

# Add the following (note: each Environment= value must load as a single logical line; a literal line break prevents it from loading. Also mind the version-specific option below, or docker will not start)

# Docker 17.03.x and earlier: use --graph=/opt/docker

# Docker 17.05 and later: use --data-root=/opt/docker

[Service]

Environment="DOCKER_OPTS=--insecure-registry=10.254.0.0/16 \

    --data-root=/opt/docker --log-opt max-size=50m --log-opt max-file=5"

vi /etc/systemd/system/docker.service.d/docker-dns.conf

# Add the following:

[Service]

Environment="DOCKER_DNS_OPTIONS=\

    --dns 10.254.0.2 --dns 114.114.114.114  \

    --dns-search default.svc.cluster.local --dns-search svc.cluster.local  \

    --dns-opt ndots:2 --dns-opt timeout:2 --dns-opt attempts:2"


# Reload the configuration and start docker

systemctl daemon-reload

systemctl start docker

systemctl enable docker

# If startup fails, use journalctl -f -t docker and journalctl -u docker to locate the problem

5. Components that use certificates:

etcd: uses ca.pem, etcd-key.pem, etcd.pem (generated below);

kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem;

kubelet: uses ca.pem;

kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem;

kubectl: uses ca.pem, admin-key.pem, admin.pem;

kube-controller-manager: uses ca-key.pem, ca.pem

Part 2: Create the Certificates

Here we use CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key files.

1. Install cfssl

mkdir -p /opt/local/cfssl && cd /opt/local/cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && mv cfssl_linux-amd64 cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && mv cfssljson_linux-amd64 cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 && mv cfssl-certinfo_linux-amd64 cfssl-certinfo

chmod +x *
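The three downloads above follow one pattern, so they can also be scripted as a loop. A sketch (the function names are mine; the fetch command is a parameter purely so the loop can be dry-run tested — by default it uses wget exactly as in the manual steps):

```shell
#!/bin/sh
# Download the cfssl tools into one directory and mark them executable.
fetch_wget() { wget -O "$1" "$2"; }

install_cfssl() {
    dest=${1:-/opt/local/cfssl}
    fetch=${2:-fetch_wget}
    mkdir -p "$dest"
    for tool in cfssl cfssljson cfssl-certinfo; do
        $fetch "$dest/$tool" "https://pkg.cfssl.org/R1.2/${tool}_linux-amd64"
        chmod +x "$dest/$tool"
    done
}
```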

2. Create the CA certificate configuration

mkdir /opt/ssl && cd /opt/ssl

# the config.json file

vi  config.json

{

  "signing": {

    "default": {

      "expiry": "87600h"

    },

    "profiles": {

      "kubernetes": {

        "usages": [

            "signing",

            "key encipherment",

            "server auth",

            "client auth"

        ],

        "expiry": "87600h"

      }

    }

  }

}

# the csr.json file

vi csr.json

{

  "CN": "kubernetes",

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "ShenZhen",

      "L": "ShenZhen",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

3. Generate the CA certificate and private key

cd /opt/ssl/ && /opt/local/cfssl/cfssl gencert -initca csr.json | /opt/local/cfssl/cfssljson -bare ca

4. Inspect the generated files

[root@kubernetes-110 ssl]# ls -lt

5. Distribute the certificates

# Create the certificate directory

mkdir -p /etc/kubernetes/ssl

# Copy all the files into it

cp *.pem /etc/kubernetes/ssl  && cp ca.csr /etc/kubernetes/ssl

# Copy the files to every k8s machine

scp *.pem *.csr 10.180.160.115:/etc/kubernetes/ssl/

scp *.pem *.csr 10.180.160.114:/etc/kubernetes/ssl/

scp *.pem *.csr 10.180.160.113:/etc/kubernetes/ssl/

scp *.pem *.csr 10.180.160.112:/etc/kubernetes/ssl/
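The four scp commands above can be written as one loop over the node list. A sketch (NODES matches the hosts table in Part 1; the copy command is a parameter only so the loop can be dry-run, e.g. with echo, before touching real machines):

```shell
#!/bin/sh
# Push all certificate files in the current directory to every other node.
NODES="10.180.160.112 10.180.160.113 10.180.160.114 10.180.160.115"

distribute_certs() {
    copy=${1:-scp}   # override with "echo" for a dry run
    for node in $NODES; do
        $copy *.pem *.csr "${node}:/etc/kubernetes/ssl/"
    done
}
```

Run it from /opt/ssl so the `*.pem *.csr` globs match the files generated above.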

Part 3: Create the etcd Cluster

etcd is the most critical component of a k8s cluster: if etcd goes down, the cluster goes down. The latest etcd version supported by k8s v1.11.1 is v3.2.18.

1. Install etcd

Official releases: https://github.com/coreos/etcd/releases

# Download the binary tarball

wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz

tar zxvf etcd-v3.2.18-linux-amd64.tar.gz

cd etcd-v3.2.18-linux-amd64

mv etcd  etcdctl /usr/bin/

2. Create the etcd certificate

The etcd certificate covers three nodes by default. If you may add more etcd nodes later, reserve a few extra IPs in the hosts list here so that new members pass authentication without re-issuing the certificate.

cd /opt/ssl/ && vi etcd-csr.json

{

  "CN": "etcd",

  "hosts": [

    "127.0.0.1",

    "10.180.160.110",

    "10.180.160.112",

    "10.180.160.113"

  ],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "ShenZhen",

      "L": "ShenZhen",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

3. Generate the etcd key

[root@kubernetes-110 ssl]# /opt/local/cfssl/cfssl gencert -ca=/opt/ssl/ca.pem \

  -ca-key=/opt/ssl/ca-key.pem \

  -config=/opt/ssl/config.json \

  -profile=kubernetes etcd-csr.json | /opt/local/cfssl/cfssljson -bare etcd

4. Inspect the generated files

[root@kubernetes-110 ssl]# ls etcd*

etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

5. Check the certificate

[root@kubernetes-110 ssl]# /opt/local/cfssl/cfssl-certinfo -cert etcd.pem

6. Copy to the etcd servers

# etcd-1

cp etcd*.pem /etc/kubernetes/ssl/

# etcd-2

scp etcd*.pem 10.180.160.112:/etc/kubernetes/ssl/

# etcd-3

scp etcd*.pem 10.180.160.113:/etc/kubernetes/ssl/

# If etcd runs as a non-root user, it cannot read the key without this

chmod 644 /etc/kubernetes/ssl/etcd-key.pem

7. Modify the etcd configuration

Since etcd is the most important component, point --data-dir at a dedicated path.

# Create the etcd data directory and set ownership

useradd etcd && mkdir -p /opt/etcd && chown -R etcd:etcd /opt/etcd

-------# etcd-1

vi /etc/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

[Service]

Type=notify

WorkingDirectory=/opt/etcd/

User=etcd

# set GOMAXPROCS to number of processors

ExecStart=/usr/bin/etcd \

  --name=etcd1 \

  --cert-file=/etc/kubernetes/ssl/etcd.pem \

  --key-file=/etc/kubernetes/ssl/etcd-key.pem \

  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \

  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \

  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

  --initial-advertise-peer-urls=https://10.180.160.110:2380 \

  --listen-peer-urls=https://10.180.160.110:2380 \

  --listen-client-urls=https://10.180.160.110:2379,http://127.0.0.1:2379 \

  --advertise-client-urls=https://10.180.160.110:2379 \

  --initial-cluster-token=k8s-etcd-cluster \

  --initial-cluster=etcd1=https://10.180.160.110:2380,etcd2=https://10.180.160.112:2380,etcd3=https://10.180.160.113:2380 \

  --initial-cluster-state=new \

  --data-dir=/opt/etcd/

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

----------# etcd-2

vi /etc/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

[Service]

Type=notify

WorkingDirectory=/opt/etcd/

User=etcd

# set GOMAXPROCS to number of processors

ExecStart=/usr/bin/etcd \

  --name=etcd2 \

  --cert-file=/etc/kubernetes/ssl/etcd.pem \

  --key-file=/etc/kubernetes/ssl/etcd-key.pem \

  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \

  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \

  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

  --initial-advertise-peer-urls=https://10.180.160.112:2380 \

  --listen-peer-urls=https://10.180.160.112:2380 \

  --listen-client-urls=https://10.180.160.112:2379,http://127.0.0.1:2379 \

  --advertise-client-urls=https://10.180.160.112:2379 \

  --initial-cluster-token=k8s-etcd-cluster \

  --initial-cluster=etcd1=https://10.180.160.110:2380,etcd2=https://10.180.160.112:2380,etcd3=https://10.180.160.113:2380 \

  --initial-cluster-state=new \

  --data-dir=/opt/etcd

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

------------# etcd-3

vi /etc/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

[Service]

Type=notify

WorkingDirectory=/opt/etcd/

User=etcd

# set GOMAXPROCS to number of processors

ExecStart=/usr/bin/etcd \

  --name=etcd3 \

  --cert-file=/etc/kubernetes/ssl/etcd.pem \

  --key-file=/etc/kubernetes/ssl/etcd-key.pem \

  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \

  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \

  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

  --initial-advertise-peer-urls=https://10.180.160.113:2380 \

  --listen-peer-urls=https://10.180.160.113:2380 \

  --listen-client-urls=https://10.180.160.113:2379,http://127.0.0.1:2379 \

  --advertise-client-urls=https://10.180.160.113:2379 \

  --initial-cluster-token=k8s-etcd-cluster \

  --initial-cluster=etcd1=https://10.180.160.110:2380,etcd2=https://10.180.160.112:2380,etcd3=https://10.180.160.113:2380 \

  --initial-cluster-state=new \

  --data-dir=/opt/etcd/

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target
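The three unit files above differ only in the member name and IP, so they can be generated from a single template instead of edited by hand three times. A sketch (the function is my own convenience helper; the output path defaults to the systemd location but can point elsewhere for a dry run):

```shell
#!/bin/sh
# Generate an etcd systemd unit for one member of the cluster above.
ETCD_CLUSTER="etcd1=https://10.180.160.110:2380,etcd2=https://10.180.160.112:2380,etcd3=https://10.180.160.113:2380"

gen_etcd_unit() {
    name=$1; ip=$2; out=${3:-/etc/systemd/system/etcd.service}
    cat > "$out" <<EOF
[Unit]
Description=Etcd Server
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/opt/etcd/
User=etcd
ExecStart=/usr/bin/etcd \\
  --name=${name} \\
  --cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls=https://${ip}:2380 \\
  --listen-peer-urls=https://${ip}:2380 \\
  --listen-client-urls=https://${ip}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${ip}:2379 \\
  --initial-cluster-token=k8s-etcd-cluster \\
  --initial-cluster=${ETCD_CLUSTER} \\
  --initial-cluster-state=new \\
  --data-dir=/opt/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
}
# e.g. on node 112:  gen_etcd_unit etcd2 10.180.160.112
```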

8. Start etcd

Start the etcd service on every node at roughly the same time. Do not start one node and wait for it alone: the first member blocks until enough peers join to form a quorum.

systemctl daemon-reload

systemctl enable etcd.service

systemctl start etcd.service

systemctl status etcd.service

# If startup fails, use journalctl -f -t etcd and journalctl -u etcd to locate the problem

9. Verify the etcd cluster status

Check the health of the etcd cluster:

etcdctl --endpoints=https://10.180.160.110:2379,https://10.180.160.112:2379,https://10.180.160.113:2379 \

        --cert-file=/etc/kubernetes/ssl/etcd.pem \

        --ca-file=/etc/kubernetes/ssl/ca.pem \

        --key-file=/etc/kubernetes/ssl/etcd-key.pem \

        cluster-health

member 7db32b9796d28324 is healthy: got healthy result from https://10.180.160.110:2379

member a2ce16cce959e3bc is healthy: got healthy result from https://10.180.160.113:2379

member aa86371c8157d665 is healthy: got healthy result from https://10.180.160.112:2379

List the etcd cluster members:

etcdctl --endpoints=https://10.180.160.110:2379,https://10.180.160.112:2379,https://10.180.160.113:2379 \

        --cert-file=/etc/kubernetes/ssl/etcd.pem \

        --ca-file=/etc/kubernetes/ssl/ca.pem \

        --key-file=/etc/kubernetes/ssl/etcd-key.pem \

        member list

7db32b9796d28324: name=etcd1 peerURLs=https://10.180.160.110:2380 clientURLs=https://10.180.160.110:2379 isLeader=true

a2ce16cce959e3bc: name=etcd3 peerURLs=https://10.180.160.113:2380 clientURLs=https://10.180.160.113:2379 isLeader=false

aa86371c8157d665: name=etcd2 peerURLs=https://10.180.160.112:2380 clientURLs=https://10.180.160.112:2379 isLeader=false
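The long --endpoints string in the commands above is easy to mistype; it can be derived from the node list instead. A small convenience helper of my own (not part of etcdctl):

```shell
#!/bin/sh
# Build the comma-separated --endpoints value from the etcd node IPs.
ETCD_NODES="10.180.160.110 10.180.160.112 10.180.160.113"

etcd_endpoints() {
    # print one https://IP:2379 entry per node, then strip the trailing comma
    printf 'https://%s:2379,' $ETCD_NODES | sed 's/,$//'
}
# usage: etcdctl --endpoints=$(etcd_endpoints) ... cluster-health
```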

--------------------- etcd cluster deployment complete ---------------------
