Setting up a k8s lab environment

All of the following was done in VMware Workstation 15.

===================CentOS 7 base initialization====================================================

Configure the Aliyun yum repositories:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

yum clean all

yum makecache

yum update -y

Install the base packages:

yum install -y vim telnet net-tools docker ntp yum-utils device-mapper-persistent-data lvm2

Disable the firewall and SELinux:

systemctl stop firewalld.service

systemctl disable firewalld.service

sed -i 's@SELINUX=enforcing@SELINUX=disabled@g' /etc/selinux/config

setenforce 0

getenforce

Tune the kernel parameters:

# Disable swap temporarily

# To disable it permanently, comment out the swap line in /etc/fstab (a sketch follows below)

swapoff -a
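
Making the change permanent means commenting out the swap entry in /etc/fstab. A minimal sketch (not in the original notes; check the file afterwards before rebooting):

sed -ri '/\sswap\s/ s/^[^#]/#&/' /etc/fstab    # prefix any active swap entry with '#'
grep swap /etc/fstab                           # confirm the swap line now starts with '#'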

# Configure the bridge/forwarding parameters, otherwise later steps may fail

cat <<EOF >  /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

vm.swappiness=0

EOF

sysctl --system

# Load the IPVS kernel modules

# They have to be reloaded after a reboot (a sketch for persisting them follows below)

modprobe ip_vs

modprobe ip_vs_rr

modprobe ip_vs_wrr

modprobe ip_vs_sh

modprobe nf_conntrack_ipv4

lsmod | grep ip_vs
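
To avoid reloading the modules by hand after every reboot, one option is a systemd-modules-load drop-in. A sketch (the file name ipvs.conf is arbitrary; assumes systemd-modules-load as shipped with CentOS 7):

cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF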

Set a static IP:

sed -i 's@BOOTPROTO=dhcp@BOOTPROTO=static@g' /etc/sysconfig/network-scripts/ifcfg-ens32

cat <<EOF >> /etc/sysconfig/network-scripts/ifcfg-ens32
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.2
DNS1=192.168.1.2
EOF

cat /etc/sysconfig/network-scripts/ifcfg-ens32

systemctl restart network

Synchronize the time; keeping the cluster nodes' clocks in sync is important:

ntpdate cn.pool.ntp.org
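
A one-off ntpdate will drift again over time. Since the ntp package was installed above, a sketch for keeping the clock synced continuously:

systemctl enable ntpd
systemctl start ntpd
ntpq -p    # list the peers ntpd is syncing against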

===================Docker private registry: self-signed certificate and login authentication====================================================

Set the hostname:

hostname registry.domain.com

echo registry.domain.com > /etc/hostname

Start and enable Docker:

systemctl restart docker

systemctl enable docker

Generate a self-signed certificate:

mkdir -p certs

openssl req -newkey rsa:2048 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt

ll certs/
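
To skip the interactive prompts, the certificate subject can also be supplied on the command line. A sketch (the CN has to match the registry hostname registry.domain.com used below):

openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt \
  -subj "/CN=registry.domain.com"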


Generate the htpasswd authentication file:

mkdir auth

docker run --entrypoint htpasswd registry:2 -Bbn username password  > auth/htpasswd

ls auth


Start the registry:

docker run -d -p 5000:5000 --restart=always --name registry \
  -v `pwd`/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v `pwd`/data:/var/lib/registry \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
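
A quick sanity check that the container is up and the registry answers over TLS with authentication (a sketch run on the registry host itself; /v2/ should return an empty JSON object when the credentials are accepted):

docker ps --filter name=registry
curl -k -u username:password https://127.0.0.1:5000/v2/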

Add a hosts entry so the registry domain resolves:

echo "192.168.1.11 registry.domain.com" >> /etc/hosts

Install our CA certificate on the client:

mkdir -p /etc/docker/certs.d/registry.domain.com:5000

cp certs/domain.crt /etc/docker/certs.d/registry.domain.com:5000/ca.crt

systemctl restart docker

Log in to the registry:

docker login registry.domain.com:5000

Pull an image and push it to the registry:

docker pull busybox

docker tag busybox:latest registry.domain.com:5000/busybox:latest

docker push registry.domain.com:5000/busybox:latest

List the images in the registry:

curl -u username:password -XGET https://registry.domain.com:5000/v2/_catalog -k

curl -u username:password -XGET https://registry.domain.com:5000/v2/busybox/tags/list -k

===================Installing a simple Kubernetes cluster with kubeadm====================================================

192.168.1.12 k8smaster01  software: etcd, k8s master, haproxy, keepalived

192.168.1.13 k8snode01    software: k8s node

Set the hostnames:

(run on k8smaster01)

hostname k8smaster01

echo k8smaster01 > /etc/hostname

(run on k8snode01)

hostname k8snode01

echo k8snode01 > /etc/hostname

(run on all nodes)

Add hosts entries:

cat <<EOF >> /etc/hosts
192.168.1.12 k8smaster01
192.168.1.13 k8snode01
192.168.1.11 registry.domain.com
EOF

Install the Kubernetes components:

(run on all nodes)

yum install -y kubelet kubeadm kubectl ipvsadm

systemctl restart docker

systemctl enable docker

kubelet's cgroup driver has to match Docker's. The check below reports cgroupfs here, so modify the kubeadm drop-in accordingly:
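
The check itself (added here for completeness; assumes Docker is already running):

docker info 2>/dev/null | grep -i cgroup    # expect "Cgroup Driver: cgroupfs"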

vim  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# add the following line

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

(run on k8smaster01)

systemctl enable kubelet && systemctl start kubelet

kubelet fails to start:

systemctl status kubelet shows it as failed.

Dump the logs to a file for inspection: journalctl -xeu kubelet > a

cat a

Jan 16 01:48:55 k8snode01 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a

Jan 16 01:48:55 k8snode01 systemd[1]: Unit kubelet.service entered failed state.

Jan 16 01:48:55 k8snode01 systemd[1]: kubelet.service failed.

Jan 16 01:49:06 k8snode01 systemd[1]: kubelet.service holdoff time over, scheduling restart.

Jan 16 01:49:06 k8snode01 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

-- Subject: Unit kubelet.service has finished shutting down

-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit kubelet.service has finished shutting down.

Jan 16 01:49:06 k8snode01 systemd[1]: Started kubelet: The Kubernetes Node Agent.

-- Subject: Unit kubelet.service has finished start-up

-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit kubelet.service has finished starting up.

--

-- The start-up result is done.

Jan 16 01:49:06 k8snode01 kubelet[8508]: F0116 01:49:06.193236    8508 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory

Jan 16 01:49:06 k8snode01 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a

Jan 16 01:49:06 k8snode01 systemd[1]: Unit kubelet.service entered failed state.

Jan 16 01:49:06 k8snode01 systemd[1]: kubelet.service failed.

A pile of articles online explain that kubelet only starts cleanly after kubeadm init has completed.

For example: https://blog.csdn.net/zzq900503/article/details/81710319

Running kubeadm init then reports another error:

[root@k8snode01 ~]# kubeadm init

I0116 02:00:49.485884    8984 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

I0116 02:00:49.486035    8984 version.go:95] falling back to the local client version: v1.13.2

[init] Using Kubernetes version: v1.13.2

[preflight] Running pre-flight checks

error execution phase preflight: [preflight] Some fatal errors occurred:

[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

The key message here is: [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

It means the host needs at least 2 CPUs, so reconfigure the virtual machine.

With that, the earlier "kubelet cannot start" error can be ignored for now.

Continue with the initialization:

kubeadm init --kubernetes-version=v1.13.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

This reports errors again (k8s.gcr.io is not reachable from this network):

error execution phase preflight: [preflight] Some fatal errors occurred:

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.2: output: Trying to pull repository k8s.gcr.io/kube-apiserver ...

Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: getsockopt: connection refused

, error: exit status 1

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.2: output: Trying to pull repository k8s.gcr.io/kube-controller-manager ...

Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: getsockopt: connection refused

, error: exit status 1

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.2: output: Trying to pull repository k8s.gcr.io/kube-scheduler ...

Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: getsockopt: connection refused

, error: exit status 1

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.2: output: Trying to pull repository k8s.gcr.io/kube-proxy ...

Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: getsockopt: connection refused

, error: exit status 1

[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Trying to pull repository k8s.gcr.io/pause ...

Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: getsockopt: connection refused

, error: exit status 1

[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Trying to pull repository k8s.gcr.io/etcd ...

Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: getsockopt: connection refused

, error: exit status 1

[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Trying to pull repository k8s.gcr.io/coredns ...

Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: getsockopt: connection refused

, error: exit status 1

[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

To sum up, the following images need to be obtained:

kube-apiserver:v1.13.2

kube-controller-manager:v1.13.2

kube-scheduler:v1.13.2

kube-proxy:v1.13.2

pause:3.1

etcd:3.2.24

coredns:1.2.6

How to put together the image pull script:

First, search Docker Hub for the image: docker search kube-apiserver

Many results come back; pick the most widely used one.

INDEX      NAME                                                        DESCRIPTION                                    STARS    OFFICIAL  AUTOMATED

docker.io  docker.io/mirrorgooglecontainers/kube-apiserver-amd64                                                        24                 

docker.io  docker.io/googlecontainer/kube-apiserver                                                                    8                   

docker.io  docker.io/empiregeneral/kube-apiserver-amd64                kube-apiserver-amd64                            3                    [OK]

docker.io  docker.io/mirrorgooglecontainers/kube-apiserver-arm                                                          3                   

docker.io  docker.io/cloudnil/kube-apiserver-amd64                      kubernetes dependency                          2                    [OK]

docker.io  docker.io/graytshirt/kube-apiserver                          Alpine with the kube-apiserver binary          2                   

docker.io  docker.io/keveon/kube-apiserver-amd64                                                                        2                   

docker.io  docker.io/carlziess/kube-apiserver-amd64-v1.11.1            kube-apiserver-amd64-v1.11.1                    1                    [OK]

Change the script to pull from that repository; the script is as follows:

vim pullimages.sh

#!/bin/bash

images=(

kube-apiserver:v1.13.2

kube-controller-manager:v1.13.2

kube-scheduler:v1.13.2

kube-proxy:v1.13.2

pause:3.1

etcd:3.2.24

coredns:1.2.6

)

for imageName in ${images[@]} ; do

docker pull mirrorgooglecontainers/$imageName

docker tag mirrorgooglecontainers/$imageName k8s.gcr.io/$imageName

docker rmi mirrorgooglecontainers/$imageName

done
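
Run the script and confirm that the retagged images are present locally (a small usage sketch):

bash pullimages.sh
docker images | grep k8s.gcr.io

As an aside, kubeadm v1.13 also has an --image-repository flag that can point the image pulls at an alternative registry and avoid the manual retagging; it was not used in this walkthrough.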

After the images have been pulled, run the initialization again.

Success!

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node

as root:

  kubeadm join 192.168.1.12:6443 --token elyo5v.4vbp3yf28l2wylxl --discovery-token-ca-cert-hash sha256:89d8bdf78d478671441437ad11fbcc7ab71fe387c25ad2ce239b5072a6413e5d

Checking again, kubelet on the master is now up and running:

[root@k8smaster01 yum.repos.d]# systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent

  Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)

  Drop-In: /etc/systemd/system/kubelet.service.d

          └─10-kubeadm.conf

  Active: active (running) since Wed 2019-01-16 03:23:57 EST; 1min 44s ago

Get the health status of the components:

[root@k8smaster01 yum.repos.d]#  kubectl get cs

NAME                STATUS    MESSAGE              ERROR

scheduler            Healthy  ok                 

controller-manager  Healthy  ok                 

etcd-0              Healthy  {"health": "true"} 

Check the node status:

[root@k8smaster01 yum.repos.d]# kubectl get nodes

NAME          STATUS    ROLES    AGE    VERSION

k8smaster01  NotReady  master  5m13s  v1.13.2

Install the network plugin (flannel):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
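
It can take a minute for the flannel DaemonSet pods to pull their image and start; a sketch for watching them (assuming the manifest's app=flannel label):

kubectl get pods -n kube-system -l app=flannel -o wide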

kubectl get nodes

NAME      STATUS    ROLES    AGE      VERSION

k8smaster01  Ready    master    33m      v1.13.2

Run the following command to list all running pods in the kube-system namespace, i.e. the system-level pods:

[root@k8smaster01 ~]# kubectl get pods -n kube-system

NAME                                    READY  STATUS    RESTARTS  AGE

coredns-86c58d9df4-4x6fz                1/1    Running  1          23h

coredns-86c58d9df4-9xmp8                1/1    Running  1          23h

etcd-k8smaster01                        1/1    Running  1          23h

kube-apiserver-k8smaster01              1/1    Running  1          23h

kube-controller-manager-k8smaster01    1/1    Running  2          23h

kube-flannel-ds-amd64-n2vlb            1/1    Running  1          23h

kube-flannel-ds-amd64-s7jd2            1/1    Running  1          23h

kube-proxy-h9m8j                        1/1    Running  1          23h

kube-proxy-mzkxp                        1/1    Running  1          23h

kube-scheduler-k8smaster01              1/1    Running  2          23h

Run the following command to list the current namespaces:

kubectl get ns

NAME          STATUS    AGE

default      Active    36m

kube-public  Active    36m

kube-system  Active    36m

Copy the Docker unit file and kubelet sysconfig from the master over to the node:

for i in k8snode01;do scp /usr/lib/systemd/system/docker.service  $i:/usr/lib/systemd/system/;done

for i in k8snode01;do scp /etc/sysconfig/kubelet $i:/etc/sysconfig/;done

(run on k8snode01)

Join the cluster:

kubeadm join 192.168.1.12:6443 --token elyo5v.4vbp3yf28l2wylxl --discovery-token-ca-cert-hash sha256:89d8bdf78d478671441437ad11fbcc7ab71fe387c25ad2ce239b5072a6413e5d

[root@k8smaster01 yum.repos.d]# kubectl get nodes

NAME          STATUS    ROLES    AGE  VERSION

k8smaster01  Ready      master  22m  v1.13.2

k8snode01    NotReady  <none>  12m  v1.13.2

It does not work; the node stays NotReady:

[root@k8snode01 ~]# journalctl -f

-- Logs begin at Wed 2019-01-16 02:05:51 EST. --

Jan 16 03:47:03 k8snode01 kubelet[22009]: E0116 03:47:03.950898  22009 pod_workers.go:190] Error syncing pod 866d373e-1969-11e9-a19f-000c29a0a657 ("kube-proxy-mzkxp_kube-system(866d373e-1969-11e9-a19f-000c29a0a657)"), skipping: failed to "CreatePodSandbox" for "kube-proxy-mzkxp_kube-system(866d373e-1969-11e9-a19f-000c29a0a657)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-proxy-mzkxp_kube-system(866d373e-1969-11e9-a19f-000c29a0a657)\" failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\": Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: getsockopt: connection refused"

Jan 16 03:47:08 k8snode01 kubelet[22009]: W0116 03:47:08.084119  22009 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d

Jan 16 03:47:08 k8snode01 kubelet[22009]: E0116 03:47:08.084309  22009 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Jan 16 03:47:08 k8snode01 kubelet[22009]: E0116 03:47:08.123557  22009 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"

Jan 16 03:47:08 k8snode01 kubelet[22009]: E0116 03:47:08.123603  22009 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"

Jan 16 03:47:08 k8snode01 dockerd-current[21895]: time="2019-01-16T03:47:08.941121235-05:00" level=error msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: getsockopt: connection refused"

Jan 16 03:47:08 k8snode01 kubelet[22009]: E0116 03:47:08.941891  22009 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: getsockopt: connection refused

Jan 16 03:47:08 k8snode01 kubelet[22009]: E0116 03:47:08.941944  22009 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-flannel-ds-amd64-n2vlb_kube-system(866cf8ca-1969-11e9-a19f-000c29a0a657)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: getsockopt: connection refused

Jan 16 03:47:08 k8snode01 kubelet[22009]: E0116 03:47:08.941966  22009 kuberuntime_manager.go:662] createPodSandbox for pod "kube-flannel-ds-amd64-n2vlb_kube-system(866cf8ca-1969-11e9-a19f-000c29a0a657)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: getsockopt: connection refused

Jan 16 03:47:08 k8snode01 kubelet[22009]: E0116 03:47:08.942032  22009 pod_workers.go:190] Error syncing pod 866cf8ca-1969-11e9-a19f-000c29a0a657 ("kube-flannel-ds-amd64-n2vlb_kube-system(866cf8ca-1969-11e9-a19f-000c29a0a657)"), skipping: failed to "CreatePodSandbox" for "kube-flannel-ds-amd64-n2vlb_kube-system(866cf8ca-1969-11e9-a19f-000c29a0a657)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-flannel-ds-amd64-n2vlb_kube-system(866cf8ca-1969-11e9-a19f-000c29a0a657)\" failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\": Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: getsockopt: connection refused"

Jan 16 03:47:13 k8snode01 kubelet[22009]: W0116 03:47:13.086956  22009 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d

Jan 16 03:47:13 k8snode01 kubelet[22009]: E0116 03:47:13.087239  22009 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Jan 16 03:47:15 k8snode01 dockerd-current[21895]: time="2019-01-16T03:47:15.911376186-05:00" level=error msg="Handler for GET /v1.26/images/k8s.gcr.io/pause:3.1/json returned error: No such image: k8s.gcr.io/pause:3.1"

Jan 16 03:47:18 k8snode01 kubelet[22009]: W0116 03:47:18.089527  22009 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d

Jan 16 03:47:18 k8snode01 kubelet[22009]: E0116 03:47:18.090331  22009 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Jan 16 03:47:18 k8snode01 kubelet[22009]: E0116 03:47:18.129802  22009 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"

Jan 16 03:47:18 k8snode01 kubelet[22009]: E0116 03:47:18.130278  22009 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"

The log shows two kinds of errors:

1. Image pulls are failing, mainly for pause and friends. Modify the pull script as follows and run it on the node:

vim pullimages.sh

#!/bin/bash

images=(

pause:3.1

etcd:3.2.24

coredns:1.2.6

)

for imageName in ${images[@]} ; do

docker pull mirrorgooglecontainers/$imageName

docker tag mirrorgooglecontainers/$imageName k8s.gcr.io/$imageName

docker rmi mirrorgooglecontainers/$imageName

done

Check the status again:

[root@k8smaster01 /]# kubectl get nodes

NAME          STATUS  ROLES    AGE  VERSION

k8smaster01  Ready    master  26m  v1.13.2

k8snode01    Ready    <none>  16m  v1.13.2

2. Failed to get system container stats for "/system.slice/kubelet.service"

Cause: a compatibility issue between the Kubernetes and Docker versions.
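
A workaround commonly suggested for this particular warning (treat it as an assumption; it was not applied in this walkthrough) is to tell kubelet explicitly which cgroups the runtime and kubelet themselves live in, via /etc/sysconfig/kubelet:

vim /etc/sysconfig/kubelet

# assumed workaround, not verified in this environment
KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"

systemctl restart kubelet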

===================Deploying a Node.js application to the cluster====================================================

Install the Node.js environment:

curl --silent --location https://rpm.nodesource.com/setup_8.x | bash -

yum install -y nodejs

Create the certificate directory on k8smaster01 and k8snode01:

mkdir -p /etc/docker/certs.d/registry.domain.com:5000

Copy the certificate from the registry host into that directory on k8smaster01 and k8snode01:

scp certs/domain.crt root@192.168.1.12:/etc/docker/certs.d/registry.domain.com:5000/ca.crt

scp certs/domain.crt root@192.168.1.13:/etc/docker/certs.d/registry.domain.com:5000/ca.crt

Log in to the registry on k8smaster01 and k8snode01:

docker login registry.domain.com:5000

Prepare the Node.js image:

mkdir -p /data/hellonode && cd /data/hellonode

[root@k8smaster01 hellonode]# cat Dockerfile

FROM node:8.15.0

EXPOSE 8080

COPY server.js .

CMD node server.js

[root@k8smaster01 hellonode]# cat server.js

var http = require('http');

var handleRequest = function(request, response) {

  console.log('Received request for URL: ' + request.url);

  response.writeHead(200);

  response.end('Hello World!');

};

var www = http.createServer(handleRequest);

www.listen(8080);

[root@k8smaster01 hellonode]#

docker build -t registry.domain.com/hello-node:v1 .

docker tag registry.domain.com/hello-node:v1 registry.domain.com:5000/hello-node:v1

docker push registry.domain.com:5000/hello-node:v1

docker images
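
Optionally, sanity-check the image locally before handing it to Kubernetes (a sketch; the container name hello-node-test is arbitrary):

docker run -d --name hello-node-test -p 8080:8080 registry.domain.com:5000/hello-node:v1
curl http://127.0.0.1:8080/       # should print "Hello World!"
docker rm -f hello-node-test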

kubectl run hello-node --image=registry.domain.com:5000/hello-node:v1 --port=8080

kubectl get pod

We can use the kubectl expose command to publish the Deployment outside the cluster. On bare metal there is no cloud load balancer, so EXTERNAL-IP stays <pending>, but the service is still reachable through the allocated NodePort:

kubectl expose deployment hello-node --type=LoadBalancer

[root@k8smaster01 hellonode]# kubectl get services

NAME        TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)          AGE

hello-node  LoadBalancer  10.102.47.157  <pending>    8080:31769/TCP  14s

kubernetes  ClusterIP      10.96.0.1      <none>        443/TCP          95m

[root@k8smaster01 hellonode]# curl http://192.168.1.13:31769/

Hello World!

Deploying our own application is now complete.

===================Deploying kubernetes-dashboard (supposedly simple, but it still cost a lot of time in pitfalls)====================================================

Official documentation:

https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

Download the manifest:

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

Create it:

kubectl create -f kubernetes-dashboard.yaml

Check:

[root@k8smaster01 ~]# kubectl get pod -n kube-system

NAME                                  READY  STATUS            RESTARTS  AGE

coredns-86c58d9df4-4x6fz              1/1    Running            1          23h

coredns-86c58d9df4-9xmp8              1/1    Running            1          23h

etcd-k8smaster01                      1/1    Running            1          23h

kube-apiserver-k8smaster01            1/1    Running            1          23h

kube-controller-manager-k8smaster01    1/1    Running            2          23h

kube-flannel-ds-amd64-n2vlb            1/1    Running            1          22h

kube-flannel-ds-amd64-s7jd2            1/1    Running            1          22h

kube-proxy-h9m8j                      1/1    Running            1          23h

kube-proxy-mzkxp                      1/1    Running            1          22h

kube-scheduler-k8smaster01            1/1    Running            2          23h

kubernetes-dashboard-57df4db6b-dmh4t  0/1    ImagePullBackOff  0          68s

[root@k8smaster01 ~]# kubectl describe pod  -n kube-system kubernetes-dashboard-57df4db6b-dmh4t

Name:              kubernetes-dashboard-57df4db6b-dmh4t

Namespace:          kube-system

Priority:          0

PriorityClassName:  <none>

Node:              k8snode01/192.168.1.13

Start Time:        Thu, 17 Jan 2019 02:24:39 -0500

Labels:            k8s-app=kubernetes-dashboard

                    pod-template-hash=57df4db6b

Annotations:        <none>

Status:            Pending

IP:                10.244.1.108

Controlled By:      ReplicaSet/kubernetes-dashboard-57df4db6b

Containers:

  kubernetes-dashboard:

    Container ID: 

    Image:        k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

    Image ID:     

    Port:          8443/TCP

    Host Port:    0/TCP

    Args:

      --auto-generate-certificates

    State:          Waiting

      Reason:      ErrImagePull

    Ready:          False

    Restart Count:  0

    Liveness:      http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3

    Environment:    <none>

    Mounts:

      /certs from kubernetes-dashboard-certs (rw)

      /tmp from tmp-volume (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-5bhx5 (ro)

Conditions:

  Type              Status

  Initialized      True

  Ready            False

  ContainersReady  False

  PodScheduled      True

Volumes:

  kubernetes-dashboard-certs:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  kubernetes-dashboard-certs

    Optional:    false

  tmp-volume:

    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)

    Medium: 

  kubernetes-dashboard-token-5bhx5:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  kubernetes-dashboard-token-5bhx5

    Optional:    false

QoS Class:      BestEffort

Node-Selectors:  <none>

Tolerations:    node-role.kubernetes.io/master:NoSchedule

                node.kubernetes.io/not-ready:NoExecute for 300s

                node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type    Reason          Age                From                Message

  ----    ------          ----              ----                -------

  Normal  Scheduled      101s              default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-57df4db6b-dmh4t to k8snode01

  Warning  FailedMount    100s              kubelet, k8snode01  MountVolume.SetUp failed for volume "kubernetes-dashboard-certs" : couldn't propagate object cache: timed out waiting for the condition

  Warning  FailedMount    100s              kubelet, k8snode01  MountVolume.SetUp failed for volume "kubernetes-dashboard-token-5bhx5" : couldn't propagate object cache: timed out waiting for the condition

  Normal  SandboxChanged  52s (x4 over 60s)  kubelet, k8snode01  Pod sandbox changed, it will be killed and re-created.

  Normal  Pulling        50s (x2 over 97s)  kubelet, k8snode01  pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"

  Warning  Failed          14s (x2 over 61s)  kubelet, k8snode01  Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1": rpc error: code = Unknown desc = Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection refused

  Warning  Failed          14s (x2 over 61s)  kubelet, k8snode01  Error: ErrImagePull

  Normal  BackOff        14s (x4 over 58s)  kubelet, k8snode01  Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"

  Warning  Failed          14s (x4 over 58s)  kubelet, k8snode01  Error: ImagePullBackOff

It reports that pulling the k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 image failed.

Use the same approach as before to find an image repository that can actually be pulled:

[root@k8smaster01 ~]# docker search kubernetes-dashboard-amd64

INDEX      NAME                                                          DESCRIPTION                                    STARS    OFFICIAL  AUTOMATED

docker.io  docker.io/googlecontainer/kubernetes-dashboard-amd64                                                          20                 

docker.io  docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64                                                  10                 

Delete the deployment:

kubectl delete -f kubernetes-dashboard.yaml

Change the image repository in the manifest to docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64.

Before:

[root@k8smaster01 ~]# cat kubernetes-dashboard.yaml |grep image

        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

After:

[root@k8smaster01 ~]# cat kubernetes-dashboard.yaml |grep image

        image: docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

Create it again:

kubectl create -f kubernetes-dashboard.yaml

Created successfully!

[root@k8smaster01 ~]# kubectl get pod -n kube-system

NAME                                    READY  STATUS    RESTARTS  AGE

coredns-86c58d9df4-4x6fz                1/1    Running  1          23h

coredns-86c58d9df4-9xmp8                1/1    Running  1          23h

etcd-k8smaster01                        1/1    Running  1          23h

kube-apiserver-k8smaster01              1/1    Running  1          23h

kube-controller-manager-k8smaster01    1/1    Running  2          23h

kube-flannel-ds-amd64-n2vlb            1/1    Running  1          22h

kube-flannel-ds-amd64-s7jd2            1/1    Running  1          23h

kube-proxy-h9m8j                        1/1    Running  1          23h

kube-proxy-mzkxp                        1/1    Running  1          22h

kube-scheduler-k8smaster01              1/1    Running  2          23h

kubernetes-dashboard-54d7877b75-6mn5g  1/1    Running  0          4s

[root@k8smaster01 ~]# kubectl describe pod  -n kube-system kubernetes-dashboard-54d7877b75-6mn5g

Name:              kubernetes-dashboard-54d7877b75-6mn5g

Namespace:          kube-system

Priority:          0

PriorityClassName:  <none>

Node:              k8snode01/192.168.1.13

Start Time:        Thu, 17 Jan 2019 02:33:54 -0500

Labels:            k8s-app=kubernetes-dashboard

                    pod-template-hash=54d7877b75

Annotations:        <none>

Status:            Running

IP:                10.244.1.128

Controlled By:      ReplicaSet/kubernetes-dashboard-54d7877b75

Containers:

  kubernetes-dashboard:

    Container ID:  docker://88519c8e6e59b63ba5c495f708ab831822d1e70cf4ecf802c50b570fc8f8c373

    Image:        docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

    Image ID:      docker-pullable://docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64@sha256:d6b4e5d77c1cdcb54cd5697a9fe164bc08581a7020d6463986fe1366d36060e8

    Port:          8443/TCP

    Host Port:    0/TCP

    Args:

      --auto-generate-certificates

    State:          Running

      Started:      Thu, 17 Jan 2019 02:33:57 -0500

    Ready:          True

    Restart Count:  0

    Liveness:      http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3

    Environment:    <none>

    Mounts:

      /certs from kubernetes-dashboard-certs (rw)

      /tmp from tmp-volume (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-dfmh9 (ro)

Conditions:

  Type              Status

  Initialized      True

  Ready            True

  ContainersReady  True

  PodScheduled      True

Volumes:

  kubernetes-dashboard-certs:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  kubernetes-dashboard-certs

    Optional:    false

  tmp-volume:

    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)

    Medium: 

  kubernetes-dashboard-token-dfmh9:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  kubernetes-dashboard-token-dfmh9

    Optional:    false

QoS Class:      BestEffort

Node-Selectors:  <none>

Tolerations:    node-role.kubernetes.io/master:NoSchedule

                node.kubernetes.io/not-ready:NoExecute for 300s

                node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type    Reason    Age  From                Message

  ----    ------    ----  ----                -------

  Normal  Scheduled  14s  default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-54d7877b75-6mn5g to k8snode01

  Normal  Pulled    12s  kubelet, k8snode01  Container image "docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1" already present on machine

  Normal  Created    12s  kubelet, k8snode01  Created container

  Normal  Started    11s  kubelet, k8snode01  Started container

Tail the dashboard logs:

[root@k8smaster01 ~]# kubectl logs -f  -n kube-system kubernetes-dashboard-54d7877b75-6mn5g

2019/01/17 07:33:57 Starting overwatch

2019/01/17 07:33:57 Using in-cluster config to connect to apiserver

2019/01/17 07:33:57 Using service account token for csrf signing

2019/01/17 07:33:57 Successful initial request to the apiserver, version: v1.13.2

2019/01/17 07:33:57 Generating JWE encryption key

2019/01/17 07:33:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting

2019/01/17 07:33:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system

2019/01/17 07:33:57 Initializing JWE encryption key from synchronized object

2019/01/17 07:33:57 Creating in-cluster Heapster client

2019/01/17 07:33:57 Auto-generating certificates

2019/01/17 07:33:57 Successfully created certificates

2019/01/17 07:33:57 Serving securely on HTTPS port: 8443

Open another terminal:

kubectl proxy --address=192.168.1.12 --disable-filter=true -p 8001  --accept-hosts="^*$"

Access it from a browser:

http://192.168.1.12:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

After a series of attempts and all sorts of fiddling, authentication still would not pass (this cost an entire morning).

Looking at GitHub: https://github.com/kubernetes/dashboard

The master branch is at v1.10.1, followed by v1.10.0 and then v1.8.3. Decided to drop kubernetes-dashboard back to v1.8.3 and try that.

[root@k8smaster01 ~]# kubectl delete -f kubernetes-dashboard.yaml

Before the change:

[root@k8smaster01 ~]# cat kubernetes-dashboard.yaml |grep v1.10.1

        image: docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

After the change:

[root@k8smaster01 ~]# cat kubernetes-dashboard.yaml |grep v1.

        image: docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3

[root@k8smaster01 ~]# kubectl create -f kubernetes-dashboard.yaml

Modify the access permissions (bind the dashboard role binding to cluster-admin):

Before the change:

[root@k8smaster01 ~]# cat kubernetes-dashboard.yaml |egrep "RoleBinding|roleRef" -A 3

kind: RoleBinding

metadata:

  name: kubernetes-dashboard-minimal

  namespace: kube-system

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: Role

  name: kubernetes-dashboard-minimal

After the change:

[root@k8smaster01 ~]# cat kubernetes-dashboard.yaml |egrep "ClusterRoleBinding|roleRef" -A 3

kind: ClusterRoleBinding

metadata:

  name: kubernetes-dashboard-minimal

  namespace: kube-system

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

[root@k8smaster01 ~]# kubectl delete -f kubernetes-dashboard.yaml

[root@k8smaster01 ~]# kubectl create -f kubernetes-dashboard.yaml

Access it from the browser again:

http://192.168.1.12:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Click "Skip" and the Kubernetes cluster dashboard appears.

This only bypasses authentication; if you need real authentication, refer to other documentation.
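
If you do want a proper token login instead of skipping authentication, a commonly used sketch (names are arbitrary; it grants cluster-admin, so only do this in a lab):

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# print the service account's secret, which contains the login token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')

Paste the printed token into the dashboard login page instead of clicking "Skip".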
