Containers and Container Orchestration

Installing Docker

# apt-get install apt-transport-https ca-certificates curl software-properties-common

# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# apt-get update && apt-get install -y docker-ce

[Figure: container breakdown]

[Figure: Kubernetes breakdown (partial)]

Some of the files below can only be downloaded with special ("scientific") network access. Once the official images have been downloaded, push them to a local Docker registry; on later deployments you only need to pull them from there and docker tag them back to the image names referenced in the deployment definitions.
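For example, a minimal push/re-tag flow might look like this (registry.local:5000 is a hypothetical local registry address; the pause image is only an illustration):

# docker pull k8s.gcr.io/pause:3.1
# docker tag k8s.gcr.io/pause:3.1 registry.local:5000/pause:3.1
# docker push registry.local:5000/pause:3.1

/* later, on a machine without external access */
# docker pull registry.local:5000/pause:3.1
# docker tag registry.local:5000/pause:3.1 k8s.gcr.io/pause:3.1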

Installing the Kubernetes components

# systemctl disable firewalld.service

# systemctl stop firewalld.service

# apt-get update && apt-get install -y apt-transport-https

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

deb http://apt.kubernetes.io/ kubernetes-xenial main

EOF

# apt-get update

# apt-get install -y kubelet kubeadm kubectl

Master node

kubeadm init --apiserver-advertise-address <host IP address, e.g. 10.109.181.110> --pod-network-cidr=10.244.0.0/16

kubeadm init prints its progress as it runs; following the instructions in its output, configure kubectl for the current user:

# mkdir -p $HOME/.kube

# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Enable shell completion for kubectl:

# echo "source <(kubectl completion bash)" >> ~/.bashrc

Configure the flannel network:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Worker (slave) nodes

On the master node, obtain the join command with 'kubeadm token create --print-join-command', then run it on each worker node:

# kubeadm join 10.109.181.110:6443 --token ztwxpd.qbp9iaiqsd8v97gg --discovery-token-ca-cert-hash sha256:79ac20fc3f33ab41e23701923f246f997977a70ff3cb40ab10431aee4bf098b3

Node discovery is complete. Check the status of the basic services:
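For example (typical checks; the original shows the output as screenshots):

# kubectl get nodes
# kubectl get pods --all-namespaces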

Installing the dashboard

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Change type: ClusterIP to NodePort in the dashboard Service:

# kubectl --namespace=kube-system edit service kubernetes-dashboard
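In the editor, only the spec.type field needs to change; the relevant part of the Service looks roughly like this (an illustrative excerpt, not the full manifest):

spec:
  ports:
  - port: 443
    targetPort: 8443
  type: NodePort      # changed from ClusterIP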

# kubectl --namespace=kube-system get service kubernetes-dashboard

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.111.96.162   <none>        443:32588/TCP   2d

If this is a test environment and you want to skip kubeconfig- or token-based login, do the following, then open the dashboard and click Skip.

Skip-admin setup (shown as a screenshot in the original):
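The exact command is not preserved in the text; a common way to do this (an assumption, not taken from the original) is to bind cluster-admin to the dashboard's service account:

# kubectl create clusterrolebinding kubernetes-dashboard-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:kubernetes-dashboard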

Use Firefox to open https://<host IP>:32588 and access the dashboard. The login page triggers a security-policy warning; click Advanced and proceed, then Skip. Chrome shows an error that cannot be bypassed, and there is currently no workaround.


Deploying EFK

EFK is really three services: Elasticsearch, Fluentd and Kibana. Together they collect and monitor the logs of container instances and provide a visual interface for more flexible management. The components can be combined freely; for example ELK replaces Fluentd with Logstash for log collection.

# wget  https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml

# wget  https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-service.yaml

# wget  https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml

# wget  https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml

Note: comment out the configuration block starting with 'nodeSelector' in this file (see the snippet below).
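For reference, the block to comment out in fluentd-es-ds.yaml looks roughly like this in the upstream addon (verify against the file you actually downloaded):

      # nodeSelector:
      #   beta.kubernetes.io/fluentd-ds-ready: "true"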

# wget  https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-service.yaml

# wget  https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml

# kubectl create -f .

Check the service status:

# kubectl cluster-info

Kubernetes master is running at https://<host IP address>:6443

Elasticsearch is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy

Kibana is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy

KubeDNS is running at https://<host IP address>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Start a proxy and map it to port 8888 (you can choose any port):

# kubectl proxy --address='0.0.0.0' --port=8888 --accept-hosts='^*$' &

Open the Kibana console at http://<host IP address>:8888/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/management/kibana/index?_g=() for further configuration.

This is mainly the index pattern and related settings; the fluentd-elasticsearch addon writes Logstash-format indices, so an index pattern such as logstash-* with @timestamp as the time field is the usual choice.

Deploying GlusterFS

On all nodes:

# apt-get install software-properties-common

# add-apt-repository ppa:gluster/glusterfs-3.8

# apt-get update && apt-get install glusterfs-server

# mkdir /opt/glusterd

# mkdir /opt/gfs_data

# sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol

# systemctl status glusterfs-server.service

● glusterfs-server.service - LSB: GlusterFS server
   Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
   Active: active (running) since Thu 2018-06-07 07:31:51 UTC; 31min ago
     Docs: man:systemd-sysv-generator(8)
   CGroup: /system.slice/glusterfs-server.service
           └─19538 /usr/sbin/glusterd -p /var/run/glusterd.pid

Jun 07 07:31:49 k8s-cluster-1 systemd[1]: Starting LSB: GlusterFS server...
Jun 07 07:31:49 k8s-cluster-1 glusterfs-server[19528]:  * Starting glusterd service glusterd
Jun 07 07:31:51 k8s-cluster-1 glusterfs-server[19528]:    ...done.
Jun 07 07:31:51 k8s-cluster-1 systemd[1]: Started LSB: GlusterFS server.

On the master node:

Make sure all node names can be resolved:

root@k8s-cluster-1:~/gluster# cat /etc/hosts

10.109.181.110 k8s-cluster-1
10.109.181.117 k8s-cluster-2
10.109.181.119 k8s-cluster-3

root@k8s-cluster-1:~/gluster# gluster peer probe k8s-cluster-2

peer probe: success.

root@k8s-cluster-1:~/gluster# gluster peer probe k8s-cluster-3

peer probe: success.

root@k8s-cluster-1:~/gluster# gluster peer status

Number of Peers: 2

Hostname: k8s-cluster-2
Uuid: d10af069-09f6-4d86-8120-dde1afa4393b
State: Peer in Cluster (Connected)

Hostname: k8s-cluster-3
Uuid: c6d4f3eb-78c5-4b10-927e-f1c6e41330d5
State: Peer in Cluster (Connected)

Create the corresponding endpoints

curl -O https://raw.githubusercontent.com/kubernetes/examples/master/staging/volumes/glusterfs/glusterfs-endpoints.json

Edit the downloaded file so that the addresses and ports match your GlusterFS nodes (the original shows the edited file as a screenshot; a sketch follows).
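Based on the kubectl get ep output below, the subsets section would look roughly like this (a sketch; use the IPs and port of your own GlusterFS nodes):

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs-cluster" },
  "subsets": [
    { "addresses": [{ "ip": "10.109.181.110" }], "ports": [{ "port": 1207 }] },
    { "addresses": [{ "ip": "10.109.181.117" }], "ports": [{ "port": 1207 }] },
    { "addresses": [{ "ip": "10.109.181.119" }], "ports": [{ "port": 1207 }] }
  ]
}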

root@k8s-cluster-1:~/gluster# kubectl apply -f glusterfs-endpoints.json

endpoints "glusterfs-cluster" created

root@k8s-cluster-1:~/gluster# kubectl get ep

NAME                ENDPOINTS                                                      AGE
glusterfs-cluster   10.109.181.110:1207,10.109.181.117:1207,10.109.181.119:1207   5s
influxdb            <none>                                                         16d
kubernetes          10.109.181.110:6443                                            27d

Create the corresponding service

curl -O https://raw.githubusercontent.com/kubernetes/examples/master/staging/volumes/glusterfs/glusterfs-service.json

Edit the service definition accordingly (also shown as a screenshot in the original; a sketch follows).
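The service has no selector, so it only needs a port matching the endpoints above; roughly (a sketch):

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs-cluster" },
  "spec": { "ports": [{ "port": 1207 }] }
}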

root@k8s-cluster-1:~/gluster# kubectl apply -f glusterfs-service.json

service "glusterfs-cluster" created

root@k8s-cluster-1:~/gluster# kubectl get svc

NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
glusterfs-cluster   ClusterIP      10.97.199.53     <none>        1207/TCP         6s
influxdb            LoadBalancer   10.109.218.156   <pending>     8086:31240/TCP   16d
kubernetes          ClusterIP      10.96.0.1        <none>        443/TCP          27d

Create the volume and fine-tune its parameters

# gluster volume create k8s-volume transport tcp k8s-cluster-2:/opt/gfs_data k8s-cluster-3:/opt/gfs_data force

# gluster volume quota k8s-volume enable

# gluster volume quota k8s-volume limit-usage / 1TB

# gluster volume set k8s-volume performance.cache-size 4GB

# gluster volume set k8s-volume performance.io-thread-count 16

# gluster volume set k8s-volume network.ping-timeout 10

# gluster volume set k8s-volume performance.write-behind-window-size 1024MB

Basic test

# curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/volumes/glusterfs/glusterfs-pod.json

Modify the corresponding entry in the JSON so that it points at the volume created above: "path": "k8s-volume".
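The volumes section of the pod definition then looks roughly like this (a sketch; the endpoints name must match the one created earlier):

"volumes": [
  {
    "name": "glusterfsvol",
    "glusterfs": {
      "endpoints": "glusterfs-cluster",
      "path": "k8s-volume",
      "readOnly": true
    }
  }
]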

# kubectl apply -f glusterfs-pod.json

Log in to the pod and check with df -h that the volume has been allocated and mounted.

The Heketi service

Put simply, Heketi provides a RESTful interface on top of GlusterFS, plus a simple command line, for more flexible management of distributed storage.

# wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz

# tar -xvf heketi-client-v7.0.0.linux.amd64.tar.gz

# cp heketi-client/bin/heketi-cli /bin/

# git clone https://github.com/gluster/gluster-kubernetes && cd ./gluster-kubernetes/deploy

/* Create a separate namespace */

# kubectl create namespace gluster

A few prerequisites must be met before running the install script, such as loading the required kernel modules (see the script itself for the full list: https://github.com/gluster/gluster-kubernetes/blob/master/deploy/gk-deploy).

# modprobe dm_snapshot dm_mirror dm_thin_pool

Modify the corresponding DaemonSet template so that the kernel modules directory is mapped into the pod:

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# vim kube-templates/glusterfs-daemonset.yaml

        - name: kernel-modules
          hostPath:
            path: "/lib/modules"    # changed from /var/lib/modules

Every node also needs the mount.glusterfs command to be available. On some Red Hat systems this command comes with the glusterfs-fuse package; on Ubuntu install glusterfs-client:

# add-apt-repository ppa:gluster/glusterfs-3.12

# apt-get update

# apt-get install -y glusterfs-client
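gk-deploy reads a topology.json in the deploy directory that lists the storage nodes and their raw block devices. The file itself is not reproduced in the original; based on the heketi-cli topology output further below, it would look roughly like this (a sketch):

{
  "clusters": [
    {
      "nodes": [
        {
          "node": { "hostnames": { "manage": ["k-3"],   "storage": ["10.109.181.131"] }, "zone": 1 },
          "devices": ["/dev/vdb"]
        },
        {
          "node": { "hostnames": { "manage": ["k-pv1"], "storage": ["10.109.181.152"] }, "zone": 1 },
          "devices": ["/dev/vdb"]
        },
        {
          "node": { "hostnames": { "manage": ["k-pv2"], "storage": ["10.109.181.134"] }, "zone": 1 },
          "devices": ["/dev/vdb"]
        }
      ]
    }
  ]
}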

Run the install script:

# ./gk-deploy -g -n gluster   /* With the -g option it will deploy a GlusterFS DaemonSet onto your Kubernetes cluster by treating the nodes listed in the topology file as hyper-converged nodes with both Kubernetes and storage devices on them. */

To remove volume groups created by a previous run (e.g. before redeploying):

# vgremove -ff $(sudo vgdisplay | grep -i "VG Name" | awk '{print $3}')

Master node and storage nodes

Here we have three storage nodes: k-3, k-pv1 and k-pv2. Install the GlusterFS client on them as well:

# add-apt-repository ppa:gluster/glusterfs-3.12 && apt-get update && apt-get install -y glusterfs-client

For details, see https://www.jianshu.com/p/2c6a0eacfe4a. Then verify that Heketi is reachable:

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# export HEKETI_CLI_SERVER=$(kubectl get svc/deploy-heketi -n gluster --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# echo $HEKETI_CLI_SERVER

http://x.x.x.x:8080

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# curl $HEKETI_CLI_SERVER/hello

Hello from Heketi

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# heketi-cli -s $HEKETI_CLI_SERVER cluster list

Clusters:
Id:035b137fbe2c02021cc7c381710ed0c4 [block]

root@k-1:/home/ubuntu/gluster-kubernetes/deploy# heketi-cli -s $HEKETI_CLI_SERVER topology info

Cluster Id: a17b06b860a5c731725ae435d03ed750

    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: 13206c89322302eee45a7d3d5a0b2175
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-3
        Storage Hostnames: 10.109.181.131
        Devices:
                Id:a5987c9a076eac86378825a552ce8b16   Name:/dev/vdb   State:online   Size (GiB):49   Used (GiB):0   Free (GiB):49
                        Bricks:

        Node Id: 952e7876c36b3177a6f30b91f328f752
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-pv2
        Storage Hostnames: 10.109.181.134
        Devices:
                Id:56bc8b325b258cade583905f2d6cba0e   Name:/dev/vdb   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Bricks:

        Node Id: a28dbd80cd95122a4cd834146b7939ce
        State: online
        Cluster Id: a17b06b860a5c731725ae435d03ed750
        Zone: 1
        Management Hostnames: k-pv1
        Storage Hostnames: 10.109.181.152
        Devices:
                Id:58a6e5a003c6aa1d2ccc4acec67cbd5c   Name:/dev/vdb   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Bricks:

Create the corresponding PV and PVC, plus a test pod

For the concrete files, see https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md
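A minimal sketch of the StorageClass/PVC pair from that example, assuming the Heketi address obtained above (names and size are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://<heketi service cluster IP>:8080"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi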


HELM

Helm is a package manager for Kubernetes: users describe applications without having to worry about the underlying relationships between pods, services, endpoints and so on. It is a powerful, application-focused tool.

Official description: Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

The installation steps are as follows:

# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh

# chmod 700 get_helm.sh

# ./get_helm.sh

# helm version

Client:&version.Version{SemVer:"v2.9.1",GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710",GitTreeState:"clean"}

Install Tiller (the first form uses Aliyun mirrors for the Tiller image and chart repository, useful when gcr.io is unreachable):

# helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.9.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

# helm init --upgrade

$HELM_HOME has been configured at /Users/test/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.

Happy Helming!

# helm version

Client:&version.Version{SemVer:"v2.9.1",GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710",GitTreeState:"clean"}

Server:&version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710",GitTreeState:"clean"}

# kubectl create serviceaccount --namespace kube-system tiller

# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Install the WordPress application as a test:

# helm install --name wordpress-helm --set "persistence.enabled=false,mariadb.persistence.enabled=false" stable/wordpress

NAME:   wordpress-helm

LAST DEPLOYED: Thu Jun 28 09:03:36 2018

NAMESPACE: default

STATUS: DEPLOYED

RESOURCES:

==> v1/Service

NAME                      TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
wordpress-helm-mariadb    ClusterIP     10.103.74.128  <none>       3306/TCP                    1s
wordpress-helm-wordpress  LoadBalancer  10.108.70.1    <pending>    80:32211/TCP,443:32191/TCP  1s

==> v1beta1/Deployment

NAME                      DESIRED  CURRENT UP-TO-DATE  AVAILABLE  AGE

wordpress-helm-wordpress  1       1        1           0          1s

==> v1beta1/StatefulSet

NAME                    DESIRED  CURRENT AGE

wordpress-helm-mariadb  1       1        1s

==> v1/Pod(related)

NAME                                      READY  STATUS             RESTARTS  AGE

wordpress-helm-wordpress-8f698f574-xbbhj  0/1   ContainerCreating  0         0s

wordpress-helm-mariadb-0                  0/1    Pending            0         0s

==> v1/Secret

NAME                      TYPE    DATA AGE

wordpress-helm-mariadb    Opaque 2     1s

wordpress-helm-wordpress  Opaque 2     1s

==> v1/ConfigMap

NAME                          DATA  AGE

wordpress-helm-mariadb        1    1s

wordpress-helm-mariadb-tests  1    1s

NOTES:

1. Get the URL:

 NOTE: It may take a few minutes for the LoadBalancer IP to be available. Watch the status with: 'kubectl get svc --namespace default -w wordpress-helm-wordpress'

 export SERVICE_IP=$(kubectl get svc --namespace default wordpress-helm-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

 echo http://$SERVICE_IP/admin

2. Get the credentials to log in to the blog:

 echo Username: user

 echo Password: $(kubectl get secret --namespace default wordpress-helm-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

# helm ls

NAME            REVISION        UPDATED                         STATUS          CHART           NAMESPACE

wordpress-helm  1               Thu Jun 28 09:03:36 2018        DEPLOYED        wordpress-2.0.0 default


Appendix

K8S API

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#podsecuritypolicy-v1beta1-extensions

Execute a command after a Pod is instantiated

https://kubernetes.io/cn/docs/tasks/inject-data-application/define-command-argument-container/

Capability

https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h

AppArmor

https://kubernetes.io/docs/tutorials/clusters/apparmor/

Networking

https://kubernetes.io/docs/concepts/cluster-administration/networking/

Kompose

https://k8smeetup.github.io/docs/tools/kompose/user-guide/

Cheat sheet

https://kubernetes.io/docs/reference/kubectl/cheatsheet
