Install k8s with kubeadm

Install and initialize the kubeadm tool

In this article, we'll install a k8s cluster with the kubeadm tool. First, let's prepare a virtual machine in Oracle VirtualBox, with the resources below:
CPU: 2 cores, RAM: 4 GB, disk: 20 GB
Why we use kubeadm:

  1. It is the simplest way to install a k8s cluster.
  2. It makes it easier to automate setup and to test our applications.

There are some requirements if we follow the guide:

  1. One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
  2. 2 GiB or more of RAM per machine--any less leaves little room for your apps.
  3. At least 2 CPUs on the machine that you use as a control-plane node.
  4. Full network connectivity among all machines in the cluster.
  5. Unique hostname, MAC address, and product_uuid for every node.
  6. Certain ports are open on your machines (a quick check with netcat is shown after this list).
  7. Swap disabled. You MUST disable swap in order for the kubelet to work properly.
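
For requirement 6, a quick way to check whether a port is reachable is netcat; for example, on the control-plane node (port 6443 will only answer once the API server is running):

nc 127.0.0.1 6443 -v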

After installing the VM, let's check these requirements:

  1. We use Ubuntu 20.04 as the OS, so it's compatible.
  2. We give the first VM 4 GiB of RAM and a 20 GB disk, so this is satisfied.
  3. But the CPU count is 1 by default; change it to 2 by opening the VM's settings dialog:


    [image: VirtualBox settings dialog with the processor count set to 2]
  4. All the VMs will be in the same virtual network.
  5. After restarting the VM, log in and check the hostname, MAC address, and product_uuid:
    hostname: charleslin1; it was set when installing the VM.
    MAC: get it with the command "ip link".
    product_uuid: check it with the command "sudo cat /sys/class/dmi/id/product_uuid".
  6. Let iptables see bridged traffic.
    We need the br_netfilter module loaded; first, use the lsmod command to check for it:
    lsmod | grep br_netfilter
    br_netfilter is the module for bridged firewalling; to load it explicitly, call sudo modprobe br_netfilter.
    As a requirement for your Linux node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
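
After reloading, you can confirm the settings took effect; both of these should print 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables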

Install container runtime
k8s uses the CRI to interface with the container runtime, so we need to install one; here we use Docker as the runtime.
Refer to the Docker installation guide for Ubuntu, follow the steps, and install it; a minimal sketch is shown below.
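
For reference, a minimal install sketch using Ubuntu's own package (the official guide documents the fuller, repository-based route; this assumes the docker.io package is acceptable):

sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker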
After installation, we may check the installed version with docker version:

$ docker version
[image: docker version output]

After installing Docker, we'll install these packages on the VM:

  • kubeadm: the command to bootstrap the cluster
  • kubelet: the component that runs on all of our machines
  • kubectl: the command line tool to communicate with our cluster

Follow the steps: first update the apt package index, then install some utilities. Don't worry if some of these tools were installed before; apt-get will skip them.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Second, we'll download the repository's public signing key; it will be used for verification when downloading the utilities mentioned before.
Because connections to Google's server fail, we need to download https://download.docker.com/linux/ubuntu/gpg manually (open a proxy that can reach foreign websites), open the link in Chrome, and drag the file into our VM.
Now, move it to the destination folder:

sudo cp apt-key.gpg /usr/share/keyrings/kubernetes-archive-keyring.gpg

Then we need to add a mirror and create the sources list file:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Then we can install all the tools normally:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
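
A quick sanity check that the held packages are in place:

kubeadm version
kubectl version --client
kubelet --version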

When we have finished the tasks above, we need to configure the cgroup driver for Docker and k8s. The Container runtimes page explains that the systemd driver is recommended for kubeadm-based setups instead of the cgroupfs driver, because kubeadm manages the kubelet as a systemd service.
a. Check that cgroup v2 is available on the server (Ubuntu 20.04 with kernel 5.4 supports it):

grep cgroup2 /proc/filesystems
[image: grep output showing cgroup2 listed in /proc/filesystems]

The output shows that cgroup2 is available on the system.
b. Check Docker's cgroup configuration with docker info:


[image: docker info output showing Cgroup Driver: cgroupfs]

Obviously, cgroupfs is being used; for k8s, we need to change the driver to systemd with the command below:

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Then restart Docker:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

Check docker info again:


[image: docker info output showing Cgroup Driver: systemd]
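
To see just the relevant lines, a grep works too:

docker info 2>/dev/null | grep -i cgroup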

Before initializing kubeadm, we need to stop swap:

sudo swapoff -a
sudo sed -i 's/.*swap.*/#&/' /etc/fstab
cat /etc/fstab

In the printed fstab contents, confirm the swap line is commented out:


[image: /etc/fstab with the swap line commented out]

Then reboot the server:

sudo reboot
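
After the reboot, you can confirm swap stays off; the Swap line should read 0B:

free -h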

Sometimes we also need to configure the cgroup driver for the kubelet, but since v1.22, if the user doesn't set the cgroupDriver field under KubeletConfiguration, kubeadm defaults it to systemd; that means we need to do nothing for the kubelet. Now start the kubeadm initialization:

sudo kubeadm init

But an error occurs when pulling images; use the command below to check which images we need:

$ kubeadm config images list
[image: kubeadm config images list output]

Then pull those images from a mirror site.

sudo docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
sudo docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
sudo docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
sudo docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
sudo docker pull registry.aliyuncs.com/google_containers/pause:3.6
sudo docker pull registry.aliyuncs.com/google_containers/etcd:3.5.1-0
sudo docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6

Notes: because of network issues, please use the domestic mirror site "registry.aliyuncs.com/google_containers" as the prefix.
After all the downloads finish, we use docker tag to rename the images:

sudo docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0  k8s.gcr.io/kube-apiserver:v1.23.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0  k8s.gcr.io/kube-controller-manager:v1.23.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0  k8s.gcr.io/kube-scheduler:v1.23.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0  k8s.gcr.io/kube-proxy:v1.23.0
sudo docker tag registry.aliyuncs.com/google_containers/pause:3.6  k8s.gcr.io/pause:3.6
sudo docker tag registry.aliyuncs.com/google_containers/etcd:3.5.1-0  k8s.gcr.io/etcd:3.5.1-0
sudo docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.6  k8s.gcr.io/coredns/coredns:v1.8.6
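
Since the list is long, a small shell loop can do the pulling and retagging in one pass (a sketch equivalent to the commands above; coredns is handled separately because of the extra path segment in its target name):

MIRROR=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.23.0 kube-controller-manager:v1.23.0 \
    kube-scheduler:v1.23.0 kube-proxy:v1.23.0 pause:3.6 etcd:3.5.1-0; do
  sudo docker pull $MIRROR/$img
  sudo docker tag $MIRROR/$img k8s.gcr.io/$img
done
sudo docker pull $MIRROR/coredns:v1.8.6
sudo docker tag $MIRROR/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6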

Last, initialize kubeadm again:

sudo kubeadm init
[image: sudo kubeadm init completing successfully]

Now we have finished all the kubeadm initialization tasks; we are ready to set up the k8s cluster.

Create a cluster with kubeadm

After the kubeadm initialization, following the hints printed in the console, we'll run the commands below:


[image: kubeadm init console hints]
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
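
If you are running as root, the init output also offers a shortcut instead of copying the file:

export KUBECONFIG=/etc/kubernetes/admin.conf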

This configuration includes the certificates and location info for connecting to the k8s server. After that we can try a kubectl command:

kubectl get node
[image: kubectl get node output]

Now we have kubectl working, but the cluster has no network yet. Use the command below to check the node info:

kubectl describe node k8snode1-virtualbox
[image: kubectl describe node output]

From the information above, the network for the node isn't ready; we need to install a network plugin.

  1. Configure NetworkManager for Calico.
    Create the following configuration file at /etc/NetworkManager/conf.d/calico.conf to prevent NetworkManager from interfering with the interfaces:
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:wireguard.cali
  2. Stop the firewall with the command below:
sudo systemctl stop firewalld
  3. Download the Calico networking manifest:
curl https://docs.projectcalico.org/manifests/calico.yaml -O
  4. Apply calico.yaml with the command below:
kubectl apply -f calico.yaml
[image: kubectl apply -f calico.yaml output]

After step 4, we can check the pods in the kube-system namespace: the calico-kube-controllers pod has been downloaded and is running. And when we check the node status again, it is Ready now (the exact commands are sketched after the screenshot).


[image: kube-system pods running and node in Ready state]
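
The checks themselves are the usual kubectl queries:

kubectl get pods -n kube-system
kubectl get node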

  5. (optional) We may install the calicoctl command line tool to manage Calico resources and perform administrative functions.
    Use the following command to download the calicoctl binary:

curl -o calicoctl -O -L  "https://github.com/projectcalico/calicoctl/releases/download/v3.21.2/calicoctl" 
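
To run it from anywhere, the usual final steps are to make the binary executable and move it onto the PATH:

chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/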

With that, the network plugin installation is finished.

Schedule pods on the control-plane node

By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, for example for a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-
[image: taint removal output]

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, meaning that the scheduler will then be able to schedule Pods everywhere.

After this section, a standalone k8s installation is finished. We may try a redis pod: after applying the configuration, we can see the pod on the server (a minimal manifest is sketched below):


[image: kubectl get pod showing the redis pod]
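
A minimal sketch of such a pod, applied straight from a heredoc (the pod name and image tag here are illustrative, not from the original setup):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:6.2
    ports:
    - containerPort: 6379
EOF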

Add a second node

Create a second VM. Please check the network to make sure the VMs can connect to each other: any node should be able to reach the other nodes without restriction. Here we configure the VMs with the NAT network service in Oracle VirtualBox:


[image: VirtualBox NAT network configuration]

After creating it, we install Docker as the runtime on the second server and configure the cgroup driver as described in the prior section; follow the earlier steps up to, but not including, the kubeadm initialization.
An easy way to create the second VM is to clone it from a snapshot of VM1 and then change the hostname:

vi /etc/hostname          #change the server name
vi /etc/hosts             #change the server name in the ip mapping

We also need to disable swap on the second node by modifying /etc/fstab as above:

sudo sed -i 's/.*swap.*/#&/' /etc/fstab

And one more task: start the kubelet:

sudo systemctl enable kubelet.service
sudo systemctl start kubelet.service
systemctl status kubelet.service

After that, we log in to the first VM (the one we installed the control plane on) and create a join token for the second node:

kubeadm token create
[image: kubeadm token create output]

Record the token; then we need to generate the ca-cert-hash, still on the first VM:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
[image: CA cert hash output]
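
As a shortcut, kubeadm can also print the complete join command, token and hash included, which avoids computing the hash by hand:

kubeadm token create --print-join-command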

After obtaining the token and the CA cert hash, we log in to the second VM and join it to the cluster:

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

Here <control-plane-host> is the IP of the first VM and the port is 6443, so we get a command like the ones below:

kubeadm join --token iln84z.h2v1tjez4nvc20p5 10.0.2.15:6443 --discovery-token-ca-cert-hash sha256:2617192f5a967c78b72d657f05b70f01a99d30c2dac8d32465bdb8ba4ea605cc

kubeadm join 10.0.2.9:6443 --token ybqdpp.vh0sntajqehf5rw9 \
    --discovery-token-ca-cert-hash sha256:efd185e1fc2ee987d97fd34c76b289ed8b0f06e8d4e71ca0156c5410d0ae5e1c

Execute it in the command line of the second VM:

[image: kubeadm join succeeding on the second VM]

Congratulations! You've succeeded if you see the same messages in the console.
Let's check the nodes and pods on the control-plane node. Enter kubectl get node, and you'll see two nodes listed; the worker node shows a <none> role.
[image: kubectl get node listing both nodes]

Enter kubectl get pods -A, and you'll see all the pods in the system namespace (we haven't deployed any custom pod in the default namespace yet): calico-* for the network, coredns-* for cluster DNS, and other important components on the admin node:
[image: kubectl get pods -A output]

In the same way, we can add more worker nodes to the cluster.
