OpenShift Installation

Overview

OpenShift is an open-source container cloud platform built on the mainstream container technologies Docker and Kubernetes. On top of Docker and the Kubernetes framework, it adds software-defined networking, software-defined storage, access control, an enterprise-grade image registry, unified ingress routing, a continuous-integration pipeline (S2I/Jenkins), a unified management console, and monitoring/logging, forming a solution that covers the entire software lifecycle.

Environment Preparation

  • Operating system:
CentOS 7.6
  • Node plan:
master (192.168.1.144):
4 CPU / 8 GB RAM, disks 60 GB / 30 GB

node1 (192.168.1.198):
4 CPU / 3 GB RAM, disks 60 GB / 30 GB

node2 (192.168.1.204):
4 CPU / 3 GB RAM, disks 60 GB / 30 GB

Part 1: Initial Setup

1. Configure hostnames

On 192.168.1.144:
hostnamectl set-hostname master

On 192.168.1.198:
hostnamectl set-hostname node1

On 192.168.1.204:
hostnamectl set-hostname node2

2. Configure the hosts file (on every node):

cat /etc/hosts
192.168.1.144 master
192.168.1.198 node1
192.168.1.204 node2
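Since the same three entries go onto every node, this step can be scripted idempotently. A minimal sketch (add_cluster_hosts is a hypothetical helper name; the file is a parameter so the snippet can be tried against a scratch file):

```shell
# add_cluster_hosts FILE: append each cluster entry to FILE only if it is
# not already present, so running it twice adds nothing new.
add_cluster_hosts() {
    for entry in "192.168.1.144 master" "192.168.1.198 node1" "192.168.1.204 node2"; do
        grep -qxF "$entry" "$1" || echo "$entry" >> "$1"
    done
}
# on each node:
# add_cluster_hosts /etc/hosts
```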

3. Enable SELinux (on every node)

cat /etc/sysconfig/selinux
SELINUX=enforcing
SELINUXTYPE=targeted

Reboot the host for the change to take effect.
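If SELINUX is currently set to something else (for example disabled), it can be switched in place instead of edited by hand. A sketch (set_selinux_enforcing is a hypothetical helper; the path is a parameter so it can be tested on a copy of the file):

```shell
# set_selinux_enforcing FILE: force SELINUX=enforcing in a selinux config file,
# leaving all other settings (e.g. SELINUXTYPE) untouched.
set_selinux_enforcing() {
    sed -i -e 's/^SELINUX=.*/SELINUX=enforcing/' "$1"
}
# on each node, then reboot:
# set_selinux_enforcing /etc/sysconfig/selinux
```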

4. Edit the ifcfg-enp0s3 network config file (on every node)

vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
# add the following parameter
NM_CONTROLLED=yes

After editing, restart the network:

service network restart

5. Stop NetworkManager, firewalld, and iptables (on every node)

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl stop firewalld
systemctl stop iptables
systemctl disable firewalld
systemctl disable iptables

6. Generate an SSH key on the master and distribute it to all nodes (on the master)

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub master
ssh-copy-id -i .ssh/id_rsa.pub node1
ssh-copy-id -i .ssh/id_rsa.pub node2

7. Sync time with ntpdate (on every node)

ntpdate time2.aliyun.com
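A one-shot ntpdate drifts again over time; if you want to keep the clocks aligned without setting up ntpd, a cron entry is one option (a sketch; the file name /etc/cron.d/ntpdate is an assumption):

```
# /etc/cron.d/ntpdate -- resync every hour against the same server
0 * * * * root /usr/sbin/ntpdate time2.aliyun.com >/dev/null 2>&1
```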

8. Install base packages (on every node)

yum update -y

yum install -y wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct java-1.8.0-openjdk-headless python-passlib

yum -y install nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release lrzsz openssh-server socat ipvsadm conntrack

yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

9. Install Ansible 2.6.5 (on the master; OpenShift 3.10 requires Ansible 2.6.5, otherwise the install will fail)

# if you downloaded the RPM directly: yum install ansible-2.6.5-1.el7.ans.noarch.rpm

sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
yum -y --enablerepo=epel install ansible pyOpenSSL
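Since a wrong Ansible version is a common cause of playbook failures on 3.10, it is worth checking before going further. A sketch (check_ansible_version is a hypothetical helper; the parsing assumes `ansible --version` prints "ansible 2.6.5" on its first line, which it does for this release series):

```shell
# check_ansible_version: warn unless the installed ansible is a 2.6.x release.
check_ansible_version() {
    ver=$(ansible --version 2>/dev/null | head -n1 | awk '{print $2}')
    case "$ver" in
        2.6.*) echo "ansible $ver OK" ;;
        *)     echo "WARNING: ansible version '$ver' may break openshift-ansible 3.10" >&2 ;;
    esac
}
# check_ansible_version
```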

10. Upload the openshift-ansible 3.10 package to the master

openshift-ansible-release-3.10.zip

Extract it:

unzip openshift-ansible-release-3.10.zip

11. Install Docker (on every node)

yum install -y docker-1.13.1
(1) Edit the Docker config file
vi /etc/sysconfig/docker
# comment out the existing OPTIONS line and replace it with the line below
OPTIONS='--selinux-enabled=false --signature-verification=False'
(2) Configure Docker registry mirrors
vi /etc/docker/daemon.json
{"registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com"]}

Restart Docker:

systemctl daemon-reload
systemctl restart docker.service
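A stray quote in daemon.json (easy to introduce when pasting mirror lists) prevents dockerd from starting at all, so it is worth validating the file before the restart. A sketch using Python's stdlib JSON parser (validate_daemon_json is a hypothetical helper; it assumes python3 or python is installed, which holds on CentOS 7):

```shell
# validate_daemon_json FILE: succeed only if FILE parses as JSON.
validate_daemon_json() {
    python3 -m json.tool "$1" >/dev/null 2>&1 || python -m json.tool "$1" >/dev/null 2>&1
}
# validate_daemon_json /etc/docker/daemon.json && systemctl restart docker
```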
(3) Set up a private registry (on the master)
docker pull registry:2.5
  • Generate a username/password for access (here the username is lucky, password lucky)
yum install httpd -y
service httpd start
chkconfig httpd on
mkdir -p /opt/registry-var/auth/
docker run --entrypoint htpasswd registry:2.5 -Bbn lucky lucky >> /opt/registry-var/auth/htpasswd 
  • Create the registry config file
mkdir -p /opt/registry-var/config
vim /opt/registry-var/config/config.yml

version: "0.1"
log:
  fields:
    service: registry
storage:
  delete:
    enabled: true
  cache:
    blobdescriptor:  inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
Start the registry:
docker run -d -p 5000:5000 --restart=always  --name=registry -v /opt/registry-var/config/:/etc/docker/registry/ -v /opt/registry-var/auth/:/auth/ -e "REGISTRY_AUTH=htpasswd"  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -v /opt/registry-var/:/var/lib/registry/ registry:2.5
Since the registry has no TLS certificate, whitelist it as an insecure registry (edit on master, node1, and node2)
vim /etc/docker/daemon.json

{"registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com"],
"insecure-registries":["192.168.1.144:5000"]
}
Restart Docker:
systemctl daemon-reload

systemctl restart docker.service
Test logging in to the private registry (on every node):
docker login 192.168.1.144:5000

If, after entering the username and password, the output looks like the following, the login succeeded. Any other machine that wants to push or pull images against this private registry must also run docker login 192.168.1.144:5000 and log in successfully first.


image.jpeg
(4) Enable Docker at boot
systemctl enable docker
systemctl is-active docker
(5) On every node, change /etc/sysconfig/docker-storage-setup as follows:
DEVS=/dev/sdb
VG=docker-vg
(6) On every node, run docker-storage-setup
docker-storage-setup

12. Pull the required images

On the master:

docker pull quay.io/coreos/etcd:v3.2.22

docker pull openshift/origin-control-plane:v3.10

docker pull docker.io/openshift/origin-service-catalog:v3.10

docker pull openshift/origin-node:v3.10

docker pull openshift/origin-deployer:v3.10

docker pull openshift/origin-deployer:v3.10.0

docker pull openshift/origin-template-service-broker:v3.10

docker pull openshift/origin-pod:v3.10

docker pull openshift/origin-pod:v3.10.0

docker pull openshift/origin-web-console:v3.10

docker pull openshift/origin-docker-registry:v3.10

docker pull openshift/origin-haproxy-router:v3.10

docker pull cockpit/kubernetes:latest

docker pull docker.io/cockpit/kubernetes:latest

docker pull docker.io/openshift/origin-control-plane:v3.10

docker pull docker.io/openshift/origin-deployer:v3.10

docker pull docker.io/openshift/origin-docker-registry:v3.10

docker pull docker.io/openshift/origin-haproxy-router:v3.10

docker pull docker.io/openshift/origin-pod:v3.10

On node1 and node2:

docker pull quay.io/coreos/etcd:v3.2.22

docker pull openshift/origin-control-plane:v3.10

docker pull openshift/origin-node:v3.10

docker pull docker.io/openshift/origin-node:v3.10

docker pull openshift/origin-haproxy-router:v3.10

docker pull openshift/origin-deployer:v3.10

docker pull openshift/origin-pod:v3.10

docker pull ansibleplaybookbundle/origin-ansible-service-broker:v3.10

docker pull openshift/origin-docker-registry:v3.10

docker pull cockpit/kubernetes:latest

docker pull openshift/origin-haproxy-router:v3.10

docker pull docker.io/cockpit/kubernetes:latest

docker pull docker.io/openshift/origin-control-plane:v3.10

docker pull docker.io/openshift/origin-deployer:v3.10

docker pull docker.io/openshift/origin-docker-registry:v3.10

docker pull docker.io/openshift/origin-haproxy-router:v3.10

docker pull docker.io/openshift/origin-pod:v3.10
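The pull lists above are long enough to script. A sketch that loops over a list of images (pull_images is a hypothetical helper; the image list in the usage comment is illustrative, not the full list, and passing `echo` as the command gives a dry run):

```shell
# pull_images CMD IMG...: run "CMD IMG" for each image; CMD is normally "docker pull".
pull_images() {
    cmd=$1; shift
    for img in "$@"; do
        $cmd "$img"
    done
}
# on each node, for example:
# pull_images "docker pull" \
#     quay.io/coreos/etcd:v3.2.22 \
#     openshift/origin-node:v3.10 \
#     openshift/origin-pod:v3.10 \
#     openshift/origin-deployer:v3.10 \
#     openshift/origin-haproxy-router:v3.10
```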

13. Configure the Ansible inventory file

vi /etc/ansible/hosts
[OSEv3:children]
masters
nodes
etcd
[OSEv3:vars]
openshift_deployment_type=origin
ansible_ssh_user=root
ansible_become=yes
openshift_repos_enable_testing=true
openshift_enable_service_catalog=false
template_service_broker_install=false
debug_level=4
openshift_clock_enabled=true
openshift_version=3.10.0
openshift_image_tag=v3.10
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
openshift_master_identity_providers=[{'name': 'htpasswd_auth','login': 'true', 'challenge': 'true','kind': 'HTPasswdPasswordIdentityProvider'}]
[masters]
master
[nodes]
master openshift_node_group_name='node-config-master'
node1 openshift_node_group_name='node-config-compute'
node2 openshift_node_group_name='node-config-compute'
[etcd]
master

14. Install the cluster

(1) Run the pre-install prerequisite check
ansible-playbook -i /etc/ansible/hosts openshift-ansible-release-3.10/playbooks/prerequisites.yml

If the check output looks like the following, there are no errors and the installation can begin:


image.jpeg
(2) Run the installation
ansible-playbook -i /etc/ansible/hosts openshift-ansible-release-3.10/playbooks/deploy_cluster.yml

Host DNS problems can make the deploy step fail to reach the external network. While deploy_cluster.yml runs, verify on every node that ping www.baidu.com works; if it does not, use the workaround below.

Temporary workaround: edit /etc/resolv.conf

When the deploy output shows retries, run the following on the master and node machines so the external network becomes reachable again:

echo nameserver 8.8.8.8 >>/etc/resolv.conf

The nodes also need to be labeled

When the play reaches TASK [openshift_manage_node : Set node schedulability], run the following:

oc label node node1 node-role.kubernetes.io/infra=true

oc label node node2 node-role.kubernetes.io/infra=true

echo nameserver 8.8.8.8 >>/etc/resolv.conf

Output like the following means the installation succeeded:

image.jpeg

15. Create an administrator account

Create the first user and password (the -c flag creates the htpasswd file):

htpasswd -cb /etc/origin/master/htpasswd admin admin

Add another user and password:

htpasswd -b /etc/origin/master/htpasswd dev dev

Log in as the cluster administrator:

oc login -u system:admin

Grant the admin user the cluster-admin role:

oc adm policy add-cluster-role-to-user cluster-admin admin

16. Log in to the web console from a browser

https://192.168.1.144:8443

image.jpeg

Username: admin

Password: admin

Part 2: Creating a project from the web console

Log in to the web console: https://192.168.1.144:8443

1. Create the first project, ParksMap, an application that displays a map of major parks around the world

(1) Click Create Project on the right

Name: myproject

After filling this in, click Create.

(2) Deploy an image

Select the myproject project; the following view appears

image.jpeg

Click Deploy Image; the following view appears

image.jpeg

Select Create an image pull secret; the following view appears

image.png

Secret Name:parksmap

Image Registry Server Address:openshiftroadshow/parksmap-katacoda:1.0.0

Username:admin

Password:admin

Email:sknfie@163.com

Click Create.

Go back to the Deploy Image view, select Image Name below, enter openshiftroadshow/parksmap-katacoda:1.0.0, and click search; the following appears

image.jpeg
image.jpeg

After adjusting the settings above, click Deploy.

On the master, label the nodes as follows, otherwise scheduling will fail

oc label nodes node1 node-role.kubernetes.io/compute=true

oc label nodes node2 node-role.kubernetes.io/compute=true

oc label nodes node1 node-role.kubernetes.io/infra=true

oc label nodes node2 node-role.kubernetes.io/infra=true

Once the deployment is created, it can be verified on the master

oc get pods -n myproject shows the following

image.jpeg
(3) Create a route for access from outside the cluster

Select Routes under Applications; the following view appears

image.png

Click Create Route; the following view appears

image.png

Name:parksmap-katacoda

Then click Create; the following view appears

image.jpeg

Even with the route in place, the application is still unreachable from outside the cluster; the service's ClusterIP type also needs to be changed, as follows

Applications → Services shows the following

image.jpeg

Click parksmap-katacoda; the following appears

image.jpeg

Select Edit YAML under Actions on the right and modify the content as follows

image.jpeg
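The change made in the Edit YAML screen can be sketched as the following excerpt of the service spec (the port numbers are assumptions consistent with the URL http://192.168.1.144:30080 used below, not taken from the screenshot):

```yaml
# excerpt of the parksmap-katacoda Service after editing
spec:
  type: NodePort          # was ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080     # the port used in the browser URL
```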

After editing, click Save

Check on the master node

oc get service -n myproject shows the following

image.jpeg

Open http://192.168.1.144:30080/index.html in a browser

The following page appears

image.jpeg
(4) Deploy the ParksMap backend, which serves data on major world parks through a REST API

In the myproject project, select Add to Project

Select Browse Catalog → Python; the following view appears

image.jpeg

Click Next; the following view appears

image.jpeg

Version:3.5

Application Name:nationalparks-katacoda

Git Repository:https://github.com/openshift-roadshow/nationalparks-katacoda

Note: the Python 3.5 S2I image:

https://github.com/sclorg/s2i-python-container/blob/master/3.5/README.md
