3. Enterprise Keepalived High-Availability in Practice / HAProxy Basics

1. Keepalived VRRP Introduction

What is keepalived?

Keepalived is a service used in cluster management to keep a cluster highly available and to prevent single points of failure.

How keepalived works

Keepalived is built on the VRRP protocol. VRRP stands for Virtual Router Redundancy Protocol.

VRRP can be thought of as a protocol for making routers highly available: N routers providing the same function are grouped together, with one master and several backups. The master holds a VIP that serves external traffic (the other machines on the LAN use this VIP as their default route). The master sends multicast advertisements; when the backups stop receiving VRRP packets they assume the master is down, and a new master is elected from the backups according to VRRP priority. This keeps the routing function highly available.

Keepalived consists of three main modules: core, check, and vrrp. The core module is the heart of keepalived; it starts and maintains the main process and loads and parses the global configuration file. The check module performs health checks and supports the common check methods. The vrrp module implements the VRRP protocol.
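To see the VRRP mechanism in action, you can watch the master's advertisements on the wire. A quick observation sketch (the interface name ens32 is taken from the lab below; VRRP is IP protocol 112 sent to multicast group 224.0.0.18):
[root@proxy-master ~]# tcpdump -i ens32 -nn 'ip proto 112'
# expect one advertisement per second (advert_int 1) from the current master;
# if these packets stop arriving, the backups start the election described above.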
==============================================
Split brain:
Common trigger: firewall (iptables) rules blocking the VRRP multicast.
A keepalived BACKUP node switches to master once it stops receiving the MASTER's advertisements. If the communication path between the two nodes fails so that neither receives the other's multicast announcements while both are in fact still healthy, then both nodes become master and both force-bind the virtual IP, with unpredictable consequences. This is split brain.
Mitigations:
1. Add more detection paths, e.g. a redundant heartbeat link (health checks over two NICs), pinging the peer, and so on, to reduce the chance of split brain. (This treats the symptom, not the root cause; it only raises the odds of detecting the problem.)
2. Set up an arbitration mechanism. If neither side can be trusted, rely on a third party, e.g. a shared disk lock or pinging the gateway (see the sketch below). (Each option needs its own analysis.)
3. Fence the old master ("shoot it in the head") to stop it outright, then inspect the firewall rules and network connectivity between the machines.
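A minimal gateway-arbitration sketch for mitigation 2 (the gateway address is an assumption; run it periodically on each node, e.g. from cron or a vrrp_script):
#!/bin/bash
# if this node holds the VIP but cannot reach the LAN gateway, assume we are
# the isolated side of a split brain and release the VIP by stopping keepalived
VIP=192.168.94.100
GATEWAY=192.168.94.2   # assumption: replace with your real gateway
if ip addr show | grep -q " $VIP/"; then
    if ! ping -c 3 -W 1 "$GATEWAY" &>/dev/null; then
        systemctl stop keepalived
    fi
fi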

2. Nginx + Keepalived: Highly Available Layer-7 Load Balancing

Nginx implements load balancing through its upstream module.

Load-balancing algorithms supported by upstream

Host inventory:

Hostname         IP               OS          Role
proxy-master     192.168.94.132   CentOS 7.5  primary load balancer
proxy-slave      192.168.94.133   CentOS 7.5  backup load balancer
real-server1     192.168.94.134   CentOS 7.5  web1
real-server2     192.168.94.135   CentOS 7.5  web2
vip (for proxy)  192.168.94.100

Round robin (default): weights can be set with weight; the higher the weight, the more often a server is selected.
ip_hash: provides session persistence by always dispatching the same client IP to the same backend server, which solves session problems. (Historically weight was ignored with ip_hash; recent nginx versions honor it. A minimal ip_hash sketch follows this list.)
fair: schedules based on page size and load time; requires the third-party upstream_fair module.
url_hash: hashes the requested URL so each URL always goes to the same server; requires the third-party url_hash module.
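A minimal sketch of the sticky variant: dropping an ip_hash upstream into conf.d (the file and upstream names here are illustrative, not part of the lab config):
[root@proxy-master ~]# cat > /etc/nginx/conf.d/backend_sticky.conf <<'EOF'
# sticky variant of the backend pool: same client IP -> same real server
upstream backend_sticky {
    ip_hash;
    server 192.168.94.134:80 max_fails=3 fail_timeout=20s;
    server 192.168.94.135:80 max_fails=3 fail_timeout=20s;
}
EOF
[root@proxy-master ~]# nginx -t && nginx -s reload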

Install nginx on all machines; disable the firewall and SELinux first.
[root@proxy-master ~]# systemctl stop firewalld         # stop the firewall
[root@proxy-master ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux        # disable SELinux permanently (takes effect after reboot)
[root@proxy-master ~]# setenforce 0                # disable SELinux for the current session

Install nginx on all 4 machines:
[root@proxy-master ~]# cd /etc/yum.repos.d/
[root@proxy-master yum.repos.d]# vim nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
[root@proxy-master yum.repos.d]# yum install yum-utils -y
[root@proxy-master yum.repos.d]# yum install nginx -y
I. Implementation

Same-type services

Dispatch to different groups of backend servers
Dispatch by site partition

1. Pick two nginx servers to act as proxy servers.
2. Install keepalived on both proxies to provide high availability and generate the VIP.
3. Configure nginx load balancing.
# both proxies use exactly the same configuration
[root@proxy-master ~]# cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
[root@proxy-master ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    upstream backend {
    server 192.168.94.134:80 weight=1 max_fails=3 fail_timeout=20s;
    server 192.168.94.135:80 weight=1 max_fails=3 fail_timeout=20s;
    }
    server {
        listen       80;
        server_name  localhost;
        location / {
        proxy_pass http://backend;
        proxy_set_header Host $host:$proxy_port;
        proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
[root@proxy-master ~]# nginx
[root@proxy-master ~]# scp /etc/nginx/nginx.conf root@192.168.94.133:/etc/nginx/
[root@proxy-slave ~]# nginx
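Before adding keepalived it is worth confirming that each proxy balances on its own. A quick sanity check, run from any host on the segment once the real servers are serving their test pages (see the Test section below):
for ip in 192.168.94.132 192.168.94.133; do
    curl -s http://$ip; curl -s http://$ip   # should alternate real-server1 / real-server2
done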

Different-type services (not part of this lab)

Dispatch to different groups of backend servers
Dispatch by site partition
=================================================================================

Topology

                            [vip: 20.20.20.20]

                        [LB1 Nginx]     [LB2 Nginx]
                        192.168.1.2     192.168.1.3

        [index]     [milis]      [videos]      [images]       [news]
         1.11        1.21          1.31           1.41         1.51
         1.12        1.22          1.32           1.42         1.52
         1.13        1.23          1.33           1.43         1.53
         ...         ...            ...           ...           ...
         /web     /web/milis    /web/videos     /web/images   /web/news
      index.html  index.html     index.html      index.html   index.html

I. Implementation
Dispatch by site partition:
http {
    upstream index {
        server 192.168.1.11:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.12:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.13:80 weight=2 max_fails=2 fail_timeout=2;
       }
       
    upstream milis {
        server 192.168.1.21:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.22:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.23:80 weight=2 max_fails=2 fail_timeout=2;
       }
       
     upstream videos {
        server 192.168.1.31:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.32:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.33:80 weight=2 max_fails=2 fail_timeout=2;
       }
       
     upstream images {
        server 192.168.1.41:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.42:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.43:80 weight=2 max_fails=2 fail_timeout=2;
       }
       
      upstream news {
        server 192.168.1.51:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.52:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.53:80 weight=2 max_fails=2 fail_timeout=2;
       }
       
     server {
            location / {
            proxy_pass http://index;
            }
            
            location  /news {
            proxy_pass http://news;
            }
            
            location /milis {
            proxy_pass http://milis;
            }
            
            location ~* \.(wmv|mp4|rmvb)$ {
            proxy_pass http://videos;
            }
            
            location ~* \.(png|gif|jpg)$ {
            proxy_pass http://images;
            }
    }
}
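A few hedged spot checks for the partitioned layout above (paths and filenames are illustrative, matching the topology sketch):
curl http://192.168.1.2/            # falls through to the index group
curl http://192.168.1.2/news/       # prefix match -> news group
curl http://192.168.1.2/demo.mp4    # regex match -> videos group
curl http://192.168.1.2/logo.png    # regex match -> images group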
II. Keepalived for Scheduler HA
Note: both the master and backup schedulers must be able to dispatch correctly on their own.
1. Install the software on the master/backup schedulers
[root@proxy-master ~]# yum install -y keepalived
[root@proxy-slave ~]# yum install -y keepalived
[root@proxy-master ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@proxy-master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id directory1   #change to directory2 on the backup
}

vrrp_instance VI_1 {
    state MASTER        #master or backup role
    interface ens32     #interface the VIP binds to
    virtual_router_id 80  #must be identical across the cluster's schedulers
    priority 100         #priority; change to 50 on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.94.100/32   # vip
    }
}

[root@proxy-slave ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@proxy-slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id directory2
}

vrrp_instance VI_1 {
    state BACKUP    #set to BACKUP
    interface ens32
    nopreempt        #backup only: do not preempt the master when it returns
    virtual_router_id 80
    priority 50   #lower priority on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.94.100/32
    }
}

[root@proxy-master ~]# systemctl start keepalived
[root@proxy-slave ~]# systemctl start keepalived
[root@proxy-slave ~]# systemctl status keepalived
[root@proxy-master ~]# systemctl status keepalived
[root@proxy-master ~]# ip a
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:5e:04:f3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.94.132/24 brd 192.168.94.255 scope global dynamic ens32
    inet 192.168.94.100/32 scope global ens32

Test

[root@real-server1 ~]# cat /etc/nginx/nginx.conf
location / {
            root   /opt;
            index  index.html index.htm;
        }
[root@real-server1 ~]# echo real-server1 > /opt/index.html
[root@real-server1 ~]# nginx -s reload
[root@real-server2 ~]# echo real-server2 > /opt/index.html
[root@real-server2 ~]# nginx -s reload
Access from another machine acting as the client:
[root@real-server2 ~]# curl 192.168.94.100
real-server1
[root@real-server2 ~]# curl 192.168.94.100
real-server2

Stop keepalived on the master, watch the IP float over, and verify that access still works:
[root@proxy-master ~]# systemctl stop keepalived
[root@proxy-slave ~]# ip a
    inet 192.168.94.133/24 brd 192.168.94.255 scope global dynamic ens32
    inet 192.168.94.100/32 scope global ens32
[root@real-server2 ~]# curl 192.168.94.100
real-server1
[root@real-server2 ~]# curl 192.168.94.100
real-server2

Start keepalived on the master again and retest:
[root@proxy-master ~]# systemctl start keepalived
[root@proxy-master ~]# ip a
    inet 192.168.94.100/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::f041:f8cc:18a1:adad/64 scope link 
       valid_lft forever preferred_lft forever
[root@real-server2 ~]# curl 192.168.94.100
real-server1
[root@real-server2 ~]# curl 192.168.94.100
real-server2

At this point:
keepalived handles heartbeat (node) failures,
but an Nginx service failure alone will not trigger failover.

Adding a health check

4. Extending the scheduler with an Nginx health check (optional); configure on both nodes
Approach:
Have keepalived run an external script at a fixed interval; when Nginx fails, the script stops the local keepalived so the VIP moves to the peer.
(1) The script
[root@proxy-master ~]# vim /etc/keepalived/check_nginx_status.sh
#!/bin/bash
# if nginx on localhost stops answering, stop keepalived so the VIP fails over
/usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ];then
#   /etc/init.d/keepalived stop
    systemctl stop keepalived
fi
[root@proxy-master ~]# chmod a+x /etc/keepalived/check_nginx_status.sh

(2) Have keepalived use the script
! Configuration File for keepalived

global_defs {
   router_id director1
}
vrrp_script check_nginx {
   script "/etc/keepalived/check_nginx_status.sh"
   interval 5   # how often to run the script, in seconds
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.94.100/32
    }
    track_script {
        check_nginx
    }
}
Note: nginx must be started before keepalived.
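One way to make that ordering automatic at boot is a systemd drop-in (a sketch using standard systemd mechanics; the drop-in file name is arbitrary):
[root@proxy-master ~]# mkdir -p /etc/systemd/system/keepalived.service.d
[root@proxy-master ~]# cat > /etc/systemd/system/keepalived.service.d/order.conf <<'EOF'
[Unit]
# start keepalived only after nginx has been started
After=nginx.service
Wants=nginx.service
EOF
[root@proxy-master ~]# systemctl daemon-reload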

3. LVS Director + Keepalived

Hostname               IP               OS          Role
client                 192.168.94.99    CentOS 7.5  client
lvs-keepalived-master  192.168.94.132   CentOS 7.5  director
lvs-keepalived-slave   192.168.94.133   CentOS 7.5  backup director
real-server1           192.168.94.134   CentOS 7.5  web1
real-server2           192.168.94.135   CentOS 7.5  web2
vip                    192.168.94.100   (a public IP in production)
LVS Director + Keepalived

Keepalived's roles in this project:
1. Managing the IPVS routing table (including health checks of the RealServers)
2. Providing HA for the director
http://www.keepalived.org
http://www.keepalived.org

External scripts run by keepalived should use absolute paths for all commands.

Implementation steps:

1. Install the software on the master/backup directors
[root@lvs-keepalived-master ~]# yum -y install ipvsadm keepalived
[root@lvs-keepalived-slave ~]# yum -y install ipvsadm keepalived

[root@lvs-keepalived-master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lvs-keepalived-master    #change to lvs-keepalived-slave on the backup
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32                #interface the VIP binds to
    virtual_router_id 80         #VRID; must match within the cluster
    priority 100            #this node's priority; change to 50 on the backup
    advert_int 1            #advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.94.100/32
    }
}

virtual_server 192.168.94.100 80 {    #LVS configuration
    delay_loop 3
    lb_algo rr     #LVS scheduling algorithm
    lb_kind DR     #LVS forwarding mode (direct routing)
    nat_mask 255.255.255.255
    protocol TCP      #protocol used for health checks
    real_server 192.168.94.134 80 {
        weight 1
        inhibit_on_failure   #on failure set the weight to 0 instead of removing the server from IPVS
        TCP_CHECK {          #health check
            connect_port 80   #port to check
            connect_timeout 3  #connection timeout in seconds
            }
        }
    real_server 192.168.94.135 80 {
        weight 1
        inhibit_on_failure
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
            }
        }
}

[root@lvs-keepalived-slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lvs-keepalived-slave
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    nopreempt                    #do not preempt
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.94.100/32
    }
}
virtual_server 192.168.94.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.255
    protocol TCP
    real_server 192.168.94.134 80 {
        weight 1
        inhibit_on_failure
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            }
        }
    real_server 192.168.94.135 80 {
        weight 1
        inhibit_on_failure
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
            }
        }
}
3. Start keepalived (on both master and backup)
[root@lvs-keepalived-master ~]# systemctl start keepalived
[root@lvs-keepalived-master ~]# systemctl enable keepalived

ipvsadm needs no manual configuration; keepalived programs IPVS automatically at startup.
[root@lvs-keepalived-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.94.100:80 rr
  -> 192.168.94.134:80            Route   1      0          0         
  -> 192.168.94.135:80            Route   1      0          0

4. Configure all RS (nginx1, nginx2)
Set up the web servers and test every RS:
[root@real-server1 ~]# yum install -y nginx
[root@real-server2 ~]# yum install -y nginx
[root@real-server1 ~]# ip addr add dev lo 192.168.94.100/32   #bind the VIP on the lo interface
[root@real-server1 ~]# echo "ip addr add dev lo 192.168.94.100/32" >> /etc/rc.local
[root@real-server1 ~]# echo "net.ipv4.conf.all.arp_ignore = 1" >> /etc/sysctl.conf
[root@real-server1 ~]# sysctl -p
[root@real-server1 ~]# echo "real-server1" >> /usr/share/nginx/html/index.html
[root@real-server1 ~]# systemctl start nginx
[root@real-server1 ~]# chmod +x /etc/rc.local
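The steps above set only arp_ignore; most DR-mode guides also set arp_announce=2 so a real server never uses the VIP as an ARP source address. A fuller sketch of the usual sysctl set (an addition to the original steps, applied on every RS):
[root@real-server1 ~]# cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
EOF
[root@real-server1 ~]# sysctl -p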

Test

[root@real-server1 ~]# cat /etc/nginx/nginx.conf
location / {
            root   /opt;
            index  index.html index.htm;
        }
[root@real-server1 ~]# echo real-server1 > /opt/index.html
[root@real-server1 ~]# nginx -s reload
[root@real-server2 ~]# echo real-server2 > /opt/index.html
[root@real-server2 ~]# nginx -s reload
Access from the client:
[root@client ~]# curl 192.168.94.100
real-server1
[root@client ~]# curl 192.168.94.100
real-server2

Stop keepalived on the master, watch the IP float over, and verify that access still works:
[root@lvs-keepalived-master ~]# systemctl stop keepalived
[root@lvs-keepalived-slave ~]# ip a
    inet 192.168.94.133/24 brd 192.168.94.255 scope global dynamic ens32
    inet 192.168.94.100/32 scope global ens32
[root@client ~]# curl 192.168.94.100
real-server1
[root@client ~]# curl 192.168.94.100
real-server2

Start keepalived on the master again and retest:
[root@lvs-keepalived-master ~]# systemctl start keepalived
[root@lvs-keepalived-master ~]# ip a
    inet 192.168.94.100/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::f041:f8cc:18a1:adad/64 scope link 
       valid_lft forever preferred_lft forever
[root@client ~]# curl 192.168.94.100
real-server1
[root@client ~]# curl 192.168.94.100
real-server2

Health check:
Check that the ipvsadm service is running:
#!/usr/bin/bash
# if the ipvsadm service is not active, stop keepalived so the VIP fails over
systemctl status ipvsadm &>/dev/null
if [ $? -ne 0 ];then
    systemctl stop keepalived
fi
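As with the Nginx check earlier, this script only takes effect once it is saved under /etc/keepalived/, made executable, and referenced from a vrrp_script/track_script block in keepalived.conf. A minimal wiring sketch (the file name check_ipvsadm.sh is an assumption):
[root@lvs-keepalived-master ~]# vim /etc/keepalived/check_ipvsadm.sh   # paste the script above
[root@lvs-keepalived-master ~]# chmod a+x /etc/keepalived/check_ipvsadm.sh
# then add to keepalived.conf, exactly as in the Nginx example:
#   a vrrp_script block pointing at /etc/keepalived/check_ipvsadm.sh with interval 5,
#   plus track_script { check_ipvsadm } inside vrrp_instance VI_1
[root@lvs-keepalived-master ~]# systemctl restart keepalived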

4. MySQL + Keepalived

Keepalived + MySQL automatic failover

Project environment:
VIP    192.168.94.100
mysql1 192.168.94.132      keepalived-master
mysql2 192.168.94.133      keepalived-slave

I.   MySQL master-master replication (no shared storage; data kept on local disks)
II.  Install keepalived
III. Keepalived master/backup configuration files
IV.  MySQL status-check script /etc/keepalived/keepalived_check_mysql.sh
V.   Testing and diagnostics
Implementation steps:

I. MySQL master-master replication

Master-side configuration:
[root@mysql-keepalived-master ~]# vim /etc/my.cnf # add under [mysqld]
log-bin=mysql-bin #enable binary logging
log-bin-index=binlog.index
server-id=1 #set the server-id
# auto_increment_increment=2  # auto-increment step
# auto_increment_offset=2     # auto-increment starting offset (use a different offset on each master)
Use case: if both machines accepted insert number 3 at the same time, the auto-increment values would collide; so configure one machine to generate odd sequence numbers and the other even ones (see the sketch below).
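A hedged illustration of the effect (scratch database/table names are hypothetical):
[root@mysql-keepalived-master ~]# mysql -uroot -p'Duan@123' -e "
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE IF NOT EXISTS demo.t (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
INSERT INTO demo.t (v) VALUES (1),(2),(3);
SELECT id FROM demo.t;"
# with auto_increment_increment=2 and auto_increment_offset=1 the ids come out
# 1,3,5; with offset=2 they come out 2,4,6 -- so writes on the two masters
# can never allocate the same auto-increment value.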

[root@mysql-keepalived-master ~]# systemctl restart mysqld
[root@mysql-keepalived-master ~]# mysql -uroot -p"Duan@123"
mysql> grant all on *.* to "duan"@"192.168.94.%" identified by "Duan@123";
mysql> flush privileges;
mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 747

Slave-side configuration:
[root@mysql-keepalived-slave ~]# vim /etc/my.cnf
log-bin=mysql-bin 
log-bin-index=binlog.index
server-id=2
[root@mysql-keepalived-slave ~]# systemctl restart mysqld
[root@mysql-keepalived-slave ~]# mysql -uroot -p"Duan@123"
mysql> grant all on *.* to "duan"@"192.168.94.%" identified by "Duan@123";
mysql> \e
CHANGE MASTER TO
MASTER_HOST='192.168.94.132',
MASTER_USER='duan',
MASTER_PASSWORD='Duan@123',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=747;
    -> ;
mysql> start slave;
mysql> show slave status\G
Watch the Seconds_Behind_Master field in the show slave status\G output; a large value means high replication lag.
mysql> flush privileges;
mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 597
Back on the master, point it at the slave (the MASTER_HOST is the slave's IP, and the log file/position come from the slave's show master status output above):
mysql> \e
CHANGE MASTER TO
MASTER_HOST='192.168.94.133',
MASTER_USER='duan',
MASTER_PASSWORD='Duan@123',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=597;
    -> ;
mysql> start slave;
mysql> show slave status\G
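A quick health check of the replication threads, run on either node (the grep just narrows the output):
[root@mysql-keepalived-master ~]# mysql -uroot -p'Duan@123' -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'
# both threads should report Yes and Seconds_Behind_Master should be near 0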

II. Install keepalived (on both machines)

[root@mysql-keepalived-master ~]# yum -y install keepalived
[root@mysql-keepalived-slave ~]# yum -y install keepalived

III. Keepalived master/backup configuration files

Master configuration (192.168.94.132):
[root@mysql-keepalived-master ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@mysql-keepalived-master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id master
}
vrrp_script check_run {
   script "/etc/keepalived/keepalived_check_mysql.sh"
   interval 5
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 89
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.94.100/32
    }
    track_script {
        check_run
    }
}


Slave configuration (192.168.94.133):
[root@mysql-keepalived-slave ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@mysql-keepalived-slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id backup
}
vrrp_script check_run {
   script "/etc/keepalived/keepalived_check_mysql.sh"
   interval 5
}

vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens32
    virtual_router_id 89
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.94.100/32
    }
    track_script {
        check_run
    }
}

IV. MySQL status-check script

/etc/keepalived/keepalived_check_mysql.sh (the same script on both MySQL hosts)
Version 1, minimal:
[root@mysql-keepalived-master ~]# vim /etc/keepalived/keepalived_check_mysql.sh
#!/bin/bash
# if the local MySQL server stops answering, stop keepalived so the VIP fails over
/usr/bin/mysql -uroot -p'Duan@123' -e "show status" &>/dev/null 
if [ $? -ne 0 ] ;then 
#   service keepalived stop
    systemctl stop keepalived
fi
[root@mysql-keepalived-master ~]# chmod a+x /etc/keepalived/keepalived_check_mysql.sh
==========================================================================
Start keepalived on both sides.
Option 1:
[root@mysql-keepalived-master ~]# systemctl start keepalived
[root@mysql-keepalived-master ~]# systemctl enable keepalived
Option 2 (SysV init systems):
# /etc/init.d/keepalived start
# chkconfig --add keepalived
# chkconfig keepalived on

Test

[root@mysql-keepalived-master  ~]#  mysql -uduan -p"Duan@123" -h 192.168.94.100
mysql> create database duan;
mysql> use duan
mysql> create table t1(id int);
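To exercise the failover itself, a sketch (run the final command from any host on the segment; it assumes the duan account created during replication setup):
[root@mysql-keepalived-master ~]# systemctl stop mysqld          # simulate a MySQL failure
[root@mysql-keepalived-slave ~]# ip a | grep 192.168.94.100      # the VIP should now be here
mysql -uduan -p'Duan@123' -h 192.168.94.100 -e 'select @@hostname;'   # now answered by mysql2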

5. HAProxy Basics

HA: high availability
LB: load balancing
HAProxy is a high-performance load-balancing software package.

HAProxy is mainly a layer-7 load balancer, but it can also balance at layer 4.
Apache can also do layer-7 load balancing, but it is cumbersome, and in practice nobody uses it for that.
Load balancing is classified by OSI layer:
layer-7 load balancing uses the layer-7 HTTP protocol;
layer-4 load balancing uses TCP plus port numbers.

HAProxy overview

HAProxy is a high-performance load balancer. Because it focuses on load balancing alone, it does that one job better and more professionally than nginx.

HAProxy features

As one of today's popular load balancers, HAProxy naturally has its strong points. Its advantages over LVS, Nginx, and other load balancers include:

•Support for load balancing at both the TCP and HTTP layers, which makes its feature set very rich.
•Around eight load-balancing algorithms; in HTTP mode especially there are several very practical algorithms for all kinds of needs.
•Excellent performance: an event-driven connection-handling model with a single-process design (similar to Nginx) makes it very fast.
•An excellent built-in monitoring page for observing the system's current state in real time.
•Powerful ACL (access control) support, which is extremely convenient.

HAProxy algorithms:

1. roundrobin

Weighted round robin. When server processing times are evenly distributed, this is the most balanced and fairest algorithm. It is dynamic, meaning server weights can be adjusted at runtime; by design, however, a backend is limited to 4095 active servers.

2. static-rr

Weighted round robin like roundrobin, but static: adjusting a server's weight at runtime has no effect. On the other hand, it places no limit on the number of backend servers.

3. leastconn

New connection requests are dispatched to the backend server with the fewest connections.

4. source

Source-IP based. The request's source IP is hashed and the result divided by the backends' total weight to pick the server, so requests from the same client IP are always forwarded to the same backend server.

5. uri

Hashes part or all of the URI, divides by the servers' total weight, and forwards to the matching backend.

6. url_param

Forwards based on a parameter in the URI, guaranteeing that, as long as the number of backend servers does not change, the same user's requests are dispatched to the same machine.

7. hdr(<name>)

Forwards based on an HTTP header; if the header is absent, simple round robin is used instead. (Syntax sketches for several of these directives follow below.)
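Hedged syntax sketches for a few of these balance directives (written to a scratch file for reference, not loaded into the running haproxy):
cat > /tmp/balance-examples.cfg <<'EOF'
backend by_source
    balance source                  # same client IP -> same server
    server http1 192.168.94.134:80 check
backend by_uri
    balance uri                     # same URI -> same server
    server http1 192.168.94.134:80 check
backend by_header
    balance hdr(User-Agent)         # falls back to round robin if the header is missing
    server http1 192.168.94.134:80 check
EOF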

1. HAProxy for Layer-7 Load Balancing

Environment

IP               Hostname
192.168.94.132   ha-proxy-master
192.168.94.133   ha-proxy-slave
192.168.94.134   real-server1
192.168.94.135   real-server2

I. RS configuration

Set up the web servers and test all RS; install nginx on every machine:
[root@real-server1 ~]# yum install -y nginx
[root@real-server1 ~]# systemctl start nginx
[root@real-server1 ~]# echo "real-server1" >> /usr/share/nginx/html/index.html
# give each nginx server its own number, in order, so responses are easy to tell apart.

II. Configure HAProxy on the schedulers (run on both master and backup)

# nginx must not be running here, since HAProxy will bind port 80
[root@ha-proxy-master ~]# yum -y install haproxy
[root@ha-proxy-master ~]# cp -rf /etc/haproxy/haproxy.cfg{,.bak}
[root@ha-proxy-master ~]# sed -i -r '/^[ ]*#/d;/^$/d' /etc/haproxy/haproxy.cfg
[root@ha-proxy-master ~]# vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2 info
    pidfile     /var/run/haproxy.pid
    maxconn     4000   #lowest-priority maxconn setting
    user        haproxy
    group       haproxy
    daemon               #run haproxy in the background
    nbproc 1            #number of worker processes; set to the number of CPU cores
defaults
    mode                http  #working mode: tcp is layer 4, http is layer 7
    log                 global
    retries             3   #health check: after 3 failed connections the server is considered unavailable; works together with the check options below
    option              redispatch  #redirect to another healthy server when one becomes unavailable
    maxconn             4000  #middle-priority maxconn setting
    contimeout          5000  #timeout for haproxy connecting to a backend server, in milliseconds
    clitimeout          50000 #client timeout
    srvtimeout          50000 #backend server timeout
listen stats
    bind                *:81   #any port that is not already in use will do
    stats               enable
    stats uri           /haproxy  #browse to http://192.168.94.132:81/haproxy to see server status
    stats auth          duan:123  #user authentication (not effective with the elinks text browser)
frontend  web
    mode                 http  
    bind                 *:80   #IP and port to listen on
    option               httplog        #log format: http
    acl html url_reg  -i       \.html$  #1. ACL named html: matches URLs ending in .html
    use_backend httpservers if  html #2. if the html ACL matches, send the request to the httpservers backend
    default_backend      httpservers   #default server group
backend httpservers    #name must match the one used above
    balance              roundrobin  #load-balancing algorithm
    server  http1 192.168.94.134:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
    server  http2 192.168.94.135:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
Copy the configuration file to the slave:
[root@ha-proxy-master ~]# scp  /etc/haproxy/haproxy.cfg 192.168.94.133:/etc/haproxy/
[root@ha-proxy-master ~]# vim /etc/rsyslog.conf
local2.*                                                /var/log/haproxy.log
Start HAProxy on both machines and enable it at boot:
[root@ha-proxy-master ~]# systemctl start haproxy
[root@ha-proxy-master ~]# systemctl enable haproxy
[root@ha-proxy-master ~]# systemctl status haproxy
#check inter 2000   heartbeat-check interval (ms)
#maxconn            maximum number of connections
#check inter        check interval
#rise 2             2 consecutive successes mark the server as up
#fall 2             2 consecutive failures mark the server as down
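Before restarting after any edit, the configuration can be syntax-checked (a standard haproxy flag):
[root@ha-proxy-master ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid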
/etc/haproxy/haproxy.cfg
global                                                    #process-wide global parameters
    log                     127.0.0.1 local2 info  #log server
    pidfile                 /var/run/haproxy.pid  #pid file
    maxconn                 4000     #maximum connections
    user                    haproxy   #user
    group               haproxy      #group
    daemon          #run as a daemon in the background
    nbproc 1        #number of worker processes; set to the number of CPU cores
The defaults section provides default parameters for the other sections.
listen combines frontend and backend in one block.

frontend        the virtual service (Virtual Server)
backend         the real servers (Real Server)

One scheduler can serve several sites at once using frontend/backend pairs:
frontend1 backend1
frontend2 backend2
frontend3 backend3

III. Test

[root@real-server1 ~]# curl 192.168.94.132
real-server1
[root@real-server1 ~]# curl 192.168.94.132
real-server2
[root@real-server1 ~]# curl 192.168.94.133
real-server1
[root@real-server1 ~]# curl 192.168.94.133
real-server2

Browser access test
Master:

(screenshots of the master's HAProxy stats page omitted)

Backup:

(screenshot of the backup's HAProxy stats page omitted)

Key fields on the stats page

Queue
Cur: current number of queued requests
Max: maximum number of queued requests
Limit: queue size limit

Errors
Req: request errors
Conn: connection errors

Server list:
Status: up (backend alive) or down (backend dead)
LastChk: result and age of the most recent health check of the backend server
Wght (weight): server weight
If bind fails with an error, run:
setsebool -P haproxy_connect_any=1

Keepalived for Scheduler HA

Note: both the master and backup schedulers must be able to dispatch correctly on their own.
1. Install the software on the master/backup schedulers
[root@ha-proxy-master ~]# yum install -y keepalived
[root@ha-proxy-slave ~]# yum install -y keepalived
[root@ha-proxy-master ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@ha-proxy-master keepalived]# vim keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director1
}
vrrp_script check_haproxy {
   script "/etc/keepalived/check_haproxy_status.sh"
   interval 5
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.94.100/32
    }
    track_script {
        check_haproxy
    }
}
[root@ha-proxy-slave keepalived]# vim keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director2
}
vrrp_script check_haproxy {
   script "/etc/keepalived/check_haproxy_status.sh"
   interval 5
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    nopreempt
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.94.100/32
    }
    track_script {
        check_haproxy
    }
}

[root@ha-proxy-master ~]# vim /etc/keepalived/check_haproxy_status.sh
#!/bin/bash
# if the local HTTP service stops answering, stop keepalived so the VIP fails over
/usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ];then
#       /etc/init.d/keepalived stop
        systemctl stop keepalived
fi
[root@ha-proxy-master ~]# chmod a+x /etc/keepalived/check_haproxy_status.sh
[root@ha-proxy-master ~]# scp /etc/keepalived/check_haproxy_status.sh 192.168.94.133:/etc/keepalived
[root@ha-proxy-master keepalived]# systemctl restart keepalived
[root@ha-proxy-slave keepalived]# systemctl restart keepalived
Note: haproxy must be started before keepalived.
[root@ha-proxy-master ~]# vim /etc/rsyslog.conf 
# Provides UDP syslog reception  #haproxy ships its logs over UDP, so enable rsyslog's UDP listener
$ModLoad imudp
$UDPServerRun 514
Find the  #### RULES ####  section and add below it:
local2.*                       /var/log/haproxy.log
[root@ha-proxy-master ~]# systemctl restart rsyslog
[root@ha-proxy-master ~]# systemctl restart haproxy
[root@ha-proxy-master ~]# tail -f /var/log/haproxy.log 
Feb 15 18:08:31 localhost haproxy[6633]: 127.0.0.1:40666 [15/Feb/2020:18:08:31.820] httpserver httpservers/http1 0/0/0 235 -- 1/1/0/1/0 0/0
Feb 15 18:08:36 localhost haproxy[6930]: Proxy stats started.
Feb 15 18:08:36 localhost haproxy[6930]: Proxy httpserver started.
Feb 15 18:08:36 localhost haproxy[6930]: Proxy httpservers started.
Feb 15 18:08:36 localhost haproxy[6930]: 127.0.0.1:40692 [15/Feb/2020:18:08:36.822] httpserver httpservers/http1 0/0/1 235 -- 1/1/0/1/0 0/0
Feb 15 18:08:41 localhost haproxy[6931]: 127.0.0.1:40718 [15/Feb/2020:18:08:41.832] httpserver httpservers/http2 0/0/0 235 -- 1/1/0/1/0 0/0
Feb 15 18:08:46 localhost haproxy[6931]: 127.0.0.1:40744 [15/Feb/2020:18:08:46.829] httpserver httpservers/http1 0/1/1 235 -- 1/1/0/1/0 0/0

Access test

[root@client ~]# curl 192.168.94.100
real-server2
[root@client ~]# curl 192.168.94.100
real-server1

HAProxy for Layer-4 Load Balancing

Configuration file on both haproxy machines:
[root@ha-proxy-master ~]# cat /etc/haproxy/haproxy.cfg
Haproxy L4
=================================================================================
global
    log         127.0.0.1 local2
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    nbproc      1
defaults
    mode                    http
    log                     global
    option                  redispatch
    retries                 3
    maxconn                 4000
    contimeout              5000
    clitimeout              50000
    srvtimeout              50000
listen stats
    bind            *:81
    stats                       enable
    stats uri               /haproxy
    stats auth              qianfeng:123
frontend  web
    mode                    http
    bind                            *:80
    option                  httplog
    default_backend    httpservers
backend httpservers
    balance     roundrobin
    server  http1 192.168.94.134:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
    server  http2 192.168.94.135:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
listen mysql
    bind *:3306
    mode tcp
    balance roundrobin
    server mysql1 192.168.94.134:3306 weight 1  check inter 1s rise 2 fall 2
    server mysql2 192.168.94.135:3306 weight 1  check inter 1s rise 2 fall 2
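A hedged test of the TCP listener (this assumes mysqld is actually running on the two real servers and that an account such as duan exists there; neither is set up earlier in this lab):
[root@client ~]# mysql -uduan -p'Duan@123' -h 192.168.94.132 -P 3306 -e 'select @@hostname;'
# repeated runs should alternate between the two backend hostnames (roundrobin)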