Why Redis?
Add a cache layer to improve query performance.
Plan
- Use Redis as the caching tool
- For high availability, Redis needs master/slave replication; use a Redis sharded cluster
- Add caching to the business logic
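The last item above (adding a cache in the business logic) is typically done with the cache-aside pattern: read the cache first, and on a miss load from the database and populate the cache. A minimal Python sketch, using plain dicts to stand in for Redis and the database (the names `cache`, `db`, and `get_user` are illustrative, not from these notes):

```python
cache = {}                             # stands in for Redis
db = {1: {"id": 1, "name": "alice"}}   # stands in for the real database

def get_user(user_id):
    key = f"user:{user_id}"
    value = cache.get(key)       # 1. try the cache first
    if value is None:
        value = db.get(user_id)  # 2. cache miss: query the database
        if value is not None:
            cache[key] = value   # 3. populate the cache for later reads
    return value

print(get_user(1))  # first call loads from db and fills the cache
print(get_user(1))  # second call is served from the cache
```

In a real system the dict would be replaced by a Redis client call with a TTL, so stale entries expire on their own.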
Using Redis as a cache
Install and configure Redis
- gcc is required: yum install gcc-c++
- Download Redis from the official site
$ wget http://download.redis.io/releases/redis-4.0.2.tar.gz
$ tar xzf redis-4.0.2.tar.gz
$ cd redis-4.0.2
$ make
# Install to a dedicated directory, e.g. /usr/local/redis
$ cd /usr/local/redis-4.0.2
$ make PREFIX=/usr/local/redis install
- Copy redis.conf
$ cp /usr/local/redis-4.0.2/redis.conf /usr/local/redis/bin
Starting Redis
- Foreground start
bin/redis-server
- Background start
Set daemonize yes in redis.conf, then:
./bin/redis-server ./redis.conf
Redis cluster
How the cluster works
Architecture
- All Redis nodes are interconnected (PING-PONG mechanism) and use a binary protocol internally to optimize transfer speed and bandwidth
- A node is only marked as failed when more than half of the nodes in the cluster detect the failure
- Clients connect directly to Redis nodes with no proxy layer in between; a client does not need to connect to every node in the cluster, any single reachable node is enough
- redis-cluster maps all physical nodes onto slots [0-16383]
A Redis cluster has 16384 built-in hash slots. To place a key-value pair, Redis first runs the CRC16 algorithm over the key and takes the result modulo 16384, so every key maps to a hash slot numbered 0-16383. Redis then distributes the hash slots roughly evenly across the nodes.
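The slot computation described above can be reproduced directly: CRC16 (the XModem variant, which is what Redis Cluster uses) of the key, modulo 16384. A small Python sketch (the function names `crc16` and `key_slot` are mine):

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot for a key, matching CLUSTER KEYSLOT."""
    return crc16(key.encode()) % 16384

print(key_slot("foo"))  # 12182, same as CLUSTER KEYSLOT foo
```

The real implementation additionally honors {hash tags} (when the key contains braces, only the text between the first { and } is hashed, so related keys can be forced into the same slot); that refinement is omitted here.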
redis-cluster voting: fault tolerance
- The voting process involves all masters in the cluster. If more than half of the masters fail to communicate with a given master for longer than cluster-node-timeout, that master is considered down.
- When does the whole cluster become unavailable (cluster_state:fail)?
a: If any master goes down and it has no slave, the cluster enters the fail state. Equivalently, the cluster fails whenever the slot mapping [0-16383] is incomplete. (Note: redis-3.0.0-rc1 added the cluster-require-full-coverage option; setting it to no lets the cluster keep serving the slots that are still covered after a partial failure.)
b: If more than half of the masters go down, the cluster enters the fail state regardless of whether they have slaves.
Note: while the cluster is unavailable, every operation against it fails with (error) CLUSTERDOWN The cluster is down.
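The two fail conditions above can be summarized as a small predicate. This is an illustrative Python sketch of the decision logic, not actual Redis code; the function and parameter names are mine:

```python
def cluster_is_down(total_masters, alive_masters, covered_slots,
                    require_full_coverage=True):
    """Illustrates when cluster_state becomes fail (not real Redis code)."""
    # b) more than half of the masters are down -> fail, slaves or not
    if total_masters - alive_masters > total_masters // 2:
        return True
    # a) slot coverage is incomplete (a master died with no slave to
    #    take over) and full coverage is required -> fail
    if require_full_coverage and covered_slots < 16384:
        return True
    return False

print(cluster_is_down(3, 1, 16384))  # True: 2 of 3 masters down
print(cluster_is_down(3, 2, 16384))  # False: 1 master down, slots covered
```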
Ruby environment
- Install Ruby
yum install ruby
yum install rubygems    # package manager
Creating the cluster
- Create a redis-cluster directory under /usr/local, and inside it the directories 7001, 7002, ..., 7006
- Copy the files from redis/bin into each 700* directory
- Edit the redis.conf in each 700* directory:
port 700*
cluster-enabled yes
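Beyond port and cluster-enabled, a working per-node redis.conf for this setup usually also sets the following (values shown for node 7001; they are typical defaults, not taken from these notes):

```conf
port 7001
# run in the background
daemonize yes
# start in cluster mode
cluster-enabled yes
# auto-managed cluster state file; must be unique per node
cluster-config-file nodes-7001.conf
# milliseconds before a node is considered failing
cluster-node-timeout 15000
```

cluster-config-file in particular must differ per node when all six instances run on one machine, or they will clobber each other's state.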
- Start every Redis instance with a script
Create a startall.sh script under redis-cluster/ to start all the instances:
#!/bin/sh
for dir in 7001 7002 7003 7004 7005 7006; do
  cd "$dir"
  ./redis-server redis.conf
  cd ..
done
- Run the cluster creation command
[root@localhost redis-cluster]# ./redis-trib.rb create --replicas 1 192.168.176.101:7001 192.168.176.101:7002 192.168.176.101:7003 192.168.176.101:7004 192.168.176.101:7005 192.168.176.101:7006
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.176.101:7001
192.168.176.101:7002
192.168.176.101:7003
Adding replica 192.168.176.101:7004 to 192.168.176.101:7001
Adding replica 192.168.176.101:7005 to 192.168.176.101:7002
Adding replica 192.168.176.101:7006 to 192.168.176.101:7003
M: 5f51d88cef55e85d3c85b7a92a9fccb7d775c095 192.168.176.101:7001
slots:0-5460 (5461 slots) master
M: e0d56c65a56d3456908f0eab6ddeec38dc1f3dd0 192.168.176.101:7002
slots:5461-10922 (5462 slots) master
M: c3ae4999e6c3b7f6f984f4efea483fddb1ba7a36 192.168.176.101:7003
slots:10923-16383 (5461 slots) master
S: 84250b962fa1174555a771d7349e98b607d65439 192.168.176.101:7004
replicates 5f51d88cef55e85d3c85b7a92a9fccb7d775c095
S: 23302609be589bf1c89865599754e916246083e2 192.168.176.101:7005
replicates e0d56c65a56d3456908f0eab6ddeec38dc1f3dd0
S: c049a2d2d70a39e24e3137e5eba30d8a9f17c5e7 192.168.176.101:7006
replicates c3ae4999e6c3b7f6f984f4efea483fddb1ba7a36
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 192.168.176.101:7001)
M: 5f51d88cef55e85d3c85b7a92a9fccb7d775c095 192.168.176.101:7001
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: c049a2d2d70a39e24e3137e5eba30d8a9f17c5e7 192.168.176.101:7006
slots: (0 slots) slave
replicates c3ae4999e6c3b7f6f984f4efea483fddb1ba7a36
S: 84250b962fa1174555a771d7349e98b607d65439 192.168.176.101:7004
slots: (0 slots) slave
replicates 5f51d88cef55e85d3c85b7a92a9fccb7d775c095
M: c3ae4999e6c3b7f6f984f4efea483fddb1ba7a36 192.168.176.101:7003
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 23302609be589bf1c89865599754e916246083e2 192.168.176.101:7005
slots: (0 slots) slave
replicates e0d56c65a56d3456908f0eab6ddeec38dc1f3dd0
M: e0d56c65a56d3456908f0eab6ddeec38dc1f3dd0 192.168.176.101:7002
slots:5461-10922 (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
- Shutting down the cluster
shutdown.sh:
for port in 7001 7002 7003 7004 7005 7006; do
  redis-cli -p "$port" shutdown
done
Testing the cluster
Pass the -c flag so the client follows redirections between cluster nodes:
[root@localhost redis-cluster]# 7001/redis-cli -h 192.168.176.101 -p 7003 -c
Redis clients
Problems encountered while configuring the cluster
- CentOS 7 ships Ruby 2.0
The yum repo is too old; installing the redis gem (4.x) later requires Ruby >= 2.2.2
gem install redis
ERROR: Error installing redis:
redis requires Ruby version >= 2.2.2.
---------------------------------------
Solution: use the RVM version manager
1. Install RVM
gpg2 --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3
curl -L get.rvm.io | bash -s stable
find / -name rvm -print
source /usr/local/rvm/scripts/rvm
2. List the Ruby versions RVM knows about
rvm list known
3. Install a Ruby version
rvm install 2.4.1
4. Switch to that Ruby
rvm use 2.4.1
5. Set it as the default
rvm use 2.4.1 --default
- [ERR] during cluster creation
>>> Creating cluster
[ERR] Node 192.168.176.101:7001 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
Cause: dump.rdb is generated automatically by the Redis server. By default, Redis periodically walks the dataset and writes an in-memory snapshot to a file named dump.rdb; this persistence mechanism is called a SNAPSHOT. With snapshots enabled, if the server crashes, Redis loads dump.rdb on the next start and restores the dataset to the state of the last snapshot, so the nodes are no longer empty.
Solution:
1. Delete the local aof, rdb, and nodes.conf files under each node's directory;
2. Restart Redis and rerun the script:
./redis-trib.rb create --replicas 1 192.168.176.101:7001 192.168.176.101:7002 192.168.176.101:7003 192.168.176.101:7004 192.168.176.101:7005 192.168.176.101:7006