An Introduction to Raft, a Distributed Consensus Protocol

I. Background

If you have built a large distributed system, chances are you have used ZooKeeper. It implements the ZAB algorithm and is commonly used for failover leader election, as a configuration center between upstream and downstream microservice servers, and in similar scenarios. But ZAB and Paxos share a drawback: they are hard to understand. The Paxos paper is notoriously complex, and very few engineers truly understand it. [Zhihu: What advantages does Raft have over Paxos, and how do their use cases differ?]

II. How Raft Works

[Raft homepage]
[Raft paper]
[live demo]

1. Leader Election

Like ZooKeeper, Raft typically runs on 3 or 5 nodes, which makes it easy to determine a majority during elections.
Each node is in one of three states: follower, candidate, or leader.
State transitions
Raft uses two timeouts:
1) Election timeout — the timeout for a follower to become a candidate, chosen randomly between 150 ms and 300 ms. When a node's election timer fires, it starts a new election term (terms increase monotonically; a higher term is newer) and requests votes from the other nodes, counting the vote it casts for itself. If it receives a majority of the votes, it becomes the leader.
2) Heartbeat timeout — once a node becomes the leader, it sends AppendEntries messages to the other nodes at this interval.
If the leader goes down in production, the remaining nodes rerun the election flow described in 1) and 2), which keeps the cluster highly available.
Special case
If the cluster is left with an even number of nodes and two candidates receive the same number of votes, a new term of election begins, and this repeats until some node obtains a majority (the randomized election timeout guarantees this eventually happens).
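The election rules above can be sketched in a few lines. This is a simplified model, not a real implementation: `peer_votes_granted` is a hypothetical stand-in for the RequestVote RPC replies.

```python
import random

FOLLOWER, CANDIDATE, LEADER = "follower", "candidate", "leader"

def random_election_timeout_ms():
    # Uniform in [150, 300) ms, as in the Raft paper, so that two
    # followers rarely time out at the same instant.
    return random.uniform(150, 300)

def start_election(current_term, cluster_size, peer_votes_granted):
    # A follower whose election timer fired becomes a candidate,
    # starts a new (higher) term, and votes for itself.
    term = current_term + 1
    votes = 1 + peer_votes_granted    # own vote + votes granted by peers
    if votes > cluster_size // 2:     # strict majority wins
        return LEADER, term
    return CANDIDATE, term            # e.g. a split vote: retry after a new random timeout
```

For example, `start_election(3, 5, 2)` yields `("leader", 4)`: three of five votes is a majority. In a 4-node cluster where two candidates each collect 2 votes, neither exceeds `4 // 2`, so both stay candidates and a new randomized timeout breaks the tie.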

2. Log Replication: keeping data consistent and available

The client sends a write request to the leader.
The leader propagates the change to the follower nodes via AppendEntries, piggybacked on heartbeats.
Once a majority of the followers have acknowledged the change, the leader confirms it to the client.
On the next heartbeat, the leader tells the followers to apply the change, after which the data is consistent across the cluster.
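The commit rule in these steps can be sketched as follows. This is a toy model under assumptions: `reachable` is a hypothetical flag marking which followers currently respond, and logs are plain Python lists.

```python
def replicate(entry, leader_log, follower_logs, reachable):
    # The leader appends the entry locally, then sends it to followers
    # via AppendEntries. It may confirm to the client only once a
    # majority of the whole cluster (leader included) stores the entry.
    leader_log.append(entry)
    acks = 1  # the leader counts toward the majority
    for log, up in zip(follower_logs, reachable):
        if up:
            log.append(entry)
            acks += 1
    cluster_size = 1 + len(follower_logs)
    return acks > cluster_size // 2  # True -> safe to answer the client
```

In a 5-node cluster, reaching 2 of 4 followers gives 3 acks (a majority), so the write commits; reaching only 1 follower gives 2 acks and the write must not be confirmed.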
Special case
If the network between nodes partitions badly, there may be multiple leaders at once, each with a different term.
Since committing a change requires a majority, only the partition containing a majority of the nodes can commit successfully.
When the network recovers, the nodes in the minority partition sync from the majority partition, so the data across the whole cluster remains consistent.

3. Membership Changes: resizing a Raft cluster

This is not covered in the live demo, but it is described in the paper.
In practice you may need to replace existing machines with new ones, or enlarge the Raft cluster for better resilience. The authors propose joint consensus as the solution, which guarantees that the transition is seamless.

III. Applications in Industry

  1. [MySQL three-node enterprise edition]

    It uses a distributed consensus protocol (Raft) to guarantee the reliability and atomicity of state transitions across the nodes.
  2. [RethinkDB: pushes JSON to your apps in realtime]

How is cluster configuration propagated?
Updating the state of a cluster is a surprisingly difficult problem in distributed systems. At any given point, different (and potentially conflicting) configurations can be selected on different sides of a netsplit, different configurations can reach different nodes in the cluster at unpredictable times, and so on.
RethinkDB uses the Raft algorithm to store and propagate cluster configuration in most cases, although in some situations it uses semilattices, versioned with internal timestamps. This architecture turns out to have sufficient mathematical properties to address all the issues mentioned above (this result has been known in distributed systems research for quite a while).

  3. [etcd: A distributed, reliable key-value store for the most critical data of a distributed system]

What is failure tolerance?
An etcd cluster operates so long as a member quorum can be established. If quorum is lost through transient network failures (e.g., partitions), etcd automatically and safely resumes once the network recovers and restores quorum; Raft enforces cluster consistency. For power loss, etcd persists the Raft log to disk; etcd replays the log to the point of failure and resumes cluster participation. For permanent hardware failure, the node may be removed from the cluster through runtime reconfiguration.
It is recommended to have an odd number of members in a cluster. An odd-sized cluster tolerates the same number of failures as the next-larger even-sized cluster, but with fewer nodes. The difference can be seen by comparing even- and odd-sized clusters:

(table from the etcd FAQ: cluster size vs. fault tolerance — odd-sized clusters have the advantage)

Adding a member to bring the size of cluster up to an even number doesn't buy additional fault tolerance. Likewise, during a network partition, an odd number of members guarantees that there will always be a majority partition that can continue to operate and be the source of truth when the partition ends.
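The arithmetic behind that table is simple enough to check in two lines (a sketch of the standard quorum rule):

```python
def fault_tolerance(cluster_size):
    # Quorum is floor(n/2) + 1; the cluster survives n - quorum failures.
    quorum = cluster_size // 2 + 1
    return cluster_size - quorum
```

Clusters of 3 and 4 nodes both tolerate 1 failure, and clusters of 5 and 6 both tolerate 2, which is why growing to an even size buys no extra fault tolerance.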
