Enabling Calico BGP on OpenShift: A Performance Comparison with OVS

Choosing a Network Solution for OpenShift

  • As most people know, Kubernetes offers many choices of network plugin. The default is Flannel, but its performance is mediocre; Calico BGP is the most widely used option among internet companies because its performance is excellent.
  • OpenShift, however, officially supports only one network solution, OVS, and Red Hat maintains that OVS is the best fit for the OpenShift platform. But how does OVS actually perform? Because the OVS solution has to encapsulate and decapsulate every packet, performance inevitably suffers; in our tests the loss on a 10 GbE network was close to 50%. OVS is good enough for the vast majority of scenarios, yet it still falls well short of Calico BGP, which is almost lossless.
  • Fortunately, although Calico is not officially supported as an OpenShift network solution, it has thoughtfully been included in the OpenShift installation scripts, so the Calico network solution, in both IPIP and BGP modes, can be enabled with little effort.

Installation Steps

  1. In the Ansible hosts file, disable OpenShift's default SDN plugin and enable Calico
    /etc/ansible/hosts
[OSEv3:vars]
os_sdn_network_plugin_name=cni
openshift_use_calico=true
openshift_use_openshift_sdn=false
  2. Configure the Calico network
    openshift-ansible/roles/calico/defaults/main.yaml
calico_ip_autodetection_method: "first-found"
ip_pools:
  apiVersion: projectcalico.org/v3
  kind: IPPoolList
  items:
  - apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: default-ipv4-ippool
    spec:
      cidr: "{{ openshift_cluster_network_cidr }}"
      ipipMode: Always  # the default is Always, i.e. IPIP mode
      natOutgoing: true
      nodeSelector: "all()"

Configuration notes (the key to enabling the Calico BGP network correctly):
calico_ip_autodetection_method

calico_ip_autodetection_method: "interface=eth0"
# The default is "first-found"; if the interface names differ across hosts, a regular expression can be used
# calico_ip_autodetection_method: "interface=(eth0|eth1)"

spec.ipipMode

ipipMode: Always  # the default is Always (IPIP mode); set it to Never to enable BGP mode
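
If a cluster has already been installed with the default IPIP pool, the same field can also be changed afterwards by re-applying the pool definition with calicoctl. The following is only a minimal sketch, assuming a Calico v3 datastore reachable by calicoctl (e.g. configured via /etc/calico/calicoctl.cfg) and the default pool name used above; whether running workloads pick up the change seamlessly depends on the Calico version, so verify on a test cluster first.

$ calicoctl get ippool default-ipv4-ippool -o yaml   # inspect the current pool
$ cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.128.0.0/14    # assumed here; must match openshift_cluster_network_cidr
  ipipMode: Never        # Never = no encapsulation, plain BGP routing
  natOutgoing: true
  nodeSelector: all()
EOF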

Full configuration

---
cni_conf_dir: "/etc/cni/net.d/"
cni_bin_dir: "/opt/cni/bin/"

calico_url_policy_controller: "quay.io/calico/kube-controllers:v3.5.0"
calico_node_image: "quay.io/calico/node:v3.5.0"
calico_cni_image: "quay.io/calico/cni:v3.5.0"
calicoctl_image: "quay.io/calico/ctl:v3.5.0"
calico_upgrade_image: "quay.io/calico/upgrade:v1.0.5"
calico_ip_autodetection_method: "interface=eth0"
# The default is "first-found"; if the interface names differ across hosts, a regular expression can be used
# calico_ip_autodetection_method: "interface=(eth0|eth1)"
use_calico_etcd: False

# Configure the IP Pool(s) from which Pod IPs will be chosen.
ip_pools:
  apiVersion: projectcalico.org/v3
  kind: IPPoolList
  items:
  - apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: default-ipv4-ippool
    spec:
      cidr: "{{ openshift_cluster_network_cidr }}"
      ipipMode: Never # the default is Always (IPIP mode); Never enables BGP mode
      natOutgoing: true
      nodeSelector: "all()"

# Options below are only valid for legacy Calico v2 installations,
# and have been superseded by options above for Calico v3.
calico_ipv4pool_ipip: "always"
  3. Run the OpenShift installation playbooks as usual
$ ansible-playbook playbooks/prerequisites.yml
$ ansible-playbook playbooks/deploy_cluster.yml
  4. Inspect the network
[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:fc:dd:fc:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.3/24 brd 192.168.0.255 scope global dynamic eth0
       valid_lft 86262sec preferred_lft 86262sec
    inet6 fe80::248:584e:2626:2269/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:46:89:5d:d0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: cali252a8913dc3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
5: cali6d8bb449db0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
6: cali9efe4d704f6@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever

[root@master1 ~]# ip route
default via 192.168.0.1 dev eth0 proto static metric 100 
10.128.113.64/26 via 192.168.0.7 dev eth0 proto bird 
10.128.141.128/26 via 192.168.0.4 dev eth0 proto bird 
10.129.8.0/26 via 192.168.0.9 dev eth0 proto bird 
10.129.182.192/26 via 192.168.0.8 dev eth0 proto bird 
10.129.200.0/26 via 192.168.0.6 dev eth0 proto bird 
10.130.193.128/26 via 192.168.0.10 dev eth0 proto bird 
blackhole 10.131.9.192/26 proto bird 
10.131.9.206 dev cali252a8913dc3 scope link 
10.131.9.207 dev cali6d8bb449db0 scope link 
10.131.9.208 dev cali9efe4d704f6 scope link 
10.131.42.192/26 via 192.168.0.11 dev eth0 proto bird 
10.131.148.0/26 via 192.168.0.5 dev eth0 proto bird 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.3 metric 100 
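
In the ip route output above, the per-node /26 routes marked "proto bird" are the pod blocks advertised by the other nodes over BGP, while the blackhole /26 and the cali* host routes belong to this node's own block. Beyond the routing table, the Calico components and the BGP sessions can be checked directly; this is a minimal sketch (the namespace of the calico pods and the calicoctl datastore configuration depend on the installer version, so adjust as needed):

$ oc get pods --all-namespaces -o wide | grep calico   # calico-node should be Running on every node
$ calicoctl get ippool -o wide                         # confirm the pool CIDR and that IPIP is disabled
$ calicoctl node status                                # list BGP peers and their session state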

Note: to deploy route reflector (RR) mode, refer to the article "OpenShift支持Calico BGP 路由反射(RR)模式".

Network Performance Tests

The test environment consists of virtual machines on a public cloud platform.

Measuring Pod throughput with iperf

Test method and steps

  1. Deploy the iperf server
$ oc new-project test
$ oc run iperf-server --image=registry.dcs.cmbchina.cn:9443/tools/iperf3 -- -s
$ oc get pod -o wide
NAME                   READY     STATUS    RESTARTS   AGE       IP            NODE
iperf-server-1-r6z2x   1/1       Running   0          3m        10.131.2.76  node1
  2. Deploy the iperf client
$ oc run iperf-client --image=registry.dcs.cmbchina.cn:9443/tools/iperf3 -n project-e --command -- sleep 10000
$ oc get pod -o wide -n project-e | grep perf
NAME                   READY     STATUS    RESTARTS   AGE       IP            NODE
iperf-client-3-gtr2l   1/1       Running   0          2h        10.130.0.70   node2
qperf-server-1-xxmhz   1/1       Running   0          4h        10.128.2.59    node1
  3. Run iperf3 from the client Pod to measure throughput to the iperf3 server Pod
$ oc rsh iperf-client-3-gtr2l
  $ iperf3 -c 10.131.2.76 
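
The numbers below come from the default single-stream, 10-second run shown above. When results vary between runs, a longer multi-stream test and a reverse-direction run can give a steadier picture; this is just a suggested variation using standard iperf3 options, with the server IP taken from step 1:

  $ iperf3 -c 10.131.2.76 -P 4 -t 30    # 4 parallel streams for 30 seconds
  $ iperf3 -c 10.131.2.76 -R            # reverse mode: the server sends, the client receives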

Test results

OVS network test results

Connecting to host 10.130.0.51, port 5201
[  4] local 10.129.0.50 port 42924 connected to 10.130.0.51 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   282 MBytes  2.36 Gbits/sec  1406    638 KBytes       
[  4]   1.00-2.00   sec   326 MBytes  2.74 Gbits/sec  2484    797 KBytes       
[  4]   2.00-3.00   sec   324 MBytes  2.71 Gbits/sec  2136    692 KBytes       
[  4]   3.00-4.00   sec   314 MBytes  2.63 Gbits/sec  3907    744 KBytes       
[  4]   4.00-5.00   sec   323 MBytes  2.71 Gbits/sec  1539    811 KBytes       
[  4]   5.00-6.00   sec   323 MBytes  2.71 Gbits/sec  1996    685 KBytes       
[  4]   6.00-7.00   sec   318 MBytes  2.67 Gbits/sec  1085    891 KBytes       
[  4]   7.00-8.00   sec   286 MBytes  2.40 Gbits/sec  2534    744 KBytes       
[  4]   8.00-9.00   sec   336 MBytes  2.82 Gbits/sec  1856    793 KBytes       
[  4]   9.00-10.00  sec   256 MBytes  2.14 Gbits/sec  2256    452 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  3.01 GBytes  2.59 Gbits/sec  21199             sender
[  4]   0.00-10.00  sec  3.01 GBytes  2.59 Gbits/sec                  receiver

iperf Done.

Calico BGP network test results

Connecting to host 10.129.8.3, port 5201
[  4] local 10.130.193.131 port 46222 connected to 10.129.8.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   735 MBytes  6.17 Gbits/sec  204    655 KBytes       
[  4]   1.00-2.00   sec   914 MBytes  7.67 Gbits/sec  353    818 KBytes       
[  4]   2.00-3.00   sec  1.01 GBytes  8.70 Gbits/sec    0   1.44 MBytes       
[  4]   3.00-4.00   sec  1.02 GBytes  8.76 Gbits/sec  465   1.87 MBytes       
[  4]   4.00-5.00   sec  1.02 GBytes  8.79 Gbits/sec  184   2.20 MBytes       
[  4]   5.00-6.00   sec  1.03 GBytes  8.81 Gbits/sec  596   1.33 MBytes       
[  4]   6.00-7.00   sec  1012 MBytes  8.49 Gbits/sec   17   1.28 MBytes       
[  4]   7.00-8.00   sec  1.02 GBytes  8.79 Gbits/sec   46   1.31 MBytes       
[  4]   8.00-9.00   sec  1.01 GBytes  8.69 Gbits/sec   87   1.26 MBytes       
[  4]   9.00-10.00  sec  1.02 GBytes  8.73 Gbits/sec  133   1.21 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  9.73 GBytes  8.36 Gbits/sec  2085             sender
[  4]   0.00-10.00  sec  9.73 GBytes  8.36 Gbits/sec                  receiver

iperf Done.
Network solution    Data transferred (10 s)    Bandwidth
OVS                 3.01 GBytes                2.59 Gbits/sec
Calico BGP          9.73 GBytes                8.36 Gbits/sec
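
As a quick sanity check, the bandwidth column follows directly from the transfer column (iperf3's GBytes are base-1024 GiB):

3.01 GiB x 2^30 byte/GiB x 8 bit/byte / 10 s ≈ 2.59 Gbits/sec   (OVS)
9.73 GiB x 2^30 byte/GiB x 8 bit/byte / 10 s ≈ 8.36 Gbits/sec   (Calico BGP)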

Measuring network bandwidth and latency with qperf

Test method and steps

  1. Deploy the qperf server
$ oc run qperf-server --image=registry.dcs.cmbchina.cn:9443/tools/qperf
$ oc get pod -o wide
NAME                   READY     STATUS    RESTARTS   AGE       IP            NODE
qperf-server-1-xxmhz   1/1       Running   0          4h        10.128.2.59    node1
  2. Deploy the qperf client
$ oc run qperf-client --image=registry.dcs.cmbchina.cn:9443/tools/qperf --command -- sleep 10000
$ oc get pod -o wide -n project-e | grep qperf
NAME                   READY     STATUS    RESTARTS   AGE       IP            NODE
qperf-client-2-7jmvb   1/1       Running   0          4h        10.130.2.224   node2
qperf-server-1-xxmhz   1/1       Running   0          4h        10.128.2.59    node1
  3. Run qperf from the client Pod to measure bandwidth and latency to the qperf server Pod
$ oc rsh qperf-client-2-7jmvb
  $ qperf 10.128.2.59 -t 10 -oo msg_size:8:256K:*2 tcp_bw tcp_lat
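
The -oo msg_size:8:256K:*2 option makes qperf sweep the message size from 8 bytes up to 256 KiB, doubling at each step, which is why the results below contain 16 tcp_bw and 16 tcp_lat values (one per size); -t 10 runs each measurement for 10 seconds. For a quick spot check at a single message size, a run like the following also works (a sketch using qperf's -m/--msg_size option with an assumed 1 KiB size):

  $ qperf 10.128.2.59 -t 10 -m 1K tcp_bw tcp_lat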

Test results

OVS network qperf results

tcp_bw:
    bw  =  15 MB/sec
tcp_bw:
    bw  =  26.4 MB/sec
tcp_bw:
    bw  =  40.7 MB/sec
tcp_bw:
    bw  =  59.5 MB/sec
tcp_bw:
    bw  =  76.1 MB/sec
tcp_bw:
    bw  =  194 MB/sec
tcp_bw:
    bw  =  239 MB/sec
tcp_bw:
    bw  =  256 MB/sec
tcp_bw:
    bw  =  258 MB/sec
tcp_bw:
    bw  =  262 MB/sec
tcp_bw:
    bw  =  259 MB/sec
tcp_bw:
    bw  =  250 MB/sec
tcp_bw:
    bw  =  272 MB/sec
tcp_bw:
    bw  =  291 MB/sec
tcp_bw:
    bw  =  272 MB/sec
tcp_bw:
    bw  =  282 MB/sec
tcp_lat:
    latency  =  34.2 us
tcp_lat:
    latency  =  34.3 us
tcp_lat:
    latency  =  33.9 us
tcp_lat:
    latency  =  33.4 us
tcp_lat:
    latency  =  34.1 us
tcp_lat:
    latency  =  34.1 us
tcp_lat:
    latency  =  34.2 us
tcp_lat:
    latency  =  34.8 us
tcp_lat:
    latency  =  46.3 us
tcp_lat:
    latency  =  56 us
tcp_lat:
    latency  =  86.5 us
tcp_lat:
    latency  =  133 us
tcp_lat:
    latency  =  219 us
tcp_lat:
    latency  =  435 us
tcp_lat:
    latency  =  733 us
tcp_lat:
    latency  =  1.27 ms

Calico BGP network qperf results

tcp_bw:
    bw  =  17 MB/sec
tcp_bw:
    bw  =  32.1 MB/sec
tcp_bw:
    bw  =  39.4 MB/sec
tcp_bw:
    bw  =  81.7 MB/sec
tcp_bw:
    bw  =  141 MB/sec
tcp_bw:
    bw  =  297 MB/sec
tcp_bw:
    bw  =  703 MB/sec
tcp_bw:
    bw  =  790 MB/sec
tcp_bw:
    bw  =  845 MB/sec
tcp_bw:
    bw  =  708 MB/sec
tcp_bw:
    bw  =  830 MB/sec
tcp_bw:
    bw  =  884 MB/sec
tcp_bw:
    bw  =  768 MB/sec
tcp_bw:
    bw  =  787 MB/sec
tcp_bw:
    bw  =  749 MB/sec
tcp_bw:
    bw  =  780 MB/sec
tcp_lat:
    latency  =  95.8 us
tcp_lat:
    latency  =  71.5 us
tcp_lat:
    latency  =  69.1 us
tcp_lat:
    latency  =  69.6 us
tcp_lat:
    latency  =  72.7 us
tcp_lat:
    latency  =  84 us
tcp_lat:
    latency  =  93.3 us
tcp_lat:
    latency  =  86.3 us
tcp_lat:
    latency  =  145 us
tcp_lat:
    latency  =  139 us
tcp_lat:
    latency  =  158 us
tcp_lat:
    latency  =  171 us
tcp_lat:
    latency  =  198 us
tcp_lat:
    latency  =  459 us
tcp_lat:
    latency  =  593 us
tcp_lat:
    latency  =  881 us
msg_size     OVS tcp_bw     Calico BGP tcp_bw    OVS tcp_lat    Calico BGP tcp_lat
8 bytes      15 MB/sec      17 MB/sec            34.2 us        95.8 us
16 bytes     26.4 MB/sec    32.1 MB/sec          34.3 us        71.5 us
32 bytes     40.7 MB/sec    39.4 MB/sec          33.9 us        69.1 us
64 bytes     59.5 MB/sec    81.7 MB/sec          33.4 us        69.6 us
128 bytes    76.1 MB/sec    141 MB/sec           34.1 us        72.7 us
256 bytes    194 MB/sec     297 MB/sec           34.1 us        84 us
512 bytes    239 MB/sec     703 MB/sec           34.2 us        93.3 us
1 KiB        256 MB/sec     790 MB/sec           34.8 us        86.3 us
2 KiB        258 MB/sec     845 MB/sec           46.3 us        145 us
4 KiB        262 MB/sec     708 MB/sec           56 us          139 us
8 KiB        259 MB/sec     830 MB/sec           86.5 us        158 us
16 KiB       250 MB/sec     884 MB/sec           133 us         171 us
32 KiB       272 MB/sec     768 MB/sec           219 us         198 us
64 KiB       291 MB/sec     787 MB/sec           435 us         459 us
128 KiB      272 MB/sec     749 MB/sec           733 us         593 us
256 KiB      282 MB/sec     780 MB/sec           1.27 ms        881 us

Summary of Results

The test data shows that for small messages Calico BGP has no clear advantage, and its latency is actually higher than OVS's; for large messages, however, the Calico BGP solution clearly outperforms the OVS solution, delivering roughly three times the throughput in the iperf test.
