MetalLB Debugging and Analysis


1 Environment

Cluster nodes

[root@master ~]# kubectl  get node -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
master   Ready    master   19h   v1.17.5   192.168.26.10   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.9
node1    Ready    <none>   18h   v1.17.5   192.168.26.11   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.9
node2    Ready    <none>   18h   v1.17.5   192.168.26.12   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.9
[root@master ~]#

Pod deployment

[root@master ~]# kubectl  get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
metallb-controller-75bf779d4f-585mp   1/1     Running   0          14s   10.244.104.8     node2    <none>           <none>
metallb-speaker-4cnnj                 1/1     Running   0          14s   192.168.26.12    node2    <none>           <none>
metallb-speaker-kkd5n                 1/1     Running   0          14s   192.168.26.11    node1    <none>           <none>
metallb-speaker-w8bs4                 1/1     Running   0          14s   192.168.26.10    master   <none>           <none>
my-nginx-f97c96f6d-dfnj9              1/1     Running   0          27s   10.244.166.131   node1    <none>           <none>

Testing the LB Service

my-nginx is a Service of type LoadBalancer; its assigned EXTERNAL-IP, 192.168.26.190, comes from the host network segment.

[root@master ~]# kubectl  get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1      <none>           443/TCP        19h
my-nginx     LoadBalancer   10.101.85.30   192.168.26.190   80:32366/TCP   17s
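
For reference, an allocation like this comes from a Layer 2 address pool. Below is a minimal sketch of the ConfigMap, assuming MetalLB 0.8's config format: the ConfigMap name default/metallb and the pool name default are confirmed by the logs in section 3, but the exact address range is an assumption (only .190 is confirmed).

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: metallb        # matches "configmap":"default/metallb" in the logs
  namespace: default
data:
  config: |
    address-pools:
    - name: default    # matches "pool":"default" in the speaker log
      protocol: layer2
      addresses:
      - 192.168.26.190-192.168.26.250   # assumed range; only .190 is confirmed
EOF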

2 Node Information

Master node

  • The eth0, eth1, and docker0 interfaces answer ARP normally.

  • kube-ipvs0 carries the NOARP flag, so it never answers ARP for the addresses bound to it (both points can be checked as shown below).
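
The LoadBalancer IP is bound to kube-ipvs0 on every host, but the NOARP flag keeps the kernel from answering for it, which is why MetalLB's speaker has to answer instead:

ip link show kube-ipvs0                          # flags include NOARP
ip addr show kube-ipvs0 | grep 192.168.26.190    # LB IP is bound on every node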

[root@master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:0e:4e:dd brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85378sec preferred_lft 85378sec
    inet6 fe80::8fb:7623:d2f6:25e4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 10:00:00:00:00:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.10/24 brd 192.168.26.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1200:ff:fe00:a0/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:df:3f:fc:54 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 42:d5:d5:cc:8d:d7 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 72:f1:c0:46:00:c2 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.139.35/32 brd 10.101.139.35 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.26.190/32 brd 192.168.26.190 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
7: calicebcde35cc6@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever

Node1

[root@node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:0e:4e:dd brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85306sec preferred_lft 85306sec
    inet6 fe80::7b7f:9e4b:166d:56cf/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 10:00:00:00:00:b1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.11/24 brd 192.168.26.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1200:ff:fe00:b1/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a9:ab:b7:d8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:d4:68:89:70:4f brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 0a:ed:a7:28:3c:5a brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.139.35/32 brd 10.101.139.35 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.26.190/32 brd 192.168.26.190 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
7: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.166.128/32 brd 10.244.166.128 scope global tunl0
       valid_lft forever preferred_lft forever

Node2

[root@node2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:0e:4e:dd brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85054sec preferred_lft 85054sec
    inet6 fe80::3b32:152f:273d:43c8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 10:00:00:00:00:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.12/24 brd 192.168.26.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1200:ff:fe00:b2/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:09:e0:d2:f0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8a:6f:d9:a9:99:93 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 12:42:77:11:42:72 brd ff:ff:ff:ff:ff:ff
    inet 10.101.139.35/32 brd 10.101.139.35 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.26.190/32 brd 192.168.26.190 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
7: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.104.0/32 brd 10.244.104.0 scope global tunl0
       valid_lft forever preferred_lft forever
8: cali5f2d86330cb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
9: cali26eb7e820f9@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever

Test node

  • The MAC address resolved for 192.168.26.190 is 10:00:00:00:00:b1, i.e. node1's eth1 interface (re-checked with a cache flush after the transcript below).
[root@out ~]# arp -an
? (192.168.26.11) at 10:00:00:00:00:b1 [ether] on eth1
? (10.0.2.3) at 52:54:00:12:35:03 [ether] on eth0
? (192.168.26.12) at 10:00:00:00:00:b2 [ether] on eth1
? (192.168.26.10) at 10:00:00:00:00:a0 [ether] on eth1
? (192.168.26.190) at 10:00:00:00:00:b1 [ether] on eth1
? (10.0.2.2) at 52:54:00:12:35:02 [ether] on eth0
[root@out ~]# curl 192.168.26.190
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
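
To rule out a stale neighbor-cache entry, flush the cache and probe the address directly; a sketch using iputils arping on the test node:

ip neigh flush dev eth1                 # drop cached ARP entries
arping -I eth1 -c 3 192.168.26.190      # ask again; the reply carries node1's MAC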

3 How It Works

MetalLB has two components, the controller and the speaker:

  • The controller assigns LoadBalancer IPs to Services.
  • The speaker announces those IPs over ARP, sending gratuitous ARP on assignment and answering ARP requests afterwards (a capture example follows this list).
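
The speaker's announcements can be watched on the wire; a sketch, run on any cluster node or on the test host (-e prints the MAC addresses in each ARP frame):

tcpdump -eni eth1 arp and host 192.168.26.190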

Controller

The key event is ipAllocated, where the controller assigns 192.168.26.190 to default/my-nginx:

{"caller":"service.go:98","event":"ipAllocated","ip":"192.168.26.190","msg":"IP address assigned by controller","service":"default/my-nginx","ts":"2020-05-22T02:17:11.742233189Z"}

[root@master ~]# kubectl  logs metallb-controller-75bf779d4f-585mp
{"branch":"HEAD","caller":"main.go:142","commit":"v0.8.1","msg":"MetalLB controller starting version 0.8.1 (commit v0.8.1, branch HEAD)","ts":"2020-05-22T02:17:11.577936238Z","version":"0.8.1"}
{"caller":"main.go:108","configmap":"default/metallb","event":"startUpdate","msg":"start of config update","ts":"2020-05-22T02:17:11.686448912Z"}
{"caller":"main.go:121","configmap":"default/metallb","event":"endUpdate","msg":"end of config update","ts":"2020-05-22T02:17:11.686475979Z"}
{"caller":"k8s.go:376","configmap":"default/metallb","event":"configLoaded","msg":"config (re)loaded","ts":"2020-05-22T02:17:11.68648444Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:11.686507792Z"}
{"caller":"service.go:33","event":"clearAssignment","msg":"not a LoadBalancer","reason":"notLoadBalancer","service":"default/kubernetes","ts":"2020-05-22T02:17:11.686521668Z"}
{"caller":"main.go:75","event":"noChange","msg":"service converged, no change","service":"default/kubernetes","ts":"2020-05-22T02:17:11.686549849Z"}
{"caller":"main.go:76","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:11.686559142Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.686570384Z"}
{"caller":"service.go:85","error":"controller not synced","msg":"controller not synced yet, cannot allocate IP; will retry after sync","op":"allocateIP","service":"default/my-nginx","ts":"2020-05-22T02:17:11.686579009Z"}
{"caller":"main.go:72","event":"endUpdate","msg":"end of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.686587521Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.686598889Z"}
{"caller":"service.go:33","event":"clearAssignment","msg":"not a LoadBalancer","reason":"notLoadBalancer","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.686606378Z"}
{"caller":"main.go:75","event":"noChange","msg":"service converged, no change","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.68662786Z"}
{"caller":"main.go:76","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.686634351Z"}
{"caller":"main.go:126","event":"stateSynced","msg":"controller synced, can allocate IPs now","ts":"2020-05-22T02:17:11.686645509Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.698513135Z"}
{"caller":"service.go:33","event":"clearAssignment","msg":"not a LoadBalancer","reason":"notLoadBalancer","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.698558483Z"}
{"caller":"main.go:75","event":"noChange","msg":"service converged, no change","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.698596972Z"}
{"caller":"main.go:76","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.698605272Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:11.698617575Z"}
{"caller":"service.go:33","event":"clearAssignment","msg":"not a LoadBalancer","reason":"notLoadBalancer","service":"default/kubernetes","ts":"2020-05-22T02:17:11.703655381Z"}
{"caller":"main.go:75","event":"noChange","msg":"service converged, no change","service":"default/kubernetes","ts":"2020-05-22T02:17:11.703710198Z"}
{"caller":"main.go:76","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:11.703726179Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.703745316Z"}
{"caller":"service.go:98","event":"ipAllocated","ip":"192.168.26.190","msg":"IP address assigned by controller","service":"default/my-nginx","ts":"2020-05-22T02:17:11.742233189Z"}

Speaker logs

The ARP reply tells us which MAC answers for .190, and therefore which node's speaker is announcing the address.

The nginx pod runs on node1 (192.168.26.11), and the ARP reply for .190 carries node1's eth1 MAC, so requests for the LB IP are delivered to node1.
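
The correlation is quick to verify from both ends:

# On the test node: which MAC answers for the LB IP?
arp -an | grep 192.168.26.190
# On node1: the eth1 MAC should be the same value
cat /sys/class/net/eth1/address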

{"caller":"main.go:340","event":"serviceAnnounced","ip":"192.168.26.190","msg":"service has IP, announcing","pool":"default","protocol":"layer2","service":"default/my-nginx","ts":"2020-05-22T02:17:11.74920593Z"}

[root@master ~]# kubectl  logs metallb-speaker-kkd5n
{"branch":"main","caller":"main.go:84","commit":"734ee674","msg":"MetalLB speaker starting (commit 734ee674, branch main)","ts":"2020-05-22T02:17:10.094496521Z","version":""}
{"caller":"main.go:105","msg":"Not starting fast dead node detection (MemberList), need ml-bindaddr / ml-labels / ml-namespace config","op":"startup","ts":"2020-05-22T02:17:10.094565059Z"}
{"caller":"announcer.go:103","event":"createARPResponder","interface":"eth0","msg":"created ARP responder for interface","ts":"2020-05-22T02:17:10.096927728Z"}
{"caller":"announcer.go:112","event":"createNDPResponder","interface":"eth0","msg":"created NDP responder for interface","ts":"2020-05-22T02:17:10.097172952Z"}
{"caller":"announcer.go:103","event":"createARPResponder","interface":"eth1","msg":"created ARP responder for interface","ts":"2020-05-22T02:17:10.097312492Z"}
{"caller":"announcer.go:112","event":"createNDPResponder","interface":"eth1","msg":"created NDP responder for interface","ts":"2020-05-22T02:17:10.097523494Z"}
{"caller":"announcer.go:103","event":"createARPResponder","interface":"docker0","msg":"created ARP responder for interface","ts":"2020-05-22T02:17:10.097732881Z"}
{"caller":"announcer.go:103","event":"createARPResponder","interface":"cali37e7c6d2053","msg":"created ARP responder for interface","ts":"2020-05-22T02:17:10.098019843Z"}
{"caller":"announcer.go:112","event":"createNDPResponder","interface":"cali37e7c6d2053","msg":"created NDP responder for interface","ts":"2020-05-22T02:17:10.098082182Z"}
{"caller":"main.go:383","configmap":"default/metallb","event":"startUpdate","msg":"start of config update","ts":"2020-05-22T02:17:10.234129838Z"}
{"caller":"main.go:407","configmap":"default/metallb","event":"endUpdate","msg":"end of config update","ts":"2020-05-22T02:17:10.234162307Z"}
{"caller":"k8s.go:402","configmap":"default/metallb","event":"configLoaded","msg":"config (re)loaded","ts":"2020-05-22T02:17:10.234171521Z"}
{"caller":"bgp_controller.go:285","event":"nodeLabelsChanged","msg":"Node labels changed, resyncing BGP peers","ts":"2020-05-22T02:17:10.234193311Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:10.234204251Z"}
{"caller":"main.go:268","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:10.234212547Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:10.234221455Z"}
{"caller":"main.go:277","event":"endUpdate","msg":"end of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:10.234227764Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:10.234235443Z"}
{"caller":"main.go:268","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:10.234243163Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:10.23948262Z"}
{"caller":"main.go:268","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:10.239523709Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:10.239534319Z"}
{"caller":"main.go:277","event":"endUpdate","msg":"end of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:10.239540994Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:10.239550003Z"}
{"caller":"main.go:268","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:10.239556402Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.749145884Z"}
{"caller":"main.go:340","event":"serviceAnnounced","ip":"192.168.26.190","msg":"service has IP, announcing","pool":"default","protocol":"layer2","service":"default/my-nginx","ts":"2020-05-22T02:17:11.74920593Z"}
{"caller":"main.go:343","event":"endUpdate","msg":"end of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.749256307Z"}

4 Failure Handling

Speaker pod failure

A speaker pod failure can be simulated by adding a nodeSelector that schedules the speaker away from its node (see the sketch after this list).

  • If the elected speaker pod dies, the service keeps working, because kube-proxy is still forwarding traffic.

  • Since the Service itself has not changed, the MAC address that .190 resolves to does not change either.
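
A sketch of the simulation; the label key/value is hypothetical, any selector that matches no node works:

# Evict the speaker pods by requiring a label no node carries:
kubectl patch ds metallb-speaker -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"metallb-test":"skip"}}}}}'
# Revert by dropping the selector again:
kubectl patch ds metallb-speaker --type json -p \
  '[{"op":"remove","path":"/spec/template/spec/nodeSelector"}]'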

Node failure

When the announcing node itself goes down, the ARP responses do change: another speaker takes over the address.

speaker/main.go:196 watchMemberListEvents
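
Failover is visible from the test node; a sketch that polls the neighbor entry while the announcing node is taken down:

watch -n 1 'ip neigh show 192.168.26.190'
# once a new speaker wins the election, the lladdr should switch
# from node1's eth1 MAC to another node's eth1 MAC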

Node eviction

MetalLB's leader election watches for node changes (watchMemberListEvents). When the current node is removed with kubectl delete node, taking its speaker pod with it, the remaining speaker pods see the change through their watch on the apiserver and re-announce the LB IP from a new node.
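
A sketch of the eviction test, reusing the speaker pod names from section 1:

kubectl delete node node1
# a surviving speaker should take over the announcement shortly:
kubectl logs metallb-speaker-w8bs4 | grep serviceAnnounced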

5 Code Analysis

https://github.com/huiwq1990/metallb/commits/hg

6 References

https://www.objectif-libre.com/en/blog/2019/06/11/metallb/

https://metallb.universe.tf/
