ELKB 5.2.2 Cluster Deployment, Configuration and Optimization: The Definitive Guide

I have worked with ELK 1.4, 2.0, 2.4, 5.0 and 5.2 over the years. Honestly, the earlier versions never left much of an impression; only with 5.2 did things start to click, which shows how hard it is to truly understand a system. This document tries to be fairly detailed, even though lately I prefer writing concisely (not sure whether that is a good habit). Enough rambling, on to the main text. (Note: the configuration below is the pre-optimization configuration. It is fine for normal use, but needs tuning under heavy load.)

Notes:

This is a major version upgrade with many modifications. The significant deployment changes are:

1. filebeat now outputs directly to Kafka, and drops unnecessary fields such as the beat.* ones
2. The elasticsearch cluster layout is optimized: 3 master nodes and 6 data nodes
3. The logstash filter adds urldecode so Chinese characters in the url, referrer and agent fields display correctly
4. The logstash filter adds geoip to resolve client IPs to regions and cities
5. logstash mutate rewrites strings and removes unnecessary fields such as the kafka.* ones
6. The elasticsearch head plugin now requires a separate node.js deployment; it can no longer be bundled in as before
7. The nginx log gains the request parameters and the request method

I. Architecture

Candidate architectures:

filebeat--elasticsearch--kibana

filebeat--logstash--kafka--logstash--elasticsearch--kibana

filebeat--kafka--logstash--elasticsearch--kibana

Since filebeat 5.2.2 supports many outputs (logstash, elasticsearch, kafka, redis, syslog, file, etc.), to make the best use of resources while supporting high concurrency we chose:

filebeat(18)--kafka(3)--logstash(3)--elasticsearch(6)--kibana(3)--nginx load balancing

In total: 3 physical machines and 12 VMs, all running CentOS 6.8. The layout:

Server 1 (192.168.188.186)

kafka1           32G RAM   700G disk   4 CPU
logstash         8G RAM    100G disk   4 CPU
elasticsearch1   40G RAM   1.4T disk   8 CPU
elasticsearch2   40G RAM   1.4T disk   8 CPU

Server 2 (192.168.188.187)

kafka2           32G RAM   700G disk   4 CPU
logstash         8G RAM    100G disk   4 CPU
elasticsearch3   40G RAM   1.4T disk   8 CPU
elasticsearch4   40G RAM   1.4T disk   8 CPU

Server 3 (192.168.188.188)

kafka3           32G RAM   700G disk   4 CPU
logstash         8G RAM    100G disk   4 CPU
elasticsearch5   40G RAM   1.4T disk   8 CPU
elasticsearch6   40G RAM   1.4T disk   8 CPU

Disk partitioning

Logstash (100G):       SWAP 8G, /boot 200M, rest /
Kafka (700G):          SWAP 8G, /boot 200M, / 30G, rest /data
Elasticsearch (1.4T):  SWAP 8G, /boot 200M, / 30G, rest /data

IP allocation

Elasticsearch1-6   192.168.188.191-196
kibana1-3          192.168.188.191/193/195
kafka1-3           192.168.188.237-239
logstash1-3        192.168.188.197/198/240

II. Environment Preparation

yum -y remove java-1.6.0-openjdk
yum -y remove java-1.7.0-openjdk
yum -y remove perl-*
yum -y remove sssd-*
yum -y install java-1.8.0-openjdk
java -version
yum update
reboot

Set up /etc/hosts entries (kafka needs them):

cat /etc/hosts

192.168.188.191   ES191 (master and data)
192.168.188.192   ES192 (data)
192.168.188.193   ES193 (master and data)
192.168.188.194   ES194 (data)
192.168.188.195   ES195 (master and data)
192.168.188.196   ES196 (data)
192.168.188.237   kafka237
192.168.188.238   kafka238
192.168.188.239   kafka239
192.168.188.197   logstash197
192.168.188.198   logstash198
192.168.188.240   logstash240

III. Deploying the elasticsearch Cluster

mkdir /data/esnginx
mkdir /data/eslog
rpm -ivh /srv/elasticsearch-5.2.2.rpm
chkconfig --add elasticsearch
chkconfig postfix off
rpm -ivh /srv/kibana-5.2.2-x86_64.rpm
chown elasticsearch:elasticsearch /data/eslog -R
chown elasticsearch:elasticsearch /data/esnginx -R

Configuration file (3 master + 6 data):

[root@ES191 elasticsearch]# cat elasticsearch.yml|grep -Ev '^#|^$'

cluster.name: nginxlog
node.name: ES191
node.master: true
node.data: true
node.attr.rack: r1
path.data: /data/esnginx
path.logs: /data/eslog
bootstrap.memory_lock: true
network.host: 192.168.188.191
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.188.191","192.168.188.192","192.168.188.193","192.168.188.194","192.168.188.195","192.168.188.196"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 5
gateway.recover_after_time: 5m
gateway.expected_nodes: 6
cluster.routing.allocation.same_shard.host: true
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
indices.recovery.max_bytes_per_sec: 30mb
http.cors.enabled: true
http.cors.allow-origin: "*"
bootstrap.system_call_filter: false  # needed on old kernels (seccomp requires 3.5+); CentOS 7's 3.10 kernel does not need this

Pay special attention to:

/etc/security/limits.conf

elasticsearch  soft  memlock  unlimited
elasticsearch  hard  memlock  unlimited
elasticsearch  soft  nofile   65536
elasticsearch  hard  nofile   131072
elasticsearch  soft  nproc    2048
elasticsearch  hard  nproc    4096

/etc/elasticsearch/jvm.options

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms20g
-Xmx20g

Start the cluster

service elasticsearch start

Health check

http://192.168.188.191:9200/_cluster/health?pretty=true

{
  "cluster_name" : "nginxlog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 6,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
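To confirm on the running nodes that bootstrap.memory_lock and the heap settings actually took effect, you can query the node info and cat APIs (a quick check, here against the first node):

curl 'http://192.168.188.191:9200/_nodes?filter_path=**.mlockall&pretty'
curl 'http://192.168.188.191:9200/_cat/nodes?v&h=name,heap.max'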

The elasticsearch-head plugin

http://192.168.188.215:9100/

Point it at any of the nodes above, e.g. 192.168.188.191:9200.

Shard settings

The official recommendation is to set this at index-creation time:

curl -XPUT 'http://192.168.188.193:9200/_all/_settings?preserve_existing=true' -d '{
  "index.number_of_replicas" : "1",
  "index.number_of_shards" : "6"
}'

This did not take effect. It later turned out that the shard count can be specified when the index template is created; for now we stay with the defaults of 5 shards and 1 replica. A template-based sketch follows.
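If 6 shards and 1 replica are wanted later, one way is to set them in an index template so they apply to every newly created index; a minimal sketch (the template name nginx_shards is hypothetical; the pattern matches the filebeat-* indices used below):

curl -XPUT 'http://192.168.188.193:9200/_template/nginx_shards' -d '{
  "template": "filebeat-*",
  "settings": {
    "index.number_of_shards": 6,
    "index.number_of_replicas": 1
  }
}'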

Other errors (for reference only; the optimization section has the proper approach)

bootstrap.system_call_filter: false   # for "system call filters failed to install",
see https://www.elastic.co/guide/en/elasticsearch/reference/current/system-call-filter-check.html

[WARN ][o.e.b.JNANatives ] unable to install syscall filter:

java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in

IV. Deploying the Kafka Cluster

1. The zookeeper cluster

wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz

tar zxvf zookeeper-3.4.10.tar.gz -C /usr/local/
ln -s /usr/local/zookeeper-3.4.10/ /usr/local/zookeeper
mkdir -p /data/zookeeper/data/
vim /usr/local/zookeeper/conf/zoo.cfg

tickTime=2000
initLimit=5
syncLimit=2
dataDir=/data/zookeeper/data
clientPort=2181
server.1=192.168.188.237:2888:3888
server.2=192.168.188.238:2888:3888
server.3=192.168.188.239:2888:3888

vim /data/zookeeper/data/myid
1

/usr/local/zookeeper/bin/zkServer.sh start
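The myid value must differ on each node (1 on kafka237, 2 on kafka238, 3 on kafka239). Once all three are started, a quick way to verify the ensemble:

/usr/local/zookeeper/bin/zkServer.sh status
# expect one node to report "Mode: leader" and the other two "Mode: follower"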

2. The kafka cluster

wget http://mirrors.hust.edu.cn/apache/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz

tar zxvf kafka_2.11-0.10.0.1.tgz -C /usr/local/

ln -s /usr/local/kafka_2.11-0.10.0.1 /usr/local/kafka

A diff of server.properties and zookeeper.properties against the defaults shows little has changed; they can be used almost as-is.

vim /usr/local/kafka/config/server.properties

broker.id=237
port=9092
host.name=192.168.188.237
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafkalog
num.partitions=3
num.recovery.threads.per.data.dir=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
zookeeper.connection.timeout.ms=6000
producer.type=async
broker.list=192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092

mkdir /data/kafkalog

Adjust the heap size

vim /usr/local/kafka/bin/kafka-server-start.sh

export KAFKA_HEAP_OPTS="-Xmx16G -Xms16G"

Start kafka

/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties

Create the front-end topics

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx1-168 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx2-178 --replication-factor 1 --partitions 3 --zookeeper  192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx3-188 --replication-factor 1 --partitions 3 --zookeeper  192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

Check the topics

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper  192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

ngx1-168

ngx2-178

ngx3-188
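To verify the brokers accept and serve messages end to end, a test message can be round-tripped with the console tools shipped with kafka (a sketch using the ngx1-168 topic):

# produce one test message
echo 'elkb-test' | /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.188.237:9092 --topic ngx1-168
# read it back from the beginning (Ctrl-C to stop)
/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.188.237:2181 --topic ngx1-168 --from-beginning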

3. Start at boot

cat /etc/rc.local

/usr/local/zookeeper/bin/zkServer.sh start

/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &

V. Deploying and Configuring logstash

Installation

rpm -ivh logstash-5.2.2.rpm

mkdir /usr/share/logstash/config

# 1. Copy the config files into the logstash home
cp -r /etc/logstash/* /usr/share/logstash/config

# 2. Set the config path

vim /usr/share/logstash/config/logstash.yml

Before:

path.config: /etc/logstash/conf.d

After:

path.config: /usr/share/logstash/config/conf.d

# 3. Edit startup.options

Before:

LS_SETTINGS_DIR=/etc/logstash

After:

LS_SETTINGS_DIR=/usr/share/logstash/config

Changes to startup.options only take effect after running /usr/share/logstash/bin/system-install.

Configuration

On the consumer side, each of the three logstash instances handles only its share of the topics:

in-kafka-ngx1-out-es.conf

in-kafka-ngx2-out-es.conf

in-kafka-ngx3-out-es.conf

[root@logstash197 conf.d]# cat in-kafka-ngx1-out-es.conf

input {
  kafka {
    bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"
    group_id => "ngx1"
    topics => ["ngx1-168"]
    codec => "json"
    consumer_threads => 3
    decorate_events => true
  }
}

filter {
  mutate {
    gsub => ["message", "\\x", "%"]
    remove_field => ["kafka"]
  }
  json {
    source => "message"
    remove_field => ["message"]
  }
  geoip {
    source => "clientRealIp"
  }
  urldecode {
    all_fields => true
  }
}

output {
  elasticsearch {
    hosts => ["192.168.188.191:9200","192.168.188.192:9200","192.168.188.193:9200","192.168.188.194:9200","192.168.188.195:9200","192.168.188.196:9200"]
    index => "filebeat-%{type}-%{+YYYY.MM.dd}"
    manage_template => true
    template_overwrite => true
    template_name => "nginx_template"
    template => "/usr/share/logstash/templates/nginx_template"
    flush_size => 50000
    idle_flush_time => 10
  }
}
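Before starting the pipeline it is worth checking the syntax; logstash 5.x supports a config-test flag:

/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf --config.test_and_exit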

The nginx template

[root@logstash197 logstash]# cat /usr/share/logstash/templates/nginx_template

{
  "template" : "filebeat-*",
  "settings" : {
    "index.refresh_interval" : "10s"
  },
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : true, "omit_norms" : true },
      "dynamic_templates" : [ {
        "string_fields" : {
          "match_pattern" : "regex",
          "match" : "(agent)|(status)|(url)|(clientRealIp)|(referrer)|(upstreamhost)|(http_host)|(request)|(request_method)|(upstreamstatus)",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fields" : {
              "raw" : { "type" : "string", "index" : "not_analyzed", "ignore_above" : 512 }
            }
          }
        }
      } ],
      "properties" : {
        "@version" : { "type" : "string", "index" : "not_analyzed" },
        "geoip" : {
          "type" : "object",
          "dynamic" : true,
          "properties" : {
            "location" : { "type" : "geo_point" }
          }
        }
      }
    }
  }
}
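Once logstash has started with manage_template enabled, you can confirm the template reached the cluster:

curl 'http://192.168.188.191:9200/_template/nginx_template?pretty'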

Startup

/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf  &

logstash is configured to start at boot by default.

Reference:

/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.5/DEVELOPER.md

Error handling. The settings below come from older 2.x-era versions of the kafka input plugin and were removed in the 5.x plugin, which uses bootstrap_servers and topics instead:

[2017-05-08T12:24:30,388][ERROR][logstash.inputs.kafka    ] Unknown setting 'zk_connect' for kafka

[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka    ] Unknown setting 'topic_id' for kafka

[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka    ] Unknown setting 'reset_beginning' for kafka

[2017-05-08T12:24:30,395][ERROR][logstash.agent           ] Cannot load an invalid configuration {:reason=>"Something is wrong with your configuration."}

Check the logs

[root@logstash197 conf.d]# cat /var/log/logstash/logstash-plain.log

[2017-05-09T10:43:20,832][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.188.191:9200/, http://192.168.188.192:9200/, http://192.168.188.193:9200/, http://192.168.188.194:9200/, http://192.168.188.195:9200/, http://192.168.188.196:9200/]}}
[2017-05-09T10:43:20,838][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.191:9200/, :path=>"/"}
[2017-05-09T10:43:20,919][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,920][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.192:9200/, :path=>"/"}
[2017-05-09T10:43:20,922][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,924][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.193:9200/, :path=>"/"}
[2017-05-09T10:43:20,927][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,927][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.194:9200/, :path=>"/"}
[2017-05-09T10:43:20,929][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,930][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.195:9200/, :path=>"/"}
[2017-05-09T10:43:20,932][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,933][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.196:9200/, :path=>"/"}
[2017-05-09T10:43:20,935][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,936][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/usr/share/logstash/templates/nginx_template"}
[2017-05-09T10:43:20,970][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"filebeat-*", "settings"=>{"index.refresh_interval"=>"10s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"string_fields"=>{"match_pattern"=>"regex", "match"=>"(agent)|(status)|(url)|(clientRealIp)|(referrer)|(upstreamhost)|(http_host)|(request)|(request_method)", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "ignore_above"=>512}}}}}]}}}}
[2017-05-09T10:43:20,974][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/nginx_template
[2017-05-09T10:43:21,009][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#, #, #, #, #, #]}
[2017-05-09T10:43:21,010][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.4-java/vendor/GeoLite2-City.mmdb"}
[2017-05-09T10:43:21,022][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-05-09T10:43:21,037][INFO ][logstash.pipeline        ] Pipeline main started
[2017-05-09T10:43:21,086][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

VI. Deploying and Configuring filebeat

Installation

rpm -ivh filebeat-5.2.2-x86_64.rpm

The nginx log format must be JSON:

log_format access '{ "@timestamp": "$time_iso8601", '
                  '"clientRealIp": "$clientRealIp", '
                  '"size": $body_bytes_sent, '
                  '"request": "$request", '
                  '"method": "$request_method", '
                  '"responsetime": $request_time, '
                  '"upstreamhost": "$upstream_addr", '
                  '"http_host": "$host", '
                  '"url": "$uri", '
                  '"referrer": "$http_referer", '
                  '"agent": "$http_user_agent", '
                  '"status": "$status"} ';
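After reloading nginx, it is worth checking that a log line parses as valid JSON before pointing filebeat at it; a quick sanity check (the access.log name is just an example of a file under /data/wwwlogs/):

nginx -t && nginx -s reload
tail -1 /data/wwwlogs/access.log | python -m json.tool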

Configure filebeat

vim /etc/filebeat/filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /data/wwwlogs/*.log
  document_type: ngx1-168
  tail_files: true
  json.keys_under_root: true
  json.add_error_key: true

output.kafka:
  enabled: true
  hosts: ["192.168.188.237:9092","192.168.188.238:9092","192.168.188.239:9092"]
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  worker: 3

processors:
- drop_fields:
    fields: ["input_type","beat.hostname","beat.name","beat.version","offset","source"]

logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  rotateeverybytes: 10485760  # = 10MB
  keepfiles: 7
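filebeat 5.x can validate its configuration before the service starts; a quick check (assuming the rpm's binary path):

/usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml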

See the official site for full filebeat configuration details:

https://www.elastic.co/guide/en/beats/filebeat/5.2/index.html

Using kafka as the log output:

https://www.elastic.co/guide/en/beats/filebeat/5.2/kafka-output.html

output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]

  # message topic selection + partitioning
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

Startup

chkconfig filebeat on

/etc/init.d/filebeat start

Error handling

[root@localhost ~]# tail -f /var/log/filebeat/filebeat

2017-05-09T15:21:39+08:00 ERR Error decoding JSON: invalid character 'x' in string escape code

$uri is useful inside nginx for URL rewrites, but for log output $request_uri can be used instead; unless you have a special business need it is a drop-in replacement. The JSON error above arises because $uri is the decoded path, whose non-ASCII bytes nginx escapes as \xHH, which is not a valid JSON escape; $request_uri keeps the original percent-encoded URL and avoids this.

Reference:

http://www.mamicode.com/info-detail-1368765.html

VII. Verification

1. Check with the kafka console consumer

/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic ngx1-168

2. Check index and shard information in elasticsearch-head

VIII. Deploying and Configuring kibana

1. Configure and start

cat /etc/kibana/kibana.yml

server.port: 5601
server.host: "192.168.188.191"
elasticsearch.url: "http://192.168.188.191:9200"

chkconfig --add kibana

/etc/init.d/kibana start
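A simple liveness check against Kibana's status API, using the server.host configured above:

curl 'http://192.168.188.191:5601/api/status'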

2. Field format

{
  "_index": "filebeat-ngx1-168-2017.05.10",
  "_type": "ngx1-168",
  "_id": "AVvvtIJVy6ssC9hG9dKY",
  "_score": null,
  "_source": {
    "request": "GET /qiche/奥迪A3/ HTTP/1.1",
    "agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36",
    "geoip": {
      "city_name": "Jinhua",
      "timezone": "Asia/Shanghai",
      "ip": "122.226.77.150",
      "latitude": 29.1068,
      "country_code2": "CN",
      "country_name": "China",
      "continent_code": "AS",
      "country_code3": "CN",
      "region_name": "Zhejiang",
      "location": [ 119.6442, 29.1068 ],
      "longitude": 119.6442,
      "region_code": "33"
    },
    "method": "GET",
    "type": "ngx1-168",
    "http_host": "www.niubi.com",
    "url": "/qiche/奥迪A3/",
    "referrer": "http://www.niubi.com/qiche/奥迪S6/",
    "upstreamhost": "172.17.4.205:80",
    "@timestamp": "2017-05-10T08:14:00.000Z",
    "size": 10027,
    "beat": {},
    "@version": "1",
    "responsetime": 0.217,
    "clientRealIp": "122.226.77.150",
    "status": "200"
  },
  "fields": {
    "@timestamp": [ 1494404040000 ]
  },
  "sort": [ 1494404040000 ]
}

3. Visualizations and dashboards

1) Add AutoNavi (Gaode) map tiles

Edit the kibana config file kibana.yml and append at the end:

tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'

Adjust the ES template: geo-points do not work with dynamic mapping, so this field must be declared explicitly.

To map geoip.location as geo_point, add an entry to the template's properties as shown:

"properties": {
  "@version": { "type": "string", "index": "not_analyzed" },
  "geoip": {
    "type": "object",
    "dynamic": true,
    "properties": {
      "location": { "type": "geo_point" }
    }
  }
}

4. Install the x-pack plugin

References:

https://www.elastic.co/guide/en/x-pack/5.2/installing-xpack.html#xpack-installing-offline

https://www.elastic.co/guide/en/x-pack/5.2/setting-up-authentication.html#built-in-users

Remember to change the passwords:

http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/1.json

http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/2.json

http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/3.json

Or:

curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "elasticpassword"
}'

curl -XPUT 'localhost:9200/_xpack/security/user/kibana/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "kibanapassword"
}'

curl -XPUT 'localhost:9200/_xpack/security/user/logstash_system/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "logstashpassword"
}'

Below is the official x-pack install/upgrade/uninstall documentation. We later found that the registered (free) edition of x-pack only provides monitoring, so we did not install it.

Installing X-Pack on Offline Machines

The plugin install scripts require direct Internet access to download and install X-Pack. If your server doesn't have Internet access, you can manually download and install X-Pack.

To install X-Pack on a machine that doesn't have Internet access:

Manually download the X-Pack zip file: https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.2.2.zip (sha1)

Transfer the zip file to a temporary directory on the offline machine. (Do NOT put the file in the Elasticsearch plugins directory.)

Run bin/elasticsearch-plugin install from the Elasticsearch install directory and specify the location of the X-Pack zip file. For example:

bin/elasticsearch-plugin install file:///path/to/file/x-pack-5.2.2.zip

Note: You must specify an absolute path to the zip file after the file:// protocol.

Run bin/kibana-plugin install from the Kibana install directory and specify the location of the X-Pack zip file. (The plugins for Elasticsearch, Kibana, and Logstash are included in the same zip file.) For example:

bin/kibana-plugin install file:///path/to/file/x-pack-5.2.2.zip

Run bin/logstash-plugin install from the Logstash install directory and specify the location of the X-Pack zip file. (The plugins for Elasticsearch, Kibana, and Logstash are included in the same zip file.) For example:

bin/logstash-plugin install file:///path/to/file/x-pack-5.2.2.zip

Enabling and Disabling X-Pack Features

By default, all X-Pack features are enabled. You can explicitly enable or disable X-Pack features in elasticsearch.yml and kibana.yml:

xpack.security.enabled
Set to false to disable X-Pack security. Configure in both elasticsearch.yml and kibana.yml.

xpack.monitoring.enabled
Set to false to disable X-Pack monitoring. Configure in both elasticsearch.yml and kibana.yml.

xpack.graph.enabled
Set to false to disable X-Pack graph. Configure in both elasticsearch.yml and kibana.yml.

xpack.watcher.enabled
Set to false to disable Watcher. Configure in elasticsearch.yml only.

xpack.reporting.enabled
Set to false to disable X-Pack reporting. Configure in kibana.yml only.

IX. Nginx Load Balancing

1. Configure the load balancer

# cat /usr/local/nginx/conf/nginx.conf

server
{
    listen      5601;
    server_name 192.168.188.215;
    index index.html index.htm index.shtml;

    location / {
        allow 192.168.188.0/24;
        deny all;
        proxy_pass http://kibanangx_niubi_com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        auth_basic "Please input Username and Password";
        auth_basic_user_file /usr/local/nginx/conf/.pass_file_elk;
    }

    access_log /data/wwwlogs/access_kibanangx.niubi.com.log access;
}

upstream kibanangx_niubi_com {
    ip_hash;
    server 192.168.188.191:5601;
    server 192.168.188.193:5601;
    server 192.168.188.195:5601;
}
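The auth_basic_user_file referenced above must exist before nginx reloads. One way to create it (a sketch; the user elk and the password are placeholders, hashed with apache-style MD5):

printf 'elk:%s\n' "$(openssl passwd -apr1 'YourPassword')" > /usr/local/nginx/conf/.pass_file_elk
/usr/local/nginx/sbin/nginx -s reload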

2. Access

http://192.168.188.215:5601/app/kibana#

-------------------------------------------------------------------------------------------------

The perfect divider

-------------------------------------------------------------------------------------------------

Optimization Guide

ELKB 5.2 Cluster Optimization Plan

I. Optimization Results

Before optimization

Log collection sustains 10,000 requests/s with end-to-end delay within 10s, data refreshed every 10s by default.

After optimization

Log collection sustains 30,000 requests/s with end-to-end delay within 10s, data refreshed every 10s by default. (Estimated headroom up to a peak of 50,000 requests/s.)

Drawbacks: CPU capacity is still insufficient; dashboards aggregating over large time ranges can time out while rendering. The elasticsearch index structure and query syntax also leave room for further optimization.

II. Optimization Steps

1. Re-plan memory and CPU

1) es        16 CPU, 48G RAM
2) kafka      8 CPU, 16G RAM
3) logstash  16 CPU, 12G RAM

2. Kafka optimization

Watch consumption with kafka-manager monitoring

The kafka heap size needs adjusting

One kafka-related logstash parameter needs changing

1) Adjust the JVM heap

vi /usr/local/kafka/bin/kafka-server-start.sh

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx8G -Xms8G"
    export JMX_PORT="8999"
fi

2) Broker parameter tuning

All of these are parameter values in server.properties.

Network and I/O thread tuning:

# max threads the broker uses to process network requests (default 3; can be set to the number of CPU cores)
num.network.threads=4
# threads the broker uses for disk I/O (default 8; roughly 2x the number of CPU cores)
num.io.threads=8

3) Install kafka monitoring

/data/scripts/kafka-manager-1.3.3.4/bin/kafka-manager

http://192.168.188.215:8099/clusters/ngxlog/consumers
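Consumer lag can also be checked from the command line with the tool shipped in kafka 0.10 (a sketch for the ngx1 consumer group):

/usr/local/kafka/bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server 192.168.188.237:9092 --describe --group ngx1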

3. logstash optimization

logstash needs the following config files changed:

1) Adjust the JVM heap

vi /usr/share/logstash/config/jvm.options

-Xms2g
-Xmx6g

2) Edit logstash.yml

vi /usr/share/logstash/config/logstash.yml

path.data: /var/lib/logstash
pipeline.workers: 16          # number of CPU cores
pipeline.output.workers: 4    # equivalent to the workers option of the elasticsearch output
pipeline.batch.size: 5000     # set according to qps, load, etc.
pipeline.batch.delay: 5
path.config: /usr/share/logstash/config/conf.d
path.logs: /var/log/logstash

3) Edit the corresponding logstash .conf files

The input file:

vi /usr/share/logstash/config/in-kafka-ngx12-out-es.conf

input {
  kafka {
    bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"
    group_id => "ngx1"
    topics => ["ngx1-168"]
    codec => "json"
    consumer_threads => 3
    auto_offset_reset => "latest"   # add this line
    # decorate_events => true       # remove this line
  }
}

The filter file:

filter {
  mutate {
    gsub => ["message", "\\x", "%"]  # unescape: the url field is encoded differently from request etc.; this makes Chinese characters display
    # remove_field => ["kafka"]      # remove this line: with decorate_events defaulting to false, no kafka.{} field is added, so there is nothing to remove
  }
}

The output file:

Before:

flush_size => 50000
idle_flush_time => 10

After:

Flush once 80,000 events accumulate, or every 4 seconds:

flush_size => 80000
idle_flush_time => 4

logstash output after startup (pipeline.max_inflight is 80,000):

[2017-05-16T10:07:02,552][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>5000, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>80000}
[2017-05-16T10:07:02,553][WARN ][logstash.pipeline        ] CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 80000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 5000), or changing the number of pipeline workers (currently 16)

4. elasticsearch optimization

1) Adjust the JVM heap

vi /etc/elasticsearch/jvm.options

Set it to 24g, at most 50% of the VM's memory:

-Xms24g
-Xmx24g

2) Change the GC algorithm (tentative, needs further observation; not recommended if you are unsure about these parameters)

elasticsearch uses CMS GC by default.

With a heap over 6G, CMS struggles and stop-the-world pauses become likely.

G1 GC is recommended instead.

Comment out:

JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"
JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

Replace with:

JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"
JAVA_OPTS="$JAVA_OPTS -XX:MaxGCPauseMillis=200"

3) Install Cerebro, an elasticsearch cluster monitoring tool

https://github.com/lmenezes/cerebro

Cerebro is a third-party elasticsearch cluster management tool that makes it easy to inspect cluster state:

https://github.com/lmenezes/cerebro/releases/download/v0.6.5/cerebro-0.6.5.tgz

After installation, access it at:

http://192.168.188.215:9000/

4) elasticsearch search parameter tuning (the hard part)

It turned out there was little to do here: the defaults are already good, and bulk, refresh and similar settings are already covered in the configs above.

5) elasticsearch cluster role optimization

es191, es193, es195 act only as master + ingest nodes

es192, es194, es196 act only as data nodes (each pair of VMs above shares one RAID5 array; making all of them data nodes performs poorly)

Two more data nodes were added, which greatly improved aggregation performance. A sketch of the role split is below.
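A minimal sketch of this role split in elasticsearch.yml, assuming the master/ingest nodes hold no data:

# es191 / es193 / es195 (master + ingest, no data)
node.master: true
node.data: false
node.ingest: true

# es192 / es194 / es196 (data only)
node.master: false
node.data: true
node.ingest: false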

5. filebeat optimization

1) Ship the input as JSON so logstash does not need to decode it, easing back-end load:

json.keys_under_root: true
json.add_error_key: true

2) Drop unnecessary fields:

vim /etc/filebeat/filebeat.yml

processors:
- drop_fields:
    fields: ["input_type", "beat.hostname", "beat.name", "beat.version", "offset", "source"]

3) Delete old indices from cron

Indices are kept for 5 days by default:

cat /data/scripts/delindex.sh

#!/bin/bash
OLDDATE=`date -d -5days +%Y.%m.%d`
echo $OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx1-168-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx2-178-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx3-188-$OLDDATE
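Scheduled from cron, for example once a day during a quiet hour (a sketch):

# crontab -e
30 2 * * * /bin/bash /data/scripts/delindex.sh >/dev/null 2>&1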
