ELK 7.10.0 Installation Guide
Test environment: CentOS 6.x, ELK <= 7.10.0; newer Kibana releases depend on glibc-2.17
Download links
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-x86_64.rpm
https://artifacts.elastic.co/downloads/logstash/logstash-7.10.0.rpm
https://artifacts.elastic.co/downloads/kibana/kibana-7.10.0-x86_64.rpm
https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.0-x86_64.rpm
Dependencies
yum -y install java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel.x86_64
# Check which glibc versions the system provides
# strings /lib64/libc.so.6 |grep GLIBC_
# Upgrading glibc alone does not fix this; for now the only real solution is upgrading the OS
# wget https://ftp.gnu.org/gnu/libc/glibc-2.17.tar.gz
# tar xzf glibc-2.17.tar.gz && cd glibc-2.17
# # Never overwrite the system glibc: the system will crash and services that depend on it, such as sshd, will stop working
# # Building without the --prefix parameter prints this warning:
# # *** On GNU/Linux systems the GNU C Library should not be installed into
# # *** /usr/local since this might make your system totally unusable.
# # *** We strongly advise to use a different prefix. For details read the FAQ.
# # *** If you really mean to do this, run configure again using the extra
# # *** parameter `--disable-sanity-checks'.
# mkdir build && cd build && ../configure --prefix=/usr/local/glibc-2.17 && make && make install
# # strings /usr/local/glibc-2.17/lib/libc.so.6 |grep GLIBC_
Tuning parameters
echo '
* soft core unlimited
* hard core unlimited
* soft nproc 65500
* hard nproc 65500
* soft nofile 65536
* hard nofile 65536
* hard stack 65536
* soft stack 65536
' >> /etc/security/limits.conf;
sysctl -p | grep -q max_map_count || (echo vm.max_map_count=655360 >> /etc/sysctl.conf && sysctl -p)
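# Optional sanity check (log in to a new session first so limits.conf is re-read):
ulimit -n                 # expect 65536
sysctl vm.max_map_count   # expect vm.max_map_count = 655360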
Install Elasticsearch
# Install
rpm -ivh elasticsearch-7.10.0-x86_64.rpm
# Core configuration
vim /etc/elasticsearch/elasticsearch.yml
##########################################
cluster.name: test
node.name: node-1
cluster.initial_master_nodes: ["node-1"]
path.data: /data/elastic/data
path.logs: /data/elastic/logs
network.host: 127.0.0.1
action.destructive_requires_name: true # destructive operations must name indices explicitly (no wildcard deletes)
bootstrap.system_call_filter: false
# Password authentication
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
##########################################
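# Note: with xpack.security.transport.ssl.enabled set to true, Elasticsearch also expects a transport
# key and certificate; a minimal single-node sketch (certificate file name and empty passphrase are assumptions):
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --out /etc/elasticsearch/elastic-certificates.p12 --pass ""
chown root:elasticsearch /etc/elasticsearch/elastic-certificates.p12 && chmod 660 /etc/elasticsearch/elastic-certificates.p12
# then reference it in elasticsearch.yml:
# xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
# xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
# xpack.security.transport.ssl.verification_mode: certificate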
# Heap size (adjust as needed)
vim /etc/elasticsearch/jvm.options
# Logging (adjust as needed)
vim /etc/elasticsearch/log4j2.properties
# Create the data and log directories and set ownership
mkdir -pv /data/elastic/data /data/elastic/logs
chown -R elasticsearch: /data/elastic
# Enable start on boot
chkconfig --add elasticsearch
service elasticsearch start
# Generate the built-in user passwords and keep them somewhere safe
# interactive (instead of auto) sets the passwords interactively
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
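# Quick check that authentication works (use the elastic password generated above):
curl -uelastic 'http://127.0.0.1:9200/_cluster/health?pretty'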
Install Kibana
# Install
rpm -ivh kibana-7.10.0-x86_64.rpm
# Add an environment variable so Kibana's bundled headless_shell can find glibc-2.17
([ -f /etc/sysconfig/kibana ] && grep -q LD_LIBRARY_PATH /etc/sysconfig/kibana) || echo 'export LD_LIBRARY_PATH=/usr/local/glibc-2.17/lib:$LD_LIBRARY_PATH' >> /etc/sysconfig/kibana
# ldd /var/lib/kibana/headless_shell-linux/headless_shell
# Edit the configuration file
vim /etc/kibana/kibana.yml
##########################################
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://127.0.0.1:9200"]
# Username and password (generated by elasticsearch-setup-passwords)
elasticsearch.username: "kibana_system"
elasticsearch.password: "123456"
i18n.locale: "zh-CN"
##########################################
# Start
chkconfig kibana on
/etc/init.d/kibana start
# Access the site
http://x.x.x.x:5601/
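# Optional: confirm Kibana answers before opening the browser (expect 200, or a redirect to /login once security is on):
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:5601/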
Install Logstash
1.1 Install
# Install
rpm -ivh logstash-7.10.0.rpm
# Configure the user password for Logstash (the password generated by Elasticsearch above)
vim /etc/logstash/logstash.yml
path.data: /data/logstash/data
path.logs: /data/logstash/log
# Monitoring
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: password
xpack.monitoring.elasticsearch.hosts: ["http://127.0.0.1:9200"]
# Create the directories and set ownership
mkdir -pv /data/logstash/data /data/logstash/log
chown -R logstash: /data/logstash
1.2 Configuration
It is strongly recommended to use pipelines.yml; a minimal sketch follows the reference link.
Reference: https://www.cnblogs.com/unchch/p/12061380.html
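# A minimal sketch of /etc/logstash/pipelines.yml running two separate pipelines
# (pipeline ids and config paths are illustrative, matching the Filebeat examples below):
vim /etc/logstash/pipelines.yml
##########################################
- pipeline.id: client
  path.config: "/etc/logstash/conf.d/client.conf"
- pipeline.id: server
  path.config: "/etc/logstash/conf.d/server.conf"
##########################################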
1.3 Start
# Install the SysV service script for CentOS 6.x
/usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv
chkconfig logstash on
/etc/init.d/logstash start
# Check the logs
# tailf /var/log/logstash-stdout.log
Filebeat
1.1 Install
# Install
rpm -ivh filebeat-7.10.0-x86_64.rpm
# Start
chkconfig filebeat on
/etc/init.d/filebeat start
1.2 Client configuration
vim /etc/filebeat/filebeat.yml
#=========================== Filebeat prospectors =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.json.log
  exclude_lines: ['"request_method":"HEAD"', 'favicon.ico']
  fields:
    # this value is used in the index name, so mind the index naming rules
    document_type: client
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5026"]
1.3 Server configuration
#=========================== Filebeat prospectors ==============================
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
  multiline.pattern: '^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    # this value is used in the index name, so mind the index naming rules
    document_type: server
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5027"]
1.4 Multiple input sources
vim /etc/filebeat/filebeat.yml
#=========================== Filebeat prospectors =============================
# multiple input sources are distinguished by fields (see the Logstash output sketch after this block)
filebeat.inputs:
- type: log
  paths:
    - /data/*/*/run/log/*.log*
    #- /var/log/wifi.log
  fields:
    document_type: server
- type: log
  paths:
    - /data/log/nginx/*.json.log
  fields:
    document_type: client
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5028"]
Memory allocation
Kibana: 1G
Lucene (filesystem cache): (total memory - 1G) * 50%
Elasticsearch: (total memory - 1G) * 25%
Logstash: (total memory - 1G) * 25%, possibly split across several instances
Example for a 16G host:
vim /usr/local/elasticsearch/config/jvm.options
# the two values below must be identical
-Xms4g
-Xmx4g
su - ops -c "/usr/local/elasticsearch/bin/elasticsearch"
# Kibana: limit the Node.js old-space heap to 512M
cd /usr/local/kibana/ && NODE_OPTIONS="--max-old-space-size=512" bin/kibana
# logstash
vim config/jvm.options
-Xms512m
-Xmx512m
cd /usr/local/logstash && bin/logstash -f test.conf --path.data=/data/logstash/test
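# Working through the rules above for 16G: Lucene/page cache (16-1)*50% ≈ 7.5G (left to the OS),
# Elasticsearch heap (16-1)*25% ≈ 3.75G (rounded to 4G above), Logstash (16-1)*25% ≈ 3.75G in total
# across instances, and about 1G for Kibana (capped at 512M here).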
# Low-memory, non-critical data (data loss acceptable, not scalable): running ELK on a 4G host
# elastic
index.term_index_interval: 256 # trade memory for faster term lookups; the default is 128
index.term_index_divisor: 5 # subsample the term index loaded into memory
#network.tcp.block: true
index.number_of_shards: 1 # number of shards (spread across nodes in a multi-node cluster)
index.number_of_replicas: 0 # number of replicas (redundancy across nodes)
curl -uelastic -XPUT -H "Content-Type: application/json" 'http://10.0.6.16:9200/_template/default' -d '{
"index_patterns": ["*"],
"settings": {
"number_of_shards": "1",
"number_of_replicas": "0",
"index" : { "refresh_interval":"15s" }
}
}'
# This can also be run in Kibana Dev Tools
PUT _template/default
{
"index_patterns" : ["*"],
"settings" : {
"number_of_shards" : 1,
"number_of_replicas": "0",
"index" : { "refresh_interval":"15s" }
}
}
# Set it on a single index
PUT index/_settings
{"refresh_interval": "15s"}
Kibana authorization
# 1. Create a role for the project team
Management - Security - Roles
Name: test
Run As Privileges: kibana
Index Privileges:
Indices: select the relevant log indices
Privileges: read
Leave the other options at their defaults
# 2. Create a user
Management - Security - Users
Roles: select the two roles kibana_user and test
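# The same role and user can also be created through the security API instead of the UI
# (the index pattern, user name and password below are illustrative):
curl -uelastic -XPUT -H "Content-Type: application/json" 'http://127.0.0.1:9200/_security/role/test' -d '{
  "indices": [ { "names": ["filebeat-*"], "privileges": ["read"] } ]
}'
curl -uelastic -XPUT -H "Content-Type: application/json" 'http://127.0.0.1:9200/_security/user/test_user' -d '{
  "password": "123456",
  "roles": ["kibana_user", "test"]
}'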
- Original author: zaza
- Original link: https://zazayaya.github.io/2020/08/19/elk-install.html