Notes on Deploying an Elasticsearch Cluster on CentOS 7
While going through my notes today, I found this ES installation record from my first years on the job. Back then CentOS 7 was still a new system; unlike today, when everything lives on k8s and a helm install is all it takes. Sharing it here for the record~
1. Basic information
elk-es01.kevin.cn    192.168.10.44
elk-es02.kevin.cn    192.168.10.45
elk-es03.kevin.cn    192.168.10.46

The following steps must be performed on all three nodes:
[root@elk-es01 ~]# systemctl stop firewalld.service
[root@elk-es01 ~]# systemctl disable firewalld.service
[root@elk-es01 ~]# firewall-cmd --state
not running

[root@elk-es01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@elk-es01 ~]# getenforce
Disabled
[root@elk-es01 ~]# vim /etc/sysconfig/selinux
SELINUX=disabled
[root@elk-es01 ~]# cat /etc/hosts
.....
192.168.10.44 elk-es01.kevin.cn
192.168.10.45 elk-es02.kevin.cn
192.168.10.46 elk-es03.kevin.cn
2. Install the Java 8 environment; the official docs require at least Java 8 for the 5.x series (perform on all three nodes)
[root@elk-es01 ~]# cd /usr/local/src/
[root@elk-es01 src]# ll jdk-8u131-linux-x64_.rpm
-rw-r--r-- 1 root root 169983496 Nov 19  2017 jdk-8u131-linux-x64_.rpm
[root@elk-es01 src]# rpm -ivh jdk-8u131-linux-x64_.rpm
[root@elk-es01 src]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
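A quick sanity check that the installed JDK meets the Java 8 requirement can be scripted. A minimal sketch: the sample version line is copied from the output above (on a real node it would come from `java -version`), and the parsing assumes the `1.x` version scheme used by Java 8:

```shell
# Check that the installed JDK is Java 8 or newer before installing ES 5.x.
# Sample input copied from the output above; on a real node use:
#   ver_line=$(java -version 2>&1 | head -n 1)
ver_line='java version "1.8.0_131"'

# Pull out the minor version ("8" from "1.8.0_131").
minor=$(printf '%s\n' "$ver_line" | sed -n 's/.*"1\.\([0-9]*\)\..*/\1/p')

if [ "$minor" -ge 8 ]; then
    echo "Java 8+ detected, OK for Elasticsearch 5.x"
else
    echo "Java version too old for Elasticsearch 5.x" >&2
fi
```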
3. Install Elasticsearch (on all 3 nodes)
Official download page: https://www.elastic.co/downloads/past-releases
Version 5.6.9 is used here.
[root@elk-es01 src]# pwd
/usr/local/src
[root@elk-es01 src]# ll /usr/local/src/elasticsearch-5.6.9.rpm
-rw-r--r-- 1 root root 33701914 May 28 09:54 /usr/local/src/elasticsearch-5.6.9.rpm
[root@elk-es01 src]# rpm -ivh elasticsearch-5.6.9.rpm --force

Elasticsearch cluster configuration
[root@elk-es01 src]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "#"
cluster.name: kevin-elk
# Cluster name; it must be identical on all three nodes
node.name: elk-es01.kevin.cn
# Node name within the cluster, usually this node's hostname. It must be
# resolvable (pingable), i.e. mapped in /etc/hosts on every node
path.data: /data/es-data
# Data directory for this node
path.logs: /var/log/elasticsearch
# Log directory
network.host: 192.168.10.44
# Network address the service binds to, usually this node's IP; 0.0.0.0 also works
http.port: 9200
# Port the HTTP service listens on
discovery.zen.ping.unicast.hosts: ["192.168.10.44", "192.168.10.45", "192.168.10.46"]
# Addresses of the cluster members; nodes discover each other and elect a master automatically

The elasticsearch.yml on the other two nodes is configured the same way; only the node name and address need to change.

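Since only node.name and network.host differ per node, the three files can be stamped out with a small shell helper. A sketch under the config above; the function name gen_es_yml and the /tmp output path are made up here:

```shell
# Hypothetical helper: emit an elasticsearch.yml for one node of the
# kevin-elk cluster. Only node.name and network.host vary per node;
# everything else matches the config shown above.
gen_es_yml() {
    node_name=$1
    node_ip=$2
    cat <<EOF
cluster.name: kevin-elk
node.name: ${node_name}
path.data: /data/es-data
path.logs: /var/log/elasticsearch
network.host: ${node_ip}
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.44", "192.168.10.45", "192.168.10.46"]
EOF
}

# e.g. generate the config for the second node:
gen_es_yml elk-es02.kevin.cn 192.168.10.45 > /tmp/elasticsearch.yml.es02
```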
[root@elk-es01 src]# mkdir -p /data/es-data
[root@elk-es01 src]# chown -R elasticsearch.elasticsearch /data/es-data
# Don't skip this ownership change, or the Elasticsearch service below will fail to start!

Start Elasticsearch
[root@elk-es01 src]# systemctl daemon-reload
[root@elk-es01 src]# systemctl start elasticsearch

[root@elk-es01 src]# systemctl status elasticsearch
[root@elk-es01 src]# lsof -i:9200
COMMAND   PID          USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
java    20061 elasticsearch  195u  IPv6 1940586      0t0  TCP elk-es01.kevin.cn:wap-wsp (LISTEN)
(lsof shows port 9200 by its /etc/services name, wap-wsp.)
4. Check cluster information (from any node)
Note: Elasticsearch 5.x no longer supports site plugins such as elasticsearch-head
(see the official site for the explanation); if you really need it, it can run as a standalone service (skipped here).

a) Query the cluster nodes
[root@elk-es01 src]# curl -XGET 'http://192.168.10.44:9200/_cat/nodes'
192.168.10.44  8 37 0 0.00 0.01 0.05 mdi - elk-es01.kevin.cn
192.168.10.46 20 36 0 0.00 0.01 0.05 mdi - elk-es03.kevin.cn
192.168.10.45 14 36 0 0.00 0.01 0.05 mdi * elk-es02.kevin.cn
# The node marked with * is the elected master

Append ?v to show column headers:
[root@elk-es01 src]# curl -XGET 'http://192.168.10.44:9200/_cat/nodes?v'
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.10.44            9          37   0    0.00    0.01     0.05 mdi       -      elk-es01.kevin.cn
192.168.10.46           20          36   0    0.00    0.01     0.05 mdi       -      elk-es03.kevin.cn
192.168.10.45           16          36   0    0.00    0.01     0.05 mdi       *      elk-es02.kevin.cn
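The master can also be picked out of the `_cat/nodes` output mechanically: the master column (9th field) holds `*` on the elected master. A sketch using the sample output above; on a live cluster the data would come from the curl call shown:

```shell
# Find the elected master in `_cat/nodes` output: field 9 is the
# "master" column ("*" on the elected master), field 10 the node name.
# Sample data copied from the output above; on a live cluster use:
#   nodes=$(curl -s 'http://192.168.10.44:9200/_cat/nodes')
nodes='192.168.10.44  8 37 0 0.00 0.01 0.05 mdi - elk-es01.kevin.cn
192.168.10.46 20 36 0 0.00 0.01 0.05 mdi - elk-es03.kevin.cn
192.168.10.45 14 36 0 0.00 0.01 0.05 mdi * elk-es02.kevin.cn'

master=$(printf '%s\n' "$nodes" | awk '$9 == "*" { print $10 }')
echo "master: $master"
```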

b) Query the cluster's node list
[root@elk-es01 src]# curl -XGET 'http://192.168.10.44:9200/_cluster/state/nodes?pretty'
{
  "cluster_name" : "kevin-elk",
  "nodes" : {
    "8xvAOooeQlK1cilfHGTdHw" : {
      "name" : "elk-es01.kevin.cn",
      "ephemeral_id" : "9MeQir6KQ-aG0_nlZnq87g",
      "transport_address" : "192.168.10.44:9300",
      "attributes" : { }
    },
    "Uq94w9gHRR6ewtI4SoXC2Q" : {
      "name" : "elk-es03.kevin.cn",
      "ephemeral_id" : "PLZfo1q9TzyJ61v2v4-5aA",
      "transport_address" : "192.168.10.46:9300",
      "attributes" : { }
    },
    "NOM0bFmvRDSJDLbJzRsKEQ" : {
      "name" : "elk-es02.kevin.cn",
      "ephemeral_id" : "VnhtQtjrT4eL3P4C3cY6uA",
      "transport_address" : "192.168.10.45:9300",
      "attributes" : { }
    }
  }
}

c) Query the cluster's master
[root@elk-es01 src]# curl -XGET 'http://192.168.10.44:9200/_cluster/state/master_node?pretty'
{
  "cluster_name" : "kevin-elk",
  "master_node" : "NOM0bFmvRDSJDLbJzRsKEQ"
}

Or:
[root@elk-es01 src]# curl -XGET 'http://192.168.10.44:9200/_cat/master?v'
id                     host          ip            node
NOM0bFmvRDSJDLbJzRsKEQ 192.168.10.45 192.168.10.45 elk-es02.kevin.cn

d) Query the cluster's health (three possible states: green, yellow, red; green means healthy)
[root@elk-es01 src]# curl -XGET 'http://192.168.10.44:9200/_cat/health?v'
epoch      timestamp cluster   status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1527489534 14:38:54  kevin-elk green           3         3      2   1    0    0        0             0                  -                100.0%

Or:
[root@elk-es01 src]# curl -XGET 'http://192.168.10.44:9200/_cluster/health?pretty'
{
  "cluster_name" : "kevin-elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
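The pretty-printed health response is easy to gate on in a script, e.g. a step that refuses to proceed unless the status is green. A sketch: the sample JSON is trimmed from the response above, and in real use it would come from the curl call shown:

```shell
# Extract "status" from a _cluster/health?pretty response and fail
# unless the cluster is green. Sample JSON trimmed from the output
# above; on a live cluster use:
#   health=$(curl -s 'http://192.168.10.44:9200/_cluster/health?pretty')
health='{
  "cluster_name" : "kevin-elk",
  "status" : "green",
  "timed_out" : false
}'

status=$(printf '%s\n' "$health" | sed -n 's/.*"status" : "\([a-z]*\)".*/\1/p')
echo "cluster status: $status"

if [ "$status" != "green" ]; then
    echo "cluster is not green, aborting" >&2
    exit 1
fi
```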
An elasticsearch cluster was already in service; it started with 3 nodes, and as business volume grew another es node had to be added to the cluster.
The operation record follows:

On the es node being added:
1) Install the JDK and elasticsearch
[root@qd-vpc-op-es04 ~]# cd tools/
[root@qd-vpc-op-es04 tools]# ls
elasticsearch-5.5.0.rpm? jdk-8u5-linux-x64.rpm
[root@qd-vpc-op-es04 tools]# rpm -ivh elasticsearch-5.5.0.rpm
[root@qd-vpc-op-es04 tools]# rpm -ivh jdk-8u5-linux-x64.rpm
[root@qd-vpc-op-es04 tools]# java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

2) Configure elasticsearch
[root@qd-vpc-op-es04 ~]# cd /etc/elasticsearch/
[root@qd-vpc-op-es04 elasticsearch]# ls
elasticsearch.yml  elasticsearch.yml.bak  jvm.options  log4j2.properties  nohup.out  scripts

[root@qd-vpc-op-es04 elasticsearch]# cat elasticsearch.yml | grep -v "#"    // The config on every node in the cluster is nearly identical
cluster.name: image_search              // Cluster name
node.name: image_search_node_4          // This node's name within the cluster; define it as you like
path.data: /data/es/data                // Data directory
path.logs: /data/es/logs                // Log directory
discovery.zen.ping.unicast.hosts: ["172.16.50.247","172.16.50.249","172.16.50.254","172.16.50.16"]     // IP addresses of the cluster nodes
network.host: 0.0.0.0                   // Network address the service binds to

The default elasticsearch service port is 9200:
[root@qd-vpc-op-es04 elasticsearch]# cat elasticsearch.yml | grep 9200
#http.port: 9200

[root@qd-vpc-op-es04 elasticsearch]# systemctl start elasticsearch.service
[root@qd-vpc-op-es04 elasticsearch]# systemctl restart elasticsearch.service
[root@qd-vpc-op-es04 elasticsearch]# systemctl status elasticsearch.service

[root@qd-vpc-op-es04 elasticsearch]# ps -ef | grep elasticsearch
[root@qd-vpc-op-es04 elasticsearch]# lsof -i:9200
COMMAND   PID          USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    10998 elasticsearch  320u  IPv4  39255      0t0  TCP *:wap-wsp (LISTEN)

Check the elasticsearch health status
[root@qd-vpc-op-es04 elasticsearch]# curl 'localhost:9200/_cat/indices?v'
health status index                    uuid                     pri rep docs.count docs.deleted store.size pri.store.size
green  open   video_filter             Bx7He6ZtTEWuRBqXYC6gRw     5   1     458013            0      4.1gb            2gb
green  open   recommend_history_image  svYo_Do4SM6wUiv6taUWug     5   1    2865902            0     24.9gb         12.4gb
green  open   recommend_history_gif    rhN3MDN2TbuYILqEDksQSg     5   1     265731            0      2.4gb          1.2gb
green  open   post_images              TMsMsMEoR5Sdb7UEQJsR5Q     5   1   48724932            0    407.3gb        203.9gb
green  open   review_images_v2         qzqnknpgTniU4rCsvXzs0w     5   1   50375955            0     61.6gb         30.9gb
green  open   review_images            rWC4WlfMS8aGe-GOkTauZg     5   1   51810877            0    439.3gb        219.7gb
green  open   sensitive_images         KxSrjvXdSz-y8YcqwBMsZA     5   1      13393            0    128.1mb           64mb
green  open   post_images_v2           FDphBV4-QuKVoD4_G3vRtA     5   1   49340491            0     55.8gb         27.8gb

The command output above shows that this node has successfully joined the elasticsearch cluster named image_search; green means it is healthy, and data is already syncing to it.

3) Update the elasticsearch configuration in the application code
Ask the developers to add the new elasticsearch node to the application configuration; after the release, check the elasticsearch logs on the new node for incoming writes:
[root@qd-vpc-op-es04 ~]# chown -R elasticsearch.elasticsearch /data/es
[root@qd-vpc-op-es04 ~]# cd /data/es/logs/
[root@qd-vpc-op-es04 logs]# ls
image_search_deprecation.log? image_search_index_indexing_slowlog.log? image_search_index_search_slowlog.log? image_search.log

====== Important note ======
To add a new node to an elasticsearch cluster, the procedure is:
1) On the new node, install the JDK and elasticsearch, configure the elasticsearch.yml file, and start the elasticsearch service.
2) On the other nodes in the cluster, update the elasticsearch.yml file; there is no need to restart the elasticsearch service on them.
3) On the new node, run curl 'localhost:9200/_cat/indices?v' to check the health status and data synchronization; data is synchronized over automatically.
4) Add the new elasticsearch node to the application configuration; after the release, check the new node's elasticsearch logs for incoming writes.

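Step 3's check can also be scripted: any index whose health column is not green still has initializing or unassigned shards. A sketch against the first rows of the `_cat/indices` output above (on a live node the data would come from the curl call, without ?v so there is no header row):

```shell
# Count indices that are not green in `_cat/indices` output (field 1
# is the health column). Sample rows copied from the output above;
# on a live node use:
#   indices=$(curl -s 'localhost:9200/_cat/indices')
indices='green open video_filter            Bx7He6ZtTEWuRBqXYC6gRw 5 1  458013 0  4.1gb    2gb
green open recommend_history_image svYo_Do4SM6wUiv6taUWug 5 1 2865902 0 24.9gb 12.4gb'

not_green=$(printf '%s\n' "$indices" | awk '$1 != "green"' | wc -l)
echo "indices not green: $not_green"
```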
==============================================================
For reference, here are the elasticsearch.yml configs of the other three es nodes:
[root@qd-vpc-op-es01 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "#"
cluster.name: image_search
node.name: image_search_node_1
path.data: /data/es/data
path.logs: /data/es/logs
discovery.zen.ping.unicast.hosts: ["172.16.50.16","172.16.50.247","172.16.50.249","172.16.50.254"]
network.host: 0.0.0.0

[root@qd-vpc-op-es02 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "#"
cluster.name: image_search
node.name: image_search_node_2
path.data: /data/es/data
path.logs: /data/es/logs
discovery.zen.ping.unicast.hosts: ["172.16.50.16","172.16.50.247","172.16.50.249","172.16.50.254"]
network.host: 0.0.0.0

[root@qd-vpc-op-es03 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "#"
cluster.name: image_search
node.name: image_search_node_3
path.data: /data/es/data
path.logs: /data/es/logs
discovery.zen.ping.unicast.hosts: ["172.16.50.247","172.16.50.249","172.16.50.254","172.16.50.16"]
network.host: 0.0.0.0