Building an nginx Log Analysis Platform with ELK + Redis

Reader contribution · 794 · 2023-03-11



ELK Overview

The ELK Stack is Elasticsearch + Logstash + Kibana. Log monitoring and analysis play an important role in keeping a service running reliably. nginx, for example, records the status of every request in its log files, so those files can be read and analyzed. Redis's list type works naturally as a queue, buffering the log records that Logstash ships; Elasticsearch then indexes them for search and analysis.

This article builds a distributed log collection and analysis system. Logstash plays two roles: agent and indexer. An agent runs on each web machine and continuously tails the nginx log file; whenever it reads a new log entry, it pushes it to a Redis queue on the network. Several Logstash indexers consume and parse the unprocessed entries from that queue, then store the results in Elasticsearch for searching and analysis. Finally, a single Kibana instance presents the logs in a web interface [3].

For this test I use two machines: hadoop-master runs nginx and a Logstash agent (installed from the source tarball); hadoop-slave runs a Logstash agent, Elasticsearch, Redis, and nginx. The nginx logs of both machines are analyzed together; see the documentation for detailed configuration. The following records the ELK + Redis setup process, drawing on the official documentation and earlier write-ups.

System Environment

Hosts

hadoop-master   192.168.186.128   # logstash agent (source install), nginx
hadoop-slave    192.168.186.129   # logstash agent, elasticsearch, redis, nginx

System Information

[root@hadoop-slave ~]# java -version   # Elasticsearch is written in Java and needs a JDK; JDK 1.8 is installed here
java version "1.8.0_20"
Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
[root@hadoop-slave ~]# cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m

Redis Installation

After the build finishes, the executables (redis-server, redis-cli, and so on) are generated in the src directory.

Create the Redis installation directory under /usr/local/, along with its data and configuration directories:

[root@hadoop-slave local]# mkdir /usr/local/redis/{conf,run,db} -pv
[root@hadoop-slave local]# cd /usr/local/src/redis-2.8.20/
[root@hadoop-slave redis-2.8.20]# cp redis.conf /usr/local/redis/conf/
[root@hadoop-slave redis-2.8.20]# cd src/
[root@hadoop-slave src]# cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-server mkreleasehdr.sh /usr/local/redis/

That completes the Redis installation.

Now start it and check that the port is listening:

[root@hadoop-slave src]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf &   # can run in the background
[root@hadoop-slave redis]# netstat -antulp | grep 6379
tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      72669/redis-server
tcp        0      0 :::6379                     :::*                        LISTEN      72669/redis-server

It starts without problems.

Elasticsearch Installation

Elasticsearch serves HTTP on port 9200 by default, and nodes communicate with each other on TCP port 9300; make sure both TCP ports are open.

Installation

Download the latest tar package from the official site, which describes the project as: "Search & Analyze in Real Time: Elasticsearch is a distributed, open source search and analytics engine, designed for horizontal scalability, reliability, and easy management."

Testing

A 200 response code means it is working.

Logstash Installation

Logstash is a flexible, open source data collection, enrichment, and transport pipeline designed to efficiently process a growing list of log, event, and unstructured data sources for distribution into a variety of outputs, including Elasticsearch. Logstash's web interface listens on port 9292 by default; if the firewall is enabled, open that TCP port.

Installing from source

On 192.168.186.128, install from source by extracting the tarball into /usr/local/.

Installing with yum

On 192.168.186.129, install with yum.

Testing

[root@hadoop-slave ~]# cd /opt/logstash/
[root@hadoop-slave logstash]# ls
bin  CHANGELOG.md  CONTRIBUTORS  Gemfile  Gemfile.jruby-1.9.lock  lib  LICENSE  NOTICE.TXT  vendor
[root@hadoop-slave logstash]# bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'

The terminal then waits for input. Type "Hello World", press Enter, and see what comes back.

[root@hadoop-slave logstash]# vi logstash-simple.conf   # the elasticsearch host is the local machine
input { stdin { } }
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
[root@hadoop-slave logstash]# ./bin/logstash -f logstash-simple.conf   # can run in the background
……
{
       "message" => "",
      "@version" => "1",
    "@timestamp" => "2015-08-18T06:26:19.348Z",
          "host" => "hadoop-slave"
}
……

This shows that Elasticsearch is receiving the data Logstash sends; the pipeline works.

You can also verify it another way.

Logstash Configuration

Logstash terminology

Excerpted from the documentation: the Logstash community conventionally uses shipper, broker, and indexer to describe the roles of the different processes in the data flow, in the order shipper → broker → indexer.

The broker is usually Redis. That said, many deployments do not use Logstash as the shipper (the same idea as the agent), or do not use Elasticsearch as the data store, which means there is no indexer either. So the terms themselves are not essential: learn how to run and configure the Logstash process, then place it wherever it fits best in your own log-management architecture.

Setting the nginx log format

Both machines run nginx, so edit nginx.conf on both to set the log format.
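The exact format used is not shown in this copy of the article. A minimal sketch based on nginx's standard combined-style format; the format name `logstash` and the paths are illustrative, not the author's exact configuration:

```nginx
http {
    # Illustrative access-log format (combined format plus X-Forwarded-For).
    log_format  logstash  '$remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

    server {
        listen 80;
        # Write requests to the file the Logstash agent tails.
        access_log  /usr/local/nginx/logs/host.access.log  logstash;
    }
}
```

After editing, reload nginx so the new format takes effect.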

Repeat the same steps on hadoop-slave.

Starting the Logstash agent

The Logstash agent collects log entries and ships them to the Redis queue.

[root@hadoop-master ~]# cd /usr/local/logstash-1.5.3/
[root@hadoop-master logstash-1.5.3]# mkdir etc
[root@hadoop-master etc]# vi logstash_agent.conf
input {
    file {
        type => "nginx access log"
        path => ["/usr/local/nginx/logs/host.access.log"]
    }
}
output {
    redis {
        host => "192.168.186.129"   # redis server
        data_type => "list"
        key => "logstash:redis"
    }
}
[root@hadoop-master etc]# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &
# configure logstash_agent the same way on the other machine

Starting the Logstash indexer
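The indexer configuration itself is missing from this copy of the article. A minimal sketch of what /opt/logstash/etc/logstash_indexer.conf could look like, assuming the Redis key used by the agent configuration and an Elasticsearch instance on the same machine (a filter stage, such as a grok pattern for nginx logs, would normally go between input and output):

```
input {
    redis {
        host => "192.168.186.129"   # the redis server the agents push to
        data_type => "list"
        key => "logstash:redis"     # must match the key in logstash_agent.conf
    }
}
output {
    elasticsearch { host => "localhost" }   # elasticsearch runs on hadoop-slave
}
```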

The configuration is complete.

Kibana Installation
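The installation steps are missing from this copy. For the Kibana 4.x generation that pairs with Logstash 1.5, Kibana is unpacked from a tarball and pointed at Elasticsearch through config/kibana.yml; a sketch with assumed values, not the author's exact configuration:

```yaml
# config/kibana.yml (Kibana 4.x)
port: 5601                                   # default web UI port
host: "0.0.0.0"                              # listen on all interfaces
elasticsearch_url: "http://localhost:9200"   # elasticsearch on hadoop-slave
```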

Testing ELK + Redis

If the ELK + Redis services are not yet running, start them with:

[root@hadoop-slave src]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf &   # start redis
[root@hadoop-slave ~]# elasticsearch start -d   # start elasticsearch
[root@hadoop-master etc]# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_indexer.conf &
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_agent.conf &
[root@hadoop-slave bin]# ./kibana &   # start kibana

Each page refresh produces one access record in the host.access.log file.

[root@hadoop-master logs]# cat host.access.log
……
192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:00:21 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:06:38 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:15:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:16:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
[root@hadoop-master logs]#
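Independent of the ELK pipeline, combined-format logs like the ones above can be sanity-checked with standard shell tools. A sketch that counts HTTP status codes (whitespace field 9 of the combined format); the inline sample lines stand in for host.access.log:

```shell
# Count occurrences of each HTTP status code in nginx combined-format log lines.
# In practice, point awk at the log file instead of the printf:
#   awk '{count[$9]++} END {for (s in count) print s, count[s]}' /usr/local/nginx/logs/host.access.log
printf '%s\n' \
  '192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0" "-"' \
  '192.168.186.1 - - [18/Aug/2015:23:00:21 -0700] "GET /x HTTP/1.1" 404 162 "-" "Mozilla/5.0" "-"' \
  '192.168.186.1 - - [18/Aug/2015:23:06:38 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0" "-"' |
awk '{count[$9]++} END {for (s in count) print s, count[s]}'
```

The status code is field 9 because the bracketed timestamp and the quoted request line each split into multiple whitespace-separated fields.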

Opening the Kibana page now shows the nginx access logs from both machines. The displayed timestamps are off because the virtual machines' time zone differs from the physical host's; this does not affect the results.

At this point, visiting the page brings up the Kibana interface.
