In this article, we walk through how to configure Log4j2 in a web project so that it works with ELK.
Note: ELK is Logstash + Elasticsearch + Kibana, where Logstash collects the logs, Elasticsearch stores them, and Kibana provides a UI for browsing them.
Log4j2 provides a SocketAppender that can ship logs over TCP or UDP; see http://logging.apache.org/log4j/2.x/manual/appenders.html#SocketAppender for details.
To send the logs to Logstash, we use the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" monitorInterval="60">
    <Properties>
        <Property name="PATTERN">%d %-5p [%t] %C{1} (%F:%L) - %m%n</Property>
    </Properties>
    <Appenders>
        <Socket name="Logstash" host="172.30.20.8" port="4560" protocol="TCP">
            <PatternLayout pattern="${PATTERN}"/>
        </Socket>
    </Appenders>
    <Loggers>
        <Root level="error">
            <AppenderRef ref="Logstash"/>
        </Root>
        <Logger name="Error" level="error" additivity="false">
            <AppenderRef ref="Logstash"/>
        </Logger>
        <Logger name="Request" level="info" additivity="false">
            <AppenderRef ref="Logstash"/>
        </Logger>
    </Loggers>
</Configuration>
We then log messages from our application code:
LogService.getLogger(LogService.REQUEST_LOG).info("{} tries to login", login);
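LogService here is a project-specific helper that is not shown in the post. A minimal sketch of what it might look like, assuming REQUEST_LOG simply maps to the "Request" logger name declared in the XML above (the class layout and constant values are hypothetical):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical helper: maps symbolic constants to the logger names
// configured in log4j2.xml ("Request", "Error", ...).
public final class LogService {
    public static final String REQUEST_LOG = "Request";
    public static final String ERROR_LOG   = "Error";
    public static final String SERVICE_LOG = "Service";

    private LogService() {}

    // Delegates to Log4j2's LogManager, so the returned Logger is
    // routed according to the matching <Logger name="..."> entry above.
    public static Logger getLogger(String name) {
        return LogManager.getLogger(name);
    }
}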
To receive the Log4j2 output on the Logstash side, we write a Logstash configuration file, micro-wiki.conf, as follows:
input {
    tcp {
        host => "0.0.0.0"
        port => "4560"
        mode => "server"
        type => "microwiki"
        add_field => {
            "name" => "Routh"
        }
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}
Logstash ships with a log4j input plugin, but it only works with log4j 1.x, not Log4j2, so we use the tcp input plugin instead; its parameters are documented at https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html.
In this configuration we use the stdout output plugin together with the rubydebug codec, so Logstash prints its output to the console in Ruby's debug format.
We can now start Logstash from the console:
./bin/logstash -f config/micro-wiki.conf
When the application prints a log message, Logstash outputs something like:
{
    "message" => "2015-12-08 12:57:45,178 INFO [qtp981012032-24] UserController (UserController.java:37) - hello tries to login",
    "@version" => "1",
    "@timestamp" => "2015-12-08T04:57:45.180Z",
    "host" => "172.30.20.8",
    "type" => "microwiki",
    "name" => "Routh"
}
To have Logstash forward the log entries to Elasticsearch, we extend the configuration with an output plugin named elasticsearch (documented at https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html):
input {
    tcp {
        host => "0.0.0.0"
        port => "4560"
        mode => "server"
        type => "microwiki"
        add_field => {
            "name" => "Routh"
        }
    }
    stdin {}
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        hosts => ["172.30.20.8:9200"]
        action => "index"
        codec => rubydebug
        index => "microwiki-%{+YYYY.MM.dd}"
        template_name => "microwiki"
    }
}
Next we edit Elasticsearch's configuration file, config/elasticsearch.yml; the relevant changes are:
cluster.name: MicroWiki-Cluster
node.name: microwiki-node1
network.host: 172.30.20.8
http.port: 9200
All other Elasticsearch settings keep their defaults. With the configuration in place, we start Elasticsearch:
bin/elasticsearch -d
Now, when the application prints a log message, the entry flows through Logstash into Elasticsearch, and we can inspect it via Elasticsearch's search API, as follows:
Request:
http://172.30.20.8:9200/microwiki-2015.12.08/_search
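For example, from the shell (host, port and daily index name are the ones configured above; ?pretty merely formats the JSON response):

curl -XGET 'http://172.30.20.8:9200/microwiki-2015.12.08/_search?pretty'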
Response (the Elasticsearch search result):
{
    "took": 2,
    "timed_out": false,
    "_shards": {
        "total": 5,
        "successful": 5,
        "failed": 0
    },
    "hits": {
        "total": 4,
        "max_score": 1,
        "hits": [
            {
                "_index": "microwiki-2015.12.08",
                "_type": "microwiki",
                "_id": "AVGALOrsilzU44B28xlP",
                "_score": 1,
                "_source": {
                    "message": "2015-12-08 14:00:04,884 INFO [qtp981012032-24] UserController (UserController.java:37) - hello tries to login",
                    "@version": "1",
                    "@timestamp": "2015-12-08T06:00:04.886Z",
                    "host": "172.30.20.8",
                    "type": "microwiki",
                    "name": "Routh"
                }
            },
            {
                "_index": "microwiki-2015.12.08",
                "_type": "microwiki",
                "_id": "AVGAMByJilzU44B28xlR",
                "_score": 1,
                "_source": {
                    "message": "2015-12-08 14:03:35,357 INFO [qtp981012032-22] UserController (UserController.java:37) - hello tries to login",
                    "@version": "1",
                    "@timestamp": "2015-12-08T06:03:35.358Z",
                    "host": "172.30.20.8",
                    "type": "microwiki",
                    "name": "Routh"
                }
            },
            {
                "_index": "microwiki-2015.12.08",
                "_type": "microwiki",
                "_id": "AVGAMCCIilzU44B28xlS",
                "_score": 1,
                "_source": {
                    "message": "2015-12-08 14:03:35,831 INFO [qtp981012032-22] UserController (UserController.java:37) - hello tries to login",
                    "@version": "1",
                    "@timestamp": "2015-12-08T06:03:35.831Z",
                    "host": "172.30.20.8",
                    "type": "microwiki",
                    "name": "Routh"
                }
            },
            {
                "_index": "microwiki-2015.12.08",
                "_type": "microwiki",
                "_id": "AVGAMByJilzU44B28xlQ",
                "_score": 1,
                "_source": {
                    "message": "2015-12-08 14:03:34,608 INFO [qtp981012032-25] UserController (UserController.java:37) - hello tries to login",
                    "@version": "1",
                    "@timestamp": "2015-12-08T06:03:34.609Z",
                    "host": "172.30.20.8",
                    "type": "microwiki",
                    "name": "Routh"
                }
            }
        ]
    }
}
Kibana provides a friendly user interface for querying Elasticsearch and presenting the results graphically. We edit Kibana's configuration file so that it points at our Elasticsearch instance; the relevant settings are:
server.host: "172.30.20.8"
server.port: 5601
elasticsearch.url: "http://172.30.20.8:9200"
Since all of the ELK components run on the same machine, every URL points to 172.30.20.8. We can now start Kibana, which automatically connects to Elasticsearch:
localhost:kibana-4.3.0-darwin-x64 routh$ bin/kibana
  log   [14:29:33.048] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
  log   [14:29:33.066] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [14:29:33.074] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
  log   [14:29:33.080] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
  log   [14:29:33.084] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
  log   [14:29:33.090] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
  log   [14:29:33.093] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
  log   [14:29:33.096] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready
  log   [14:29:33.097] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
  log   [14:29:33.105] [info][listening] Server running at http://172.30.20.8:5601
We can now open http://172.30.20.8:5601 in a browser to reach the Kibana UI:

Logstash's default index pattern is "logstash-%{+YYYY.MM.dd}", but our configuration changed it to "microwiki-%{+YYYY.MM.dd}", so we set that index pattern here:

Clicking the "Create" button creates the index pattern, which contains the following fields:

The navigation bar has four tabs: Discover, Visualize, Dashboard and Settings. Clicking "Discover" shows the following:

Note the red box in the top-right corner of the screenshot: there we can filter the data down to the time range we care about; in the screenshot it is set to "Today". We can search for the data we are interested in, or switch to "Visualize" to explore it graphically.
At this point we can see our log data. Most of the fields were attached by Logstash and Elasticsearch; only the message field is what our application actually printed. In Logstash we can drop fields we do not care about, such as host, and we can use Logstash's filter plugins to parse the log line itself, i.e. the content of message. That goes beyond the basic setup above; the hands-on part below takes a closer look.
The hands-on part below shows how to parse logs according to the log format a project actually uses. For example, in my project Log4j2 uses this PatternLayout:
<PatternLayout pattern="%d %p [%t] %C{1} (%F:%L) [%marker] - %m%n" />
My project writes logs in three different ways:
LogService.getLogger(LogService.REQUEST_LOG).info("{} tries to login", login);
LogService.getLogger(LogService.SERVICE_LOG).info(MarkerConstants.USER_MARKER,
        "{} tries to create wiki with title {}", UserUtils.getLoginUser(), title);
LogService.getLogger(LogService.SERVICE_LOG).info(MarkerConstants.WIKI_MARKER,
        "title={} header={} content={} footer={}",
        wiki.getTitle(), wiki.getHeader(), wiki.getContent(), wiki.getFooter());
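MarkerConstants is another project helper the post does not show. A minimal sketch, assuming the markers are created with Log4j2's MarkerManager and named to match the [%marker] field in the log output below (the class and constant names are hypothetical):

import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;

// Hypothetical constants class: MarkerManager caches markers by name,
// so these render as [User] and [Wiki] in the %marker pattern field.
public final class MarkerConstants {
    public static final Marker USER_MARKER = MarkerManager.getMarker("User");
    public static final Marker WIKI_MARKER = MarkerManager.getMarker("Wiki");

    private MarkerConstants() {}
}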
They produce the following log lines:
2015-12-11 10:37:57,291 INFO [qtp1975171943-26] UserController (UserController.java:27) [] - hello tries to login
2015-12-11 10:38:42,277 INFO [qtp1975171943-24] WikiController (WikiController.java:31) [User] - UserModel{name='Routh', sex=1, age=25} tries to create wiki with title c95
2015-12-11 10:38:42,278 INFO [qtp1975171943-24] WikiController (WikiController.java:36) [Wiki] - title=c95 header=c95_header content=c95_content footer=c95_footer
Here we tag the logs with two Markers (User and Wiki), which makes them easier to process: the User marker records user activity, while the Wiki marker records each part of a wiki entry in key=value form. (This is not a real project, just a sample.)
To parse this log format in Logstash, I use two filters, grok and kv; the configuration is as follows (the input and output sections are the same as in the earlier example):
filter {
    grok {
        match => {
            "message" => "(?<datetime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3})\s(?<level>\w+)\s\[(?<thread>\S+)\]\s(?<class>\S+)\s\((?<file>\S+):(?<line>\d+)\)\s\[(?<marker>\w*)\]\s-\s(?<msg>.*)"
        }
    }
    if [marker] == "Wiki" {
        kv {
            include_keys => [ "title", "header", "content", "footer" ]
        }
    }
}
Here I use a custom grok pattern (grok also ships with roughly 120 built-in patterns) to parse the log line, and for events whose marker is "Wiki" I apply the kv filter to extract the title, header, content and footer fields. The parsed events look like this:
{
    "message" => "2015-12-10 19:51:43,367 INFO [qtp1975171943-23] WikiController (WikiController.java:31) [User] - UserModel{name='Routh', sex=1, age=25} tries to create wiki with title c75",
    "@version" => "1",
    "@timestamp" => "2015-12-10T11:51:43.367Z",
    "host" => "172.30.20.8",
    "type" => "microwiki",
    "name" => "Routh",
    "datetime" => "2015-12-10 19:51:43,367",
    "level" => "INFO",
    "thread" => "qtp1975171943-23",
    "class" => "WikiController",
    "file" => "WikiController.java",
    "line" => "31",
    "marker" => "User",
    "msg" => "UserModel{name='Routh', sex=1, age=25} tries to create wiki with title c75"
}
{
    "message" => "2015-12-10 19:51:43,367 INFO [qtp1975171943-23] WikiController (WikiController.java:36) [Wiki] - title=c75 header=c75_header content=c75_content footer=c75_footer",
    "@version" => "1",
    "@timestamp" => "2015-12-10T11:51:43.367Z",
    "host" => "172.30.20.8",
    "type" => "microwiki",
    "name" => "Routh",
    "datetime" => "2015-12-10 19:51:43,367",
    "level" => "INFO",
    "thread" => "qtp1975171943-23",
    "class" => "WikiController",
    "file" => "WikiController.java",
    "line" => "36",
    "marker" => "Wiki",
    "msg" => "title=c75 header=c75_header content=c75_content footer=c75_footer",
    "title" => "c75",
    "header" => "c75_header",
    "content" => "c75_content",
    "footer" => "c75_footer"
}
These are two log entries after parsing. Some of the fields are added by Logstash itself, and message holds the original, unparsed log line. If we do not want to keep certain fields, we can drop them with the remove_field option (a common option supported by grok and every other filter plugin), for example:
grok {
    remove_field => [ "message", "name", "type" ]
}
At this point we know how to parse our log lines with Logstash. Once parsed, the events are shipped to Elasticsearch for indexing. Finally, we can inspect the log data in Kibana, as shown below:

In this example, all of the fields produced by the Logstash parsing stage are sent to Elasticsearch. If we want to see how many wikis were created over some period of time, we can build a view like this:

Typing "_exists_:title" into the search box filters for log entries that contain a title field; the x-axis buckets the events by minute, the y-axis shows the corresponding counts, and the time range is selected in the top-right corner. Kibana searches use Lucene query syntax; see https://lucene.apache.org/core/2_9_4/queryparsersyntax.html for details.
We have now seen how to parse application logs with Logstash's grok filter and how to search specific fields in Kibana and display them graphically.
In summary, this article covered how to ship Log4j2 logs to ELK and how to use ELK to collect, process and visualize log data. To get the most out of Log4j2 + ELK, further study of both is worthwhile.