
[![License](http://img.shields.io/badge/License-Apache_2-red.svg?style=flat)](http://www.apache.org/licenses/LICENSE-2.0)
[![JDK](http://img.shields.io/badge/JDK-v8.0-yellow.svg)](http://www.oracle.com/technetwork/java/javase/downloads/index.html)
[![Build](http://img.shields.io/badge/Build-Maven_2-green.svg)](https://maven.apache.org/)
[![Maven Central](https://img.shields.io/maven-central/v/com.github.yingzhuo/logback-flume-appender.svg?label=Maven%20Central)](https://search.maven.org/search?q=g:%22com.github.yingzhuo%22%20AND%20a:%22logback-flume-appender%22)

# logback-flume-appender

This project is not originally mine. The only jar I could find for shipping business logs produced by logback to Flume, and ultimately on to HDFS/Hive, was the following:

```xml
<dependency>
    <groupId>com.teambytes.logback</groupId>
    <artifactId>logback-flume-appender_2.11</artifactId>
    <version>0.0.9</version>
</dependency>
```

For compatibility and other reasons, I made a few changes:

* Rewrote the original author's Scala code in Java.
* Upgraded flume-ng-sdk to version 1.9.0.
* Raised the minimum JDK requirement to 1.8.

**My apologies to the original author.**

Maven coordinates of the rewritten jar:

```xml
<dependency>
    <groupId>com.github.yingzhuo</groupId>
    <artifactId>logback-flume-appender</artifactId>
    <version>1.0.0</version>
</dependency>
```
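
With the dependency on the classpath, application code keeps logging through the plain slf4j API; once the appender is configured in `logback.xml` (see below), those events are forwarded to Flume. A minimal sketch (class name and messages are illustrative):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public static void main(String[] args) {
        // These events go through logback as usual; with the Flume appender
        // attached, they are also shipped to the configured Flume agents.
        log.info("order created: id={}", 12345);
        log.error("order failed", new IllegalStateException("boom"));
    }
}
```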

### Usage

1) Flume agent configuration

```config
myagent.sources = mysource
myagent.channels = mychannel
myagent.sinks = mysink

# sources
myagent.sources.mysource.type = avro
myagent.sources.mysource.bind = 0.0.0.0
myagent.sources.mysource.port = 4141

# channel selector
myagent.sources.mysource.selector.type = replicating

# channels
myagent.channels.mychannel.type = org.apache.flume.channel.kafka.KafkaChannel
myagent.channels.mychannel.kafka.bootstrap.servers = 192.168.99.127:9092,192.168.99.128:9092,192.168.99.129:9092
myagent.channels.mychannel.kafka.topic = flume-channel
myagent.channels.mychannel.kafka.group.id = flume

# sinks
myagent.sinks.mysink.type = hdfs
myagent.sinks.mysink.hdfs.path = hdfs://192.168.99.130:8020/%{application}/log/%{type}/%Y-%m-%d
myagent.sinks.mysink.hdfs.useLocalTimeStamp = true
myagent.sinks.mysink.hdfs.fileType = CompressedStream
myagent.sinks.mysink.hdfs.codeC = lzop
myagent.sinks.mysink.hdfs.fileSuffix = .lzo
myagent.sinks.mysink.hdfs.writeFormat = Text
myagent.sinks.mysink.hdfs.round = true
myagent.sinks.mysink.hdfs.rollInterval = 600
myagent.sinks.mysink.hdfs.rollSize = 268435456
myagent.sinks.mysink.hdfs.rollCount = 0
myagent.sinks.mysink.hdfs.timeZone = Asia/Shanghai

# wiring: bind the source and sink to the channel
myagent.sources.mysource.channels = mychannel
myagent.sinks.mysink.channel = mychannel
```
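
Note that `%{application}` and `%{type}` in the HDFS sink path are Flume event headers; the logback appender fills them from its `<application>` and `<type>` settings (see the snippet below), so logs from different applications and log types land in separate directories. For example, assuming `application=myapp` and `type=access`:

```config
# events logged on 2019-06-01 would be written under
hdfs://192.168.99.130:8020/myapp/log/access/2019-06-01
```

The agent itself can then be started with the standard `flume-ng agent --conf conf --conf-file myagent.conf --name myagent` command.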

2) logback configuration (snippet)

```xml
<!-- The appender class FQCN below is an assumption; check the jar for the exact name. -->
<appender name="FLUME" class="com.github.yingzhuo.logback.flume.FlumeAvroAppender">
    <flumeAgents>
        10.211.55.3:4141,
        10.211.55.4:4141,
        10.211.55.5:4141
    </flumeAgents>
    <flumeProperties>
        connect-timeout=4000;
        request-timeout=8000
    </flumeProperties>
    <batchSize>100</batchSize>
    <reportingWindow>1000</reportingWindow>
    <application>my application</application>
    <tier>my tier</tier>
    <type>my log type</type>
    <tag>my tag</tag>
    <additionalAvroHeaders>
        key1 = value1;
        key2 = value2
    </additionalAvroHeaders>
    <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%message%n%ex</pattern>
    </layout>
</appender>
```
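
The appender also needs to be attached to a logger. A standard logback reference, assuming the appender above is named `FLUME`:

```xml
<root level="INFO">
    <appender-ref ref="FLUME"/>
</root>
```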

> **Note:** When multiple agents are configured, each log event is sent to only one of them.

### License

[Apache License](LICENSE)