{"id":19993352,"url":"https://github.com/JThink/SkyEye","last_synced_at":"2025-05-04T12:31:21.765Z","repository":{"id":18073030,"uuid":"83278537","full_name":"JThink/SkyEye","owner":"JThink","description":"对java、scala等运行于jvm的程序进行实时日志采集、索引和可视化，对系统进行进程级别的监控，对系统内部的操作进行策略性的报警、对分布式的rpc调用进行trace跟踪以便于进行性能分析","archived":false,"fork":false,"pushed_at":"2022-06-17T01:50:36.000Z","size":17090,"stargazers_count":865,"open_issues_count":4,"forks_count":399,"subscribers_count":111,"default_branch":"master","last_synced_at":"2024-11-13T04:56:22.311Z","etag":null,"topics":["apm","capacity-planning","deployment-assistant","dubbo","dubbox","log-collect","log-indexer","log-visualization","log4j","log4j-kafka-appender","log4j2","log4j2-kafka-appender","logback","logback-kafka-appender","rpc-trace","skyeye","spring-cloud","system-alarm","system-monitor","tracer"],"latest_commit_sha":null,"homepage":"","language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/JThink.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-02-27T06:56:59.000Z","updated_at":"2024-10-27T02:41:11.000Z","dependencies_parsed_at":"2022-08-31T19:02:02.120Z","dependency_job_id":null,"html_url":"https://github.com/JThink/SkyEye","commit_stats":null,"previous_names":[],"tags_count":8,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JThink%2FSkyEye","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JThink%2FSkyEye/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JThink%2FSkyEye/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/
GitHub/repositories/JThink%2FSkyEye/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/JThink","download_url":"https://codeload.github.com/JThink/SkyEye/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252334357,"owners_count":21731391,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["apm","capacity-planning","deployment-assistant","dubbo","dubbox","log-collect","log-indexer","log-visualization","log4j","log4j-kafka-appender","log4j2","log4j2-kafka-appender","logback","logback-kafka-appender","rpc-trace","skyeye","spring-cloud","system-alarm","system-monitor","tracer"],"created_at":"2024-11-13T04:52:36.451Z","updated_at":"2025-05-04T12:31:16.756Z","avatar_url":"https://github.com/JThink.png","language":"Java","readme":"# SkyEye\n对java、scala等运行于jvm的程序进行实时日志采集、索引和可视化，对系统进行进程级别的监控，对系统内部的操作进行策略性的报警、对分布式的rpc调用进行trace跟踪以便于进行性能分析\n\n# 交流方式\n\n1. QQ群: 624054633\n2. Email: leviqian@sina.com\n3. 
blog: [blog](http://blog.csdn.net/jthink_)\n\n# 架构\n![](architecture.png)\n- APP: 接入skyeye-client的系统会通过kafkaAppender向kafka写入日志\n- es-indexer-group: kafka的es消费组，读取kafka的数据并批量bulk到es\n- monitor-group: kafka的监控消费组，app在日志中进行各种event埋点（如：第三方异常报警、请求耗时异常报警等）\n- business-group: kafka的业务消费组\n- trace-group: 通过日志进行rpc调用trace跟踪（dapper论文）\n- es: 日志存储db，并建立相关索引\n- zookeeper: app注册中心\n- monitor: 监控中心，监听zookeeper注册中心中相应的节点变化进行监控报警\n- rabbitmq: 监控报警缓冲队列\n- alert: 具体报警手段，包括邮件和微信\n\n# 项目介绍\n对java、scala等运行于jvm的程序进行实时日志采集、索引和可视化，对系统进行进程级别的监控，对系统内部的操作进行策略性的报警、对分布式的rpc调用进行trace跟踪以便于进行性能分析\n\n- 日志实时采集（支持log4j、logback和log4j2）\n- 日志页面实时展示（支持关键字过滤）\n- 历史日志查询（支持多种条件过滤，支持sql语句查询）\n- app实时部署位置展示（机器和文件夹）\n- app实时日志采集状态展示\n- app历史部署位置展示\n- api请求实时统计和历史统计\n- 第三方请求实时统计和历史统计\n- 基于dubbox的rpc调用数据收集和调用链展示（支持多种条件检索）\n- 系统上下线报警\n- 系统内嵌采集器报警\n- 中间件、api、第三方、job执行异常报警（策略报警和异常报警）\n\n# 部署步骤\n\n修改根目录gradle文件中的私服地址（这样才能打包deploy到自己的本地私服）\n打包：gradle clean install upload -x test\n\n## 容器部署\n\n需要自己修改每个项目下的image下的Dockerfile文件\n\nPS: rancher一键部署skyeye后期出教程，基本符合持续交付的场景。\n\n```shell\nsudo bash build.sh 1.3.0 master\n```\n\n## skyeye-base\n\n本项目没有具体的业务逻辑，主要是各个模块通用的类定义，如：常量、dto、dapper相关、公用util，所以该项目无需部署，只需要打包。\n\n## skyeye-client\n\n本项目主要是提供给对接的项目使用，包含了log4j和logback的自定义appender和项目注册相关，所以该项目无需部署，只需要打包提供给对接方对接。\n\n## skyeye-data\n\n本项目主要是用来提供和数据操作相关的中间件，具体分为以下5个子module。本项目无需部署，只需要打包。\n\n### skyeye-data-dubbox\n\n该项目主要是自定义的spring-boot的dubbox starter，为spring-boot相关的项目使用dubbox提供简易的方式并集成spring-boot的auto configuration，见我的另一个开源项目：[spring-boot-starter-dubbox](https://github.com/JThink/spring-boot-starter-dubbox)\n\n### skyeye-data-hbase\n\n该项目主要是自定义的spring-boot的hbase starter，为hbase的query和更新等操作提供简易的api并集成spring-boot的auto configuration，见我的另一个开源项目：[spring-boot-starter-hbase](https://github.com/JThink/spring-boot-starter-hbase)\n\n### skyeye-data-httpl\n\n该项目主要使用连接池简单封装了http的请求，如果项目中使用的spring版本较高可以使用RestTemplate代替。\n\n### skyeye-data-jpa\n\n该项目主要是jpa相关的定义，包含domain、repository、dto相关的定义，主要用来操作mysql的查询。\n\n### 
skyeye-data-rabbitmq\n\n该项目主要封装了报警模块中存取rabbitmq中消息的相关代码。\n\n## skyeye-trace\n\n该项目封装了所有rpc trace相关的代码，包含rpc数据采集器、分布式唯一ID生成、分布式递增ID生成、注册中心、采样器、跟踪器等功能，该项目无需部署，只需要打包。\n\n### dubbox\n\n由于使用dubbox，为了能够采集到dubbox里面的rpc数据，需要修改dubbox的源码，见我修改的dubbox项目：[dubbox](https://github.com/JThink/dubbox/tree/skyeye-trace-1.3.0)，该项目主要实现了rpc跟踪的具体实现，需要单独打包。\n\n```shell\ngit clone https://github.com/JThink/dubbox.git\ncd dubbox\ngit checkout skyeye-trace-1.3.0\n修改相关pom中的私服地址\nmvn clean install deploy -Dmaven.test.skip=true\n```\n\n## 软件安装\n\n如果软件版本和以下所列不一致，需要修改gradle中的依赖版本，并且需自行测试可用性（hadoop、hbase、spark等相应的版本可以自己来指定，代码层面无需修改，需要修改依赖）。\n\n| 软件名           | 版本             | 备注                                       |\n| :------------ | -------------- | ---------------------------------------- |\n| mysql         | 5.5+           |                                          |\n| elasticsearch | 2.3.3          | 未测试5.x版本（开发的时候最新版本只有2.3.x），需要架设sql引擎，见: [elasticsearch-sql](https://github.com/NLPchina/elasticsearch-sql/)，需要安装IK分词并启动，见: [es ik分词](http://blog.csdn.net/jthink_/article/details/51878738) |\n| kafka         | 0.10.0.1       | 如果spark的版本较低，那么需要将kafka的日志的格式降低，具体在kafka的配置项加入：log.message.format.version=0.8.2，该项按需配置 |\n| jdk           | 1.7+           |                                          |\n| zookeeper     | 3.4.6          |                                          |\n| rabbitmq      | 3.5.7          |                                          |\n| hbase         | 1.0.0-cdh5.4.0 | 不支持1.x以下的版本，比如0.9x.x                     |\n| gradle        | 3.0+           |                                          |\n| hadoop        | 2.6.0-cdh5.4.0 |                                          |\n| spark         | 1.3.0-cdh5.4.0 |                                          |\n| redis         | 3.x            | 单机版即可                                    |\n\n### 初始化\n\n### mysql\n\n```shell\nmysql -uroot -p\nsource skyeye-data/skyeye-data-jpa/src/main/resources/sql/init.sql\n```\n\n### 
hbase\n\n创建三张表，用来保存rpc的数据（一张数据表，两张二级索引表）\n\n```Shell\nhbase shell\n执行skyeye-collector/skyeye-collector-trace/src/main/resources/shell/hbase这个文件里面的内容\n```\n\n### elasticsearch\n\n首先安装相应的es python的module，然后再创建索引，根据需要修改es的ip、端口\n\n```shell\ncd skyeye-collector/skyeye-collector-indexer/src/main/resources/shell\n./install.sh\nbash start.sh app-log http://192.168.xx.xx:9200,http://192.168.xx.xx:9200,......\ncd skyeye-collector/skyeye-collector-metrics/src/main/resources/shell\nbash start.sh event-log http://192.168.xx.xx:9200,http://192.168.xx.xx:9200,......\n\n注意点：如果es版本为5.x，那么需要修改skyeye-collector/src/main/resources/shell/es/app-log/create-index.py的49和50行为下面内容：\n'messageSmart': { 'type': 'text', 'analyzer': 'ik_smart', 'search_analyzer': 'ik_smart', 'include_in_all': 'true', 'boost': 8},\n'messageMax': { 'type': 'text', 'analyzer': 'ik_max_word', 'search_analyzer': 'ik_max_word', 'include_in_all': 'true', 'boost': 8}\n```\n\n### kafka\n\n创建相应的topic，根据需要修改--partitions和zk的ip、端口的值，如果日志量特别大可以适当提高这个值\n\n```Shell\nkafka-topics.sh --create --zookeeper 192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181/kafka/0.10.0.1 --replication-factor 3 --partitions 9 --topic app-log\n```\n\n### zookeeper\n\n初始化注册中心的节点信息\n\n```shell\n./zkCli.sh\n执行skyeye-monitor/src/main/resources/shell/zk这个文件里面的内容\n```\n\n### rabbitmq\n\n相关项目启动的时候会自动创建相关的队列\n\n## skyeye-alarm\n\n### 配置文件\n\n配置文件外部化，需要在机器上创建配置文件\n\n```shell\nssh 到部署节点\nmkdir -p /opt/jthink/jthink-config/skyeye/alarm\nvim alarm.properties\n\n# log_mailer request queue\nrabbit.request.addresses=localhost:5672\nrabbit.request.username=jthink\nrabbit.request.password=jthink\nrabbit.request.vhost=/dev\nrabbit.request.channelCacheSize=50\nrabbit.request.queue=log_mailer\nrabbit.request.exchange=direct.log\nrabbit.request.routingKey=log.key\n\n# mail\nmail.jthink.smtphost=smtp.xxx.com\nmail.jthink.port=25\nmail.jthink.from=xxx@xxx.com\nmail.jthink.cc=xxx@xxx.com\nmail.jthink.password=jthink_0926\n```\n\n需要修改rabbitmq和邮件相关的配置\n\n### 
打包部署\n\n```shell\ncd skyeye-alarm\ngradle clean distZip -x test\ncd target/distributions\nunzip skyeye-alarm-x.x.x.zip(替换相应的x为自己的版本)\n\ncd skyeye-alarm-x.x.x\nnohup bin/skyeye-alarm \u0026\n```\n\n## skyeye-collector\n\n本项目从v1.0.0版本开始按不同的kafka消费group组织子module以实现可插拔的功能模块，主要包含如下5个module：\n\n- skyeye-collector-core: 收集项目的所有公用的配置和公用代码，该module不需要部署\n- skyeye-collector-backup: 对采集的所有日志进行备份\n- skyeye-collector-indexer: 对采集的所有日志进行索引存入es\n- skyeye-collector-metrics: 对事件日志进行meta data的采集和相关报警metrics进行索引存入es\n- skyeye-collector-trace: 对rpc跟踪数据进行采集入hbase\n\n## 打包\n\n```shell\ncd skyeye-collector\ngradle clean build -x test\n```\n\n### skyeye-collector-backup\n\n#### 配置文件\n\n配置文件外部化，需要在机器上创建配置文件，根据对接系统的个数和产生日志的量进行部署，最好部署3个节点（每个节点消费3个partition的数据）\n\n```shell\nssh 到部署节点\nmkdir -p /opt/jthink/jthink-config/skyeye/collector\nvim collector-backup.properties\n\n# kafka config\nkafka.brokers=riot01:9092,riot02:9092,riot03:9092\nkafka.topic=app-log\nkafka.consume.group=log-backup-consume-group\nkafka.poll.timeout=100\n\n# hdfs\nhadoop.hdfs.namenode.port=8020\nhadoop.hdfs.namenode.host=192.168.88.131\nhadoop.hdfs.user=xxx\nhadoop.hdfs.baseDir=/user/xxx/JThink/\nhadoop.hdfs.fileRoot=/tmp/monitor-center/\nupload.log.cron=0 30 0 * * ?\n```\n\n### 部署\n\n多个节点部署需要部署多次\n\n```shell\ncd skyeye-collector-backup/target/distributions\nunzip skyeye-collector-backup-x.x.x.zip(替换相应的x为自己的版本)\n\ncd skyeye-collector-backup-x.x.x\nnohup bin/skyeye-collector-backup \u0026\n```\n### skyeye-collector-indexer\n\n#### 配置文件\n\n配置文件外部化，需要在机器上创建配置文件，根据对接系统的个数和产生日志的量进行部署，最好部署3个节点（每个节点消费3个partition的数据）\n\n```shell\nssh 到部署节点\nmkdir -p /opt/jthink/jthink-config/skyeye/collector\nvim collector-indexer.properties\n\n# kafka config\nkafka.brokers=riot01:9092,riot02:9092,riot03:9092\nkafka.topic=app-log\nkafka.consume.group=es-indexer-consume-group\nkafka.poll.timeout=100\n\n# es config\nes.ips=riot01,riot02,riot03\nes.cluster=mondeo\nes.port=9300\nes.sniff=true\nes.index=app-log\nes.doc=log\n```\n\n### 
部署\n\n多个节点部署需要部署多次\n\n```shell\ncd skyeye-collector-indexer/target/distributions\nunzip skyeye-collector-indexer-x.x.x.zip(替换相应的x为自己的版本)\n\ncd skyeye-collector-indexer-x.x.x\nnohup bin/skyeye-collector-indexer \u0026\n```\n\n### skyeye-collector-metrics\n\n#### 配置文件\n\n配置文件外部化，需要在机器上创建配置文件，根据对接系统的个数和产生日志的量进行部署，最好部署3个节点（每个节点消费3个partition的数据）\n\n```shell\nssh 到部署节点\nmkdir -p /opt/jthink/jthink-config/skyeye/collector\nvim collector-metrics.properties\n\n# kafka config\nkafka.brokers=riot01:9092,riot02:9092,riot03:9092\nkafka.topic=app-log\nkafka.consume.group=info-collect-consume-group\nkafka.poll.timeout=100\n\n# es config\nes.ips=riot01,riot02,riot03\nes.cluster=mondeo\nes.port=9300\nes.sniff=true\nes.index=event-log\nes.doc=log\n\n# redis config\nredis.host=localhost\nredis.port=6379\nredis.password=\n\n# mysql config\ndatabase.address=localhost:3306\ndatabase.name=monitor-center\ndatabase.username=root\ndatabase.password=root\n\n# log_mailer request queue\nrabbit.request.addresses=localhost:5672\nrabbit.request.username=jthink\nrabbit.request.password=jthink\nrabbit.request.vhost=/dev\nrabbit.request.channelCacheSize=50\nrabbit.request.queue=log_mailer\nrabbit.request.exchange=direct.log\nrabbit.request.routingKey=log.key\n\n# zk\nzookeeper.zkServers=riot01:2181,riot02:2181,riot03:2181\nzookeeper.sessionTimeout=60000\nzookeeper.connectionTimeout=5000\n```\n\n### 部署\n\n多个节点部署需要部署多次\n\n```shell\ncd skyeye-collector-metrics/target/distributions\nunzip skyeye-collector-metrics-x.x.x.zip(替换相应的x为自己的版本)\n\ncd skyeye-collector-metrics-x.x.x\nnohup bin/skyeye-collector-metrics \u0026\n```\n\n### skyeye-collector-trace\n\n#### 配置文件\n\n配置文件外部化，需要在机器上创建配置文件，根据对接系统的个数和产生日志的量进行部署，最好部署3个节点（每个节点消费3个partition的数据）\n\n```shell\nssh 到部署节点\nmkdir -p /opt/jthink/jthink-config/skyeye/collector\nvim collector-trace.properties\n\n# kafka config\nkafka.brokers=riot01:9092,riot02:9092,riot03:9092\nkafka.topic=app-log\nkafka.consume.group=rpc-trace-consume-group\nkafka.poll.timeout=100\n\n# 
redis config\nredis.host=localhost\nredis.port=6379\nredis.password=\n\n# mysql config\ndatabase.address=localhost:3306\ndatabase.name=monitor-center\ndatabase.username=root\ndatabase.password=root\n\n# hbase config\nhbase.quorum=panda-01,panda-01,panda-03\nhbase.rootDir=hdfs://panda-01:8020/hbase\nhbase.zookeeper.znode.parent=/hbase\n```\n\n### 部署\n\n多个节点部署需要部署多次\n\n```shell\ncd skyeye-collector-trace/target/distributions\nunzip skyeye-collector-trace-x.x.x.zip(替换相应的x为自己的版本)\n\ncd skyeye-collector-trace-x.x.x\nnohup bin/skyeye-collector-trace \u0026\n```\n\n## skyeye-monitor\n\n### 配置文件\n\n配置文件外部化，需要在机器上创建配置文件\n\n```shell\nssh 到部署节点\nmkdir -p /opt/jthink/jthink-config/skyeye/monitor\nvim monitor.properties\n\n# zk\nzookeeper.zkServers=riot01:2181,riot02:2181,riot03:2181\nzookeeper.sessionTimeout=60000\nzookeeper.connectionTimeout=5000\nzookeeper.baseSleepTimeMs=1000\nzookeeper.maxRetries=3\n\n# log_mailer request queue\nrabbit.request.addresses=localhost:5672\nrabbit.request.username=jthink\nrabbit.request.password=jthink\nrabbit.request.vhost=/dev\nrabbit.request.channelCacheSize=50\nrabbit.request.queue=log_mailer\nrabbit.request.exchange=direct.log\nrabbit.request.routingKey=log.key\n\n# mysql config\ndatabase.address=localhost:3306\ndatabase.name=monitor-center\ndatabase.username=root\ndatabase.password=root\n```\n\n需要修改相关的配置（rabbitmq的配置需和alarm一致，zk也需要前后一致）\n\n### 打包部署\n\n```shell\ncd skyeye-monitor\ngradle clean distZip -x test\ncd target/distributions\nunzip skyeye-monitor-x.x.x.zip(替换相应的x为自己的版本)\n\ncd skyeye-monitor-x.x.x\nnohup bin/skyeye-monitor \u0026\n```\n## skyeye-web\n\n### 配置文件\n\n配置文件外部化，需要在机器上创建配置文件\n\n```shell\nssh 到部署节点\nmkdir -p /opt/jthink/jthink-config/skyeye/web\nvim web.properties\n\n# server\nserverAddress=0.0.0.0\nserverPort=8090\n\n# mysql config\ndatabase.address=localhost:3306\ndatabase.name=monitor-center\ndatabase.username=root\ndatabase.password=root\n\n# es sql url\nes.sql.url=http://riot01:9200/_sql?sql=\nes.sql.sql=select * 
from app-log/log\nes.query.delay=10\nes.sql.index.event=event-log/log\n\n# log_mailer request queue\nrabbit.request.addresses=localhost:5672\nrabbit.request.username=jthink\nrabbit.request.password=jthink\nrabbit.request.vhost=/dev\nrabbit.request.channelCacheSize=50\nrabbit.request.queue=log_mailer\nrabbit.request.exchange=direct.log\nrabbit.request.routingKey=log.key\n\n# monitor\nmonitor.es.interval=0 */1 * * * ?\t\t\t\t\t# 监控代码执行的周期，建议不修改\nmonitor.es.mail=leviqian@sina.com\n\n# hbase config\nhbase.quorum=panda-01,panda-01,panda-03\nhbase.rootDir=hdfs://panda-01:8020/hbase\nhbase.zookeeper.znode.parent=/hbase\n```\n\n需要修改相关的配置（rabbitmq的配置需和alarm一致，es也需要前后一致），注释过的是要注意的\n\n### 打包部署\n\n```shell\ncd skyeye-web\ngradle clean distZip -x test\ncd target/distributions\nunzip skyeye-web-x.x.x.zip(替换相应的x为自己的版本)\n\ncd skyeye-web-x.x.x\nnohup bin/skyeye-web \u0026\n```\n\n# 项目对接\n\n需要进行日志采集的项目需要按照如下操作\n\n## logback\n### 依赖\ngradle或者pom中加入skyeye-client的依赖\n\n``` xml\ncompile \"skyeye:skyeye-client-logback:1.3.0\"\n```\n### 配置\n在logback.xml中加入一个kafkaAppender，并在properties中配置好相关的值，如下（rpc这个项目前支持none和dubbo，所以如果项目中有dubbo服务的配置成dubbo，没有dubbo服务的配置成none，以后会支持其他的rpc框架，如：thrift、spring cloud等）：\n\n``` xml\n\u003cproperty name=\"APP_NAME\" value=\"your-app-name\" /\u003e\n\u003c!-- kafka appender --\u003e\n\u003cappender name=\"kafkaAppender\" class=\"com.jthink.skyeye.client.logback.appender.KafkaAppender\"\u003e\n    \u003cencoder class=\"com.jthink.skyeye.client.logback.encoder.KafkaLayoutEncoder\"\u003e\n      \u003clayout class=\"ch.qos.logback.classic.PatternLayout\"\u003e\n        \u003cpattern\u003e%d{yyyy-MM-dd HH:mm:ss.SSS};${CONTEXT_NAME};HOSTNAME;%thread;%-5level;%logger{96};%line;%msg%n\u003c/pattern\u003e\n      \u003c/layout\u003e\n    \u003c/encoder\u003e\n    \u003ctopic\u003eapp-log\u003c/topic\u003e\n    \u003crpc\u003enone\u003c/rpc\u003e\n    \u003czkServers\u003eriot01.jthink.com:2181,riot02.jthink.com:2181,riot03.jthink.com:2181\u003c/zkServers\u003e\n    
\u003cmail\u003exxx@xxx.com\u003c/mail\u003e\n    \u003ckeyBuilder class=\"com.jthink.skyeye.client.logback.builder.AppHostKeyBuilder\" /\u003e\n\n    \u003cconfig\u003ebootstrap.servers=riot01.jthink.com:9092,riot02.jthink.com:9092,riot03.jthink.com:9092\u003c/config\u003e\n    \u003cconfig\u003eacks=0\u003c/config\u003e\n    \u003cconfig\u003elinger.ms=100\u003c/config\u003e\n    \u003cconfig\u003emax.block.ms=5000\u003c/config\u003e\n  \u003c/appender\u003e\n```\n## log4j\n### 依赖\ngradle或者pom中加入skyeye-client的依赖\n\n``` xml\ncompile \"skyeye:skyeye-client-log4j:1.3.0\"\n```\n### 配置\n在log4j.xml中加入一个kafkaAppender，并在properties中配置好相关的值，如下（rpc这个项目前支持none和dubbo，所以如果项目中有dubbo服务的配置成dubbo，没有dubbo服务的配置成none，以后会支持其他的rpc框架，如：thrift、spring cloud等）：\n\n``` xml\n\u003cappender name=\"kafkaAppender\" class=\"com.jthink.skyeye.client.log4j.appender.KafkaAppender\"\u003e\n        \u003cparam name=\"topic\" value=\"app-log\"/\u003e\n        \u003cparam name=\"zkServers\" value=\"riot01.jthink.com:2181,riot02.jthink.com:2181,riot03.jthink.com:2181\"/\u003e\n        \u003cparam name=\"app\" value=\"xxx\"/\u003e\n        \u003cparam name=\"rpc\" value=\"dubbo\"/\u003e\n        \u003cparam name=\"mail\" value=\"xxx@xxx.com\"/\u003e\n        \u003cparam name=\"bootstrapServers\" value=\"riot01.jthink.com:9092,riot02.jthink.com:9092,riot03.jthink.com:9092\"/\u003e\n        \u003cparam name=\"acks\" value=\"0\"/\u003e\n        \u003cparam name=\"maxBlockMs\" value=\"2000\"/\u003e\n        \u003cparam name=\"lingerMs\" value=\"100\"/\u003e\n\n        \u003clayout class=\"org.apache.log4j.PatternLayout\"\u003e\n            \u003cparam name=\"ConversionPattern\" value=\"%d{yyyy-MM-dd HH:mm:ss.SSS};APP_NAME;HOSTNAME;%t;%p;%c;%L;%m%n\"/\u003e\n        \u003c/layout\u003e\n    \u003c/appender\u003e\n```\n## log4j2\n\n### 依赖\n\ngradle或者pom中加入skyeye-client的依赖\n\n``` xml\ncompile \"skyeye:skyeye-client-log4j2:1.3.0\"\n```\n\n### 
配置\n\n在log4j2.xml中加入一个KafkaCustomize，并在properties中配置好相关的值，如下（rpc这个项目前支持none和dubbo，所以如果项目中有dubbo服务的配置成dubbo，没有dubbo服务的配置成none，以后会支持其他的rpc框架，如：thrift、spring cloud等）：\n\n```xml\n\u003cKafkaCustomize name=\"KafkaCustomize\" topic=\"app-log\" zkServers=\"riot01.jthink.com:2181,riot02.jthink.com:2181,riot03.jthink.com:2181\"\n                mail=\"qianjc@unionpaysmart.com\" rpc=\"none\" app=\"${APP_NAME}\"\u003e\n  \u003cThresholdFilter level=\"info\" onMatch=\"ACCEPT\" onMismatch=\"DENY\"/\u003e\n  \u003cPatternLayout pattern=\"%d{yyyy-MM-dd HH:mm:ss.SSS};${APP_NAME};HOSTNAME;%t;%-5level;%logger{96};%line;%msg%n\"/\u003e\n  \u003cProperty name=\"bootstrap.servers\"\u003eriot01.jthink.com:9092,riot02.jthink.com:9092,riot03.jthink.com:9092\u003c/Property\u003e\n  \u003cProperty name=\"acks\"\u003e0\u003c/Property\u003e\n  \u003cProperty name=\"linger.ms\"\u003e100\u003c/Property\u003e\n\u003c/KafkaCustomize\u003e\n```\n\n## 注意点\n\n### logback\n- logback在对接kafka的时候有个bug，[jira bug](https://jira.qos.ch/browse/LOGBACK-1328)，所以需要将root level设置为INFO（不能是DEBUG）\n\n### log4j\n由于log4j本身的appender比较复杂难写，所以在稳定性和性能上没有logback支持得好，应用能使用logback请尽量使用logback\n### rpc trace\n使用自己打包的dubbox（[dubbox](https://github.com/JThink/dubbox/tree/skyeye-trace-1.3.0)），在soa中间件dubbox中封装了rpc的跟踪\n\n``` groovy\ncompile \"com.101tec:zkclient:0.10\"\ncompile (\"com.alibaba:dubbo:2.8.4-skyeye-trace-1.3.0\") {\n  exclude group: 'org.springframework', module: 'spring'\n}\n```\n### spring boot\n\n如果项目使用的是spring-boot+logback，那么需要将spring-boot对logback的初始化去掉，防止初始化的时候在zk注册两次而报错，具体见我的几篇博客就可以解决：\n\nhttp://blog.csdn.net/jthink_/article/details/52513963\n\nhttp://blog.csdn.net/jthink_/article/details/52613953\n\nhttp://blog.csdn.net/jthink_/article/details/73106745\n\n## 埋点\n\n### 日志类型\n| 日志类型             | 说明                        |\n| :--------------- | :------------------------ |\n| normal           | 正常入库日志                    |\n| invoke_interface | api调用日志                   |\n| middleware_opt   | 
中间件操作日志(目前仅支持hbase和mongo) |\n| job_execute      | job执行日志                   |\n| rpc_trace        | rpc trace跟踪日志             |\n| custom_log       | 自定义埋点日志                   |\n| thirdparty_call  | 第三方系统调用日志                 |\n### 正常日志\n\n``` shell\nLOGGER.info(\"我是测试日志打印\")\n```\n### api日志\n\n``` shell\n// 参数依次为EventType(事件类型)、api、账号、请求耗时、成功还是失败、具体自定义的日志内容\nLOGGER.info(ApiLog.buildApiLog(EventType.invoke_interface, \"/app/status\", \"800001\", 100, EventLog.MONITOR_STATUS_SUCCESS, \"我是mock api成功日志\").toString());\nLOGGER.info(ApiLog.buildApiLog(EventType.invoke_interface, \"/app/status\", \"800001\", 10, EventLog.MONITOR_STATUS_FAILED, \"我是mock api失败日志\").toString());\n```\n### 中间件日志\n\n``` shell\n// 参数依次为EventType(事件类型)、MiddleWare(中间件名称)、操作耗时、成功还是失败、具体自定义的日志内容\nLOGGER.info(EventLog.buildEventLog(EventType.middleware_opt, MiddleWare.HBASE.symbol(), 100, EventLog.MONITOR_STATUS_SUCCESS, \"我是mock middle ware成功日志\").toString());\nLOGGER.info(EventLog.buildEventLog(EventType.middleware_opt, MiddleWare.MONGO.symbol(), 10, EventLog.MONITOR_STATUS_FAILED, \"我是mock middle ware失败日志\").toString());\n```\n### job执行日志\n\n```\n// job执行仅仅处理失败的日志（成功的不做处理，所以只需要构造失败的日志）, 参数依次为EventType(事件类型)、job 的id号、操作耗时、失败、具体自定义的日志内容\nLOGGER.info(EventLog.buildEventLog(EventType.job_execute, \"application_1477705439920_0544\", 10, EventLog.MONITOR_STATUS_FAILED, \"我是mock job exec失败日志\").toString());\n```\n\n### 第三方请求日志\n\n```\n// 参数依次为EventType(事件类型)、第三方名称、操作耗时、成功还是失败、具体自定义的日志内容\nLOGGER.info(EventLog.buildEventLog(EventType.thirdparty_call, \"xx1\", 100, EventLog.MONITOR_STATUS_FAILED, \"我是mock third 失败日志\").toString());\nLOGGER.info(EventLog.buildEventLog(EventType.thirdparty_call, \"xx1\", 100, EventLog.MONITOR_STATUS_SUCCESS, \"我是mock third 成功日志\").toString());\nLOGGER.info(EventLog.buildEventLog(EventType.thirdparty_call, \"xx2\", 100, EventLog.MONITOR_STATUS_SUCCESS, \"我是mock third 成功日志\").toString());\nLOGGER.info(EventLog.buildEventLog(EventType.thirdparty_call, \"xx2\", 100, 
EventLog.MONITOR_STATUS_FAILED, \"我是mock third 失败日志\").toString());\n```\n","funding_links":[],"categories":["应用分析与监控","Java"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FJThink%2FSkyEye","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FJThink%2FSkyEye","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FJThink%2FSkyEye/lists"}