{"id":28560139,"url":"https://github.com/dtstack/jlogstash-input-plugin","last_synced_at":"2025-06-10T09:07:39.238Z","repository":{"id":57718921,"uuid":"67188818","full_name":"DTStack/jlogstash-input-plugin","owner":"DTStack","description":"java 版本 logstash input 插件","archived":false,"fork":false,"pushed_at":"2018-12-20T09:36:10.000Z","size":446,"stargazers_count":21,"open_issues_count":0,"forks_count":16,"subscribers_count":12,"default_branch":"master","last_synced_at":"2024-02-25T12:39:57.082Z","etag":null,"topics":["logstash"],"latest_commit_sha":null,"homepage":null,"language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/DTStack.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2016-09-02T04:07:11.000Z","updated_at":"2022-11-24T10:00:34.000Z","dependencies_parsed_at":"2022-08-27T19:41:42.527Z","dependency_job_id":null,"html_url":"https://github.com/DTStack/jlogstash-input-plugin","commit_stats":null,"previous_names":[],"tags_count":6,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DTStack%2Fjlogstash-input-plugin","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DTStack%2Fjlogstash-input-plugin/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DTStack%2Fjlogstash-input-plugin/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DTStack%2Fjlogstash-input-plugin/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/DTStack","download_url":"https://codeload.github.com/DTStack/jlogstash-input-plugin/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DTStack%2Fjlogstash-input-plugin/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":259043771,"owners_count":22797163,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["logstash"],"created_at":"2025-06-10T09:07:38.352Z","updated_at":"2025-06-10T09:07:39.222Z","avatar_url":"https://github.com/DTStack.png","language":"Java","readme":"# Beats:\n   codec:默认plain\n\n   prot: 端口必填没有默认值\n\n   host: ip地址，默认 0.0.0.0\n   \n   addFields: 需要添加的属性，map 结构\n\n# Kafka09:\n   encoding:编码 默认 utf8\n\n   codec:默认plain\n \n   topic:必填，map结构，需要说明分区数（{dt_all_test_log: 6}）\n\n   consumerSettings:必填 consumer 连接kafka的属性配置，map结构 {group.id: jlogstashvvvvv,zookeeper.connect: 127.0.0.1:2181,auto.commit.interval.ms:\"1000\",auto.offset.reset: smallest}\n\n   addFields: 需要添加的属性，map 结构\n   \n# Kafka10:\n\n   codec:默认plain\n \n   topic:必填，string\n   \n   groupId:必填，string\n\n   consumerSettings:必填 consumer 连接kafka的属性配置，map结构 {zookeeper.connect: 127.0.0.1:2181,auto.commit.interval.ms:\"1000\",auto.offset.reset: smallest}\n   \n   bootstrapServers: 必填 127.0.0.1:9020,127.0.0.2:9020\n\n   addFields: 需要添加的属性，map 结构   \n   \n# KafkaDistribute:\n   encoding:编码 默认 utf8\n\n   codec:默认plain\n \n   topic:必填，map结构，需要说明分区数（{dt_all_test_log: 6}）\n\n   consumerSettings:必填 consumer 连接kafka的属性配置，map结构 {group.id: jlogstashvvvvv,zookeeper.connect: 127.0.0.1:2181,auto.commit.interval.ms:\"1000\",auto.offset.reset: smallest}\n   \n   addFields: 需要添加的属性，map 结构 \n   \n  distributed: map结构，属性值不为空 说明要开启分布式（主要的应用场景是单个文件日志无序，有单行，有多行，需要后台做聚合和解析，需要把同一个日志发送     到同一台服务器）每种日志需要定制化开发聚合规则，现在的版本里有cms1.8 gc log 的日志聚合规则\n    样本: {\"zkAddress\":\"127.0.0.1:2181/distributed\",\"localAddress\":\"127.0.0.1:8555\",\"hashKey\":\"%{tenant_id}:%                {hostname}_%{appname}_%{path}\"}\n   \n     \n# Netty:\n  codec:默认plain\n\n  prot: 端口必填没有默认值\n\n  host: ip地址，默认 0.0.0.0\n\n  encoding:编码 默认 utf8\n\n  codec:默认plain\n\n  receiveBufferSize:接收缓存区大小 默认值20M\n\n  delimiter:数据的分隔符 默认是根据系统的换行分隔符\n\n  addFields: 需要添加的属性，map 结构\n  \n  whiteListPath: ip白名单路径（多个用逗号隔开）\n  \n  isExtract: true|false是否开启解压功能（gzip）\n\n# Tcp:\n  \n  codec:默认plain\n\n  prot: 端口必填没有默认值\n\n  host: ip地址，默认 0.0.0.0\n\n  encoding:编码 默认 utf8\n\n  bufSize: 接收缓存区大小 默认值20M\n\n  maxLineLength:一次接收最大的数据包大小 默认1M\n\n  addFields: 需要添加的属性，map 结构\n\n# Stdin:\n  标准输入\n  addFields: 需要添加的属性，map 结构\n\n\n# File:\n addFields: 需要添加的属性，map 结构\n\n path:文件输入路径(可以是文件,文件夹),参数类型为list[\"home/admin/ysq.log\"]\n\n pathcodecMap: 文件路径,参数是Map类型(key:文件路径,value:该文件类型对应的codec)\n\n --注意path参数和pathcodecMap参数不能同时为空\n\n exclude:排除文件路径(可以是文件,文件夹),参数类型为list\n\n encoding:读取文件的编码格式,默认是UTF-8\n\n maxOpenFiles:最大配置读取文件数量,默认为0(表示无上限)\n\n startPosition: 文件开始读取位置,[\"beginning\", \"end\"],默认为end\n\n sinceDbPath:文件读取位置信息存储位置,默认\"./sincedb.yaml\"\n\n sinceDbWriteInterval:文件读取位置信息刷新到存储点的时间间隔,默认是15s\n\n delimiter:行分割符号,默认是'\\n'\n\n readFileThreadNum:文件读取的线程数,默认是:cpu处理器数+1\n \n # Redis:\n host:redis服务主机地址\n \n key:键值，当data_type为channel或者channel_pattern时表示订阅的频道\n \n data_size:数据数量，只有当data_type为list和sorted_set时有效\n \n data_type:数据格式，取值范围为：string,list,set,sorted_set,hash,channel,channel_pattern,其中channel和channel_pattern表示订阅模式下所监听的频道\n \n# Elasticsearch:\n  hosts: elasticsearch 集群地址，也可以slb地址 类型是数组（[\"node01\",\"node02\"]）必填\n  \n  cluster: elasticsearch 集群名称(默认值 elasticsearch)\n  \n  sniff:是否自动发现（默认值 true）\n  \n  index: 索引（默认值 logstash-*）\n  \n  type: 索引类型\n  \n  query: dsl 查询语法（默认 {\\\"query\\\": {\\\"match_all\\\":{}},\\\"sort\\\" : [\\\"_doc\\\"]}）\n  \n  scroll: 翻页（默认 5）\n  \n  size: 一次获取的数据 （默认 1000）\n  \n  user: 用户名\n  \n  password: 密码\n  \n# Jdbc:\n  jdbcConnectionString: jdbc url地址 必填项\n  \n  jdbcDriverClass: 驱动类 必填项\n  \n  jdbcDriverLibrary: 驱动包的路径 必填项\n  \n  jdbcFetchSize: 一次获取的数据 必填项\n  \n  jdbcUser: 用户名 必填项\n  \n  jdbcPassword: 密码 必填项\n  \n  statement: 查询语句 必填项\n  \n  parameters: 参数\n\n# MongoDB:\n  uri: MongoDB连接URI 必填项\n  \n  dbName: database名称 必填项\n  \n  collection: collection名称 必填项\n  \n  query: Filter语句 \n  \n  sinceTime: 增量抽取的起始时间\n\n# Binlog\n  host: MySQL主机名 必填项\n\n  port: MySQL端口号 默认3306\n\n  username: MySQL用户名 必填项\n\n  password: MySQL密码 必填项\n\n  start: 日志起始位置， 格式为 {\"journalName\":\"mysql-bin.000002\",\"position\":39493,timestamp\":1537948008000}，其中journalName为binlog日志文件名，position为日志偏移量，timestamp为日志时间戳\n\n  filter: 过滤器列表，由若干个过滤器组成， 格式为 {schema1\\.table1,schema2\\.table2}，多个过滤器之间用逗号分隔；\n  默认为空，表示不过滤schema和table。\n\n  cat: 数据操作类别列表，由若干个数据操作类别组成，格式为 {insert,update,select,delete}，多个数据操作类别用逗号分隔；\n  默认为空，表示处理binlog所有操作类别的日志。\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdtstack%2Fjlogstash-input-plugin","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdtstack%2Fjlogstash-input-plugin","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdtstack%2Fjlogstash-input-plugin/lists"}