# jlogstash-output-plugin

Java implementation of Logstash output plugins.

# Elasticsearch:

      index: index name, e.g. dtlog-%{tenant_id}-%{+YYYY.MM.dd}; required

      indexTimezone: timezone used when the index name contains a date pattern; default UTC

      documentId: document id

      documentType: document type; default logs

      cluster: cluster name

      hosts: node addresses (the ports are TCP transport ports), as an array, e.g. ["172.16.1.185:9300","172.16.1.188:9300"]; required

      sniff: default true

      bulkActions: default 20000

      bulkSize: default 15

      consistency: data-consistency switch; default false (off). When enabled, if the Elasticsearch cluster becomes unavailable the plugin keeps retrying and stops consuming input data until the cluster is available again.

# Elasticsearch5:

      Takes the same options as the Elasticsearch output above.
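Putting the options above together, a minimal Elasticsearch output entry could be sketched as follows. The surrounding config layout is an assumption for illustration (this README documents the options but not a full config file); the values are taken from the examples listed above:

```json
{
  "Elasticsearch": {
    "index": "dtlog-%{tenant_id}-%{+YYYY.MM.dd}",
    "indexTimezone": "UTC",
    "documentType": "logs",
    "hosts": ["172.16.1.185:9300", "172.16.1.188:9300"],
    "sniff": true,
    "bulkActions": 20000,
    "bulkSize": 15,
    "consistency": false
  }
}
```

Note that the `hosts` ports are TCP transport ports (9300), not the HTTP port (9200).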
# Kafka:

    encoding: default utf-8

    topic: required, e.g. dt-%{tenant_id}

    brokerList: Kafka cluster addresses, comma-separated, e.g. 12.24.36.128:9092,11.37.67.213:9092

    keySerializer: default kafka.serializer.StringEncoder; can be customized

    valueSerializer: default kafka.serializer.StringEncoder; can be customized

    partitionerClass: default kafka.producer.DefaultPartitioner; can be customized

    producerType: default "sync"; options: sync, async

    compressionCodec: default "none"; options: gzip, snappy, lz4, none

    clientId: no default

    batchNum: defaults to Kafka's built-in value

    requestRequiredAcks: default 1

# OutOdps:

    accessId: Aliyun accessId, obtained from the Aliyun website (required)

    accessKey: Aliyun accessKey, obtained from the Aliyun website (required)

    odpsUrl: http://service.odps.aliyun.com/api (default)

    project: ODPS project (required)

    table: table in the ODPS project (required)

    partition: table partition; supports static and dynamic partitions, e.g. dt='dtlog-%{tenant_id}-%{+YYYY.MM.dd}',pt='dtlog-%{tenant_id}-%{+YYYY.MM.dd}'

    bufferSize: default 10 MB

    interval: default 300000 ms

# Performance:

   interval: interval at which data is flushed to the file; default 30 seconds

   timeZone: timezone; default UTC

   path: file path, e.g. home/admin/jlogserver/logs/srsyslog-performance-%{+YYYY.MM.dd}.txt; required

# File:

   timeZone: timezone; default UTC

   path: file path, e.g. home/admin/jlogserver/logs/srsyslog-performance-%{+YYYY.MM.dd}.txt; required

   codec: default json_lines (options: line — custom output attributes with a configurable separator between them; json_lines — output as JSON-formatted strings)

   format: custom output format, e.g. tenant_id|ip

   split: separator between attributes in the custom output format

# Stdout:

  codec: line (default); options: line, json_lines, java_lines
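For reference, a Kafka output entry using the defaults documented above could look like this. The surrounding layout is an assumption, not taken from this README; the topic and broker addresses reuse the README's own examples:

```json
{
  "Kafka": {
    "topic": "dt-%{tenant_id}",
    "brokerList": "12.24.36.128:9092,11.37.67.213:9092",
    "producerType": "sync",
    "compressionCodec": "none",
    "requestRequiredAcks": 1
  }
}
```

With `requestRequiredAcks` at its default of 1, the broker acknowledges a write once the partition leader has persisted it, without waiting for replicas.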
# Netty:

  host: remote IP to connect to; required

  port: remote port to connect to; required

  openCompression: whether to enable data compression; when enabled, a local buffer is used and data is sent only once the configured time or size limit is reached; default false

  compressionLevel: compression level (gzip is used); default 6

  sendGapTime: maximum buffering time when the local buffer is in use; data is sent once it is exceeded; default 2 * 1000 ms

  maxBufferSize: maximum buffer size when the local buffer is in use; data is sent once it is exceeded; default 5 * 1024 characters

  openCollectIp: whether to collect the local IP address and add it to the message

  format: output data format, e.g. ${HOSTNAME} ${appname} [${user_token} type=${logtype} tag="${logtag}"]; each variable name is replaced with the corresponding value present in the message

  delimiter: separator for the sent strings; default is the system line separator

# Hdfs:

  hadoopConf: Hadoop configuration directory (defaults to the HADOOP_CONF_DIR environment variable)

  path: HDFS directory to write to; required

  store: storage format (currently text and orc are supported)

  compression: compression type for written data (NONE, GZIP, BZIP2, SNAPPY)

  charsetName: charset; default utf-8

  delimiter: field separator (applies to the text format)

  timezone: timezone

  hadoopUserName: username used to access Hadoop

  schema: schema of the data written to Hadoop, e.g. ["name:varchar"]
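An Hdfs output entry combining the options above could be sketched as follows. The layout, the `path`, the `hadoopConf` directory, and the `delimiter` value are all illustrative assumptions (the README marks `path` as required but gives no example); the `schema` value is the README's own example:

```json
{
  "Hdfs": {
    "hadoopConf": "/etc/hadoop/conf",
    "path": "/tmp/jlogstash/logs",
    "store": "text",
    "compression": "NONE",
    "charsetName": "utf-8",
    "delimiter": ",",
    "schema": ["name:varchar"]
  }
}
```

Since `store` is `text` here, the `delimiter` applies; for the `orc` format the column layout comes from `schema` instead.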