Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/camathieu/storm-opentsdb
OpenTSDB storm mapper
Last synced: 20 days ago
- Host: GitHub
- URL: https://github.com/camathieu/storm-opentsdb
- Owner: camathieu
- Created: 2015-05-06T08:01:30.000Z (over 9 years ago)
- Default Branch: master
- Last Pushed: 2015-05-06T08:01:59.000Z (over 9 years ago)
- Last Synced: 2024-10-16T05:35:07.269Z (2 months ago)
- Language: Java
- Size: 152 KB
- Stars: 4
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
Storm connector for OpenTSDB
============================

This connector for Apache Storm uses the OpenTSDB Java library to
persist raw data and Trident states directly to HBase through the
AsyncHBase client.

Since an application should have only one AsyncHBase client per
cluster, storm-opentsdb uses the storm-asynchbase client factory to
obtain a single shared instance per cluster.

See the javadoc for more detailed information:
http://javadoc.root.gg/storm-opentsdb

Usage
-----

Usage examples can be found in the storm.opentsdb.example package.
* Client configuration
You have to register a configuration Map in the topology Config for
each HBase client and for each OpenTSDB instance you want to use.

```
// Storm topology configuration (backtype.storm.Config in Storm 0.x)
Config conf = new Config();

Map<String, String> hBaseConfig = new HashMap<>();
hBaseConfig.put("zkQuorum", "node1,node2,node3");
conf.put("hbase-cluster", hBaseConfig);

Map<String, String> openTsdbConfig = new HashMap<>();
openTsdbConfig.put("tsd.core.auto_create_metrics", "true");
openTsdbConfig.put("tsd.storage.hbase.data_table", "test_tsdb");
openTsdbConfig.put("tsd.storage.hbase.uid_table", "test_tsdb-uid");
conf.put("test-tsdb", openTsdbConfig);
```

* Mapper
To map Storm tuples to OpenTSDB put requests, you have to provide
mappers to the bolts, functions, and states.
You can use method-chaining syntax to configure them.
A put parameter can be mapped to a tuple field or to a fixed
constant value, and you can also provide serializers to format
input values.

For now there are two basic field mapper types: TupleMapper, which maps
the tuple fields metric, timestamp, value, and tags to an OpenTSDB put
request, and EventMapper, which maps an OpenTsdbEvent to an OpenTSDB put
request. As more than one put request can be executed for a given Storm
tuple, you have to wrap FieldMappers into a Mapper.

```
OpenTsdbMapper mapper = new OpenTsdbMapper()
    .addFieldMapper(
        new OpenTsdbTupleFieldMapper("metric", "timestamp", "value", "tags"))
    .addFieldMapper(
        new OpenTsdbEventFieldMapper("event")
    );
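
// Hypothetical sketch (not part of the original README): the prose above
// says a put parameter can also be bound to a fixed constant value and
// that serializers can format input values. Assuming setter-style methods
// with these names exist on the field mapper, that could look like:
//
//     new OpenTsdbTupleFieldMapper("metric", "timestamp", "value", "tags")
//         .setMetric("some.constant.metric")        // fixed constant value
//         .setValueSerializer(new MySerializer());  // hypothetical serializer
//
// Check the javadoc linked above for the actual method names.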
```

* Bolts
OpenTsdbBolt executes put requests for each incoming tuple, using
one or more FieldMappers to build the requests from the tuple's fields. All
requests built from a tuple are executed in parallel. By default this
bolt is asynchronous, so be sure to read the doc to fully understand what
that means.

```
builder
    .setBolt("opentsdb",
        new OpenTsdbBolt("hbase-cluster", "test-tsdb", mapper), 1)
    .shuffleGrouping("events");
```

* Trident State
This is a TridentState implementation that persists a partition to OpenTSDB.
It should be used with the partitionPersist method.
Only use this state if your update is idempotent with regard to batch replay.
Use the OpenTsdbStateUpdater and OpenTsdbStateFactory to interact with it.
You have to provide a mapper.

```
TridentState streamRate = stream
    .aggregate(new Fields(), new SomeAggregator(2), new Fields("value"))
    .partitionPersist(
        new OpenTsdbStateFactory("hbase-cluster", "test-tsdb", mapper),
        new Fields("value"),
        new OpenTsdbStateUpdater()
    );
```

TODO
----

* Handle OpenTSDB queries to get data from OpenTSDB into Storm