Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/davidenunes/jexperiment
A Java framework to build and execute experiments and collect data.
- Host: GitHub
- URL: https://github.com/davidenunes/jexperiment
- Owner: davidenunes
- License: lgpl-3.0
- Created: 2013-12-19T16:41:49.000Z (about 11 years ago)
- Default Branch: master
- Last Pushed: 2017-10-03T09:40:13.000Z (over 7 years ago)
- Last Synced: 2024-11-19T17:58:32.233Z (3 months ago)
- Topics: configuration, experiment, java, parameters, runner, simulation
- Language: Java
- Size: 91.8 KB
- Stars: 0
- Watchers: 3
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
README
Experiment Building Library
===========================

A Java framework to build and execute experiments and collect data.
It uses the parameter sweeping library [param-sweeper](https://github.com/davidenunes/param-sweeper/).

Currently, it provides a basic multi-threaded experiment runner and a simple console to load experiments. Example of an experiment configuration file using Kafka as a service to produce and collect data:
```
# Experiment Configuration File
euid = 123

# load the model of experiments here
model = org.bhave.experiment.dummy.DummyModel

# your default experiment runner, we will add grid capabilities later
runner = org.bhave.experiment.run.MultiThreadedRunner

# kafka
data.producers.0 = org.bhave.experiment.data.producer.KafkaDataProducer
data.producers.0.broker.port = 9092
data.producers.0.broker.host = localhost
data.producers.0.broker.topic = test

# statistics to be used in the data production
data.producers.0.stats.0 = org.bhave.experiment.dummy.DummyStats
data.producers.0.stats.1 = org.bhave.experiment.dummy.DummyStats

# consumes data stored in memory in the data producers
data.consumers.0 = org.bhave.experiment.data.consumer.KafkaDataConsumer

# dummy exporter that prints the data received in kafka to the standard output
data.consumers.0.export.0 = org.bhave.experiment.dummy.StdOutDataExporter

# map consumer to producer
data.consumers.0.producer = 0

# if you use the FileDataExporter class you need to add these configurations
#data.consumers.0.export.0.file.name = test.csv
#data.consumers.0.export.0.file.append = true

# experiment parameter space
runs = 1

params.0.name = p1
params.0.sweep = sequence
params.0.type = double
params.0.value.from = 0
params.0.value.to = 100
params.0.value.step = 0.5

params.1.name = p2
params.1.sweep = sequence
params.1.type = int
params.1.value.from = 0
params.1.value.to = 10
params.1.value.step = 1
```
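To see what the parameter space above amounts to, here is a short standalone sketch (not part of jexperiment) that enumerates the two `sequence` sweeps from the configuration and counts the resulting parameter combinations; each experiment run receives one combination.

```java
import java.util.ArrayList;
import java.util.List;

public class SweepSketch {
    public static void main(String[] args) {
        // params.0: p1 sweeps 0..100 in steps of 0.5 -> 201 values
        List<Double> p1 = new ArrayList<>();
        for (double v = 0; v <= 100; v += 0.5) {
            p1.add(v);
        }

        // params.1: p2 sweeps 0..10 in steps of 1 -> 11 values
        List<Integer> p2 = new ArrayList<>();
        for (int v = 0; v <= 10; v += 1) {
            p2.add(v);
        }

        // The full parameter space is the Cartesian product of both sweeps.
        int combinations = p1.size() * p2.size();
        System.out.println(combinations); // prints 2211
    }
}
```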
## Running an Experiment ##

To run experiments and take advantage of multiple existing processing nodes (multiple cores, for instance), I include a MultiThreadedRunner class (see the example above). This runner takes the experiment and submits its multiple runs, making sure all the cores are always busy. I have developed another runner that uses a grid infrastructure (based on the [gridgain](http://www.gridgain.com/) platform) to deploy the simulation runs, but this is not yet included.

To run an experiment, we need something to assemble all the experiment components in one place. For this we have the _ExperimentConsole_ implementations. There are two options: a simple GUI-based console, which allows us to load configuration files and start experiments, and a text-based console, which loads a given configuration file and displays the progress using a text-based progress bar.
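The scheduling idea behind a multi-threaded runner can be sketched with the standard `java.util.concurrent` API (this is an illustration, not jexperiment's actual MultiThreadedRunner code): a fixed-size thread pool sized to the available cores receives one task per run, so a core picks up the next run as soon as it finishes the previous one.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RunnerSketch {
    public static void main(String[] args) throws Exception {
        // one worker thread per core keeps all cores busy
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // submit one task per experiment run; each Callable stands in
        // for executing the model with one parameter combination
        List<Future<Integer>> results = new ArrayList<>();
        for (int run = 0; run < 8; run++) {
            final int id = run;
            results.add(pool.submit(() -> id * id)); // dummy "model run"
        }

        // collect the results, blocking until every run has finished
        int sum = 0;
        for (Future<Integer> f : results) {
            sum += f.get();
        }
        pool.shutdown();
        System.out.println(sum); // prints 140 (0 + 1 + 4 + ... + 49)
    }
}
```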