https://github.com/softdevteam/warmup_experiment
Experiment designed to investigate JIT warmup times.
- Host: GitHub
- URL: https://github.com/softdevteam/warmup_experiment
- Owner: softdevteam
- License: other
- Created: 2015-06-05T13:44:08.000Z (over 10 years ago)
- Default Branch: master
- Last Pushed: 2019-12-09T09:32:23.000Z (about 6 years ago)
- Last Synced: 2025-03-23T19:44:53.346Z (10 months ago)
- Topics: benchmark, benchmarks, experiment, vms
- Language: Python
- Size: 958 KB
- Stars: 8
- Watchers: 3
- Forks: 2
- Open Issues: 7
Metadata Files:
- Readme: README.md
- License: LICENSE-APACHE
README
# SoftDev Warmup Experiment
This is the main repository for the Software Development Team Warmup Experiment
as detailed in the paper "Virtual Machine Warmup Blows Hot and Cold", by Edd
Barrett, Carl Friedrich Bolz, Rebecca Killick, Sarah Mount and Laurence Tratt.
The paper is available [here](http://arxiv.org/abs/1602.00602).
## Running the warmup experiment
The script `build.sh` will fetch and build the VMs and the Krun benchmarking
system. Once the VMs are built, the `Makefile` target `bench-with-reboots`
will run the experiment in full. However, you should first consult the Krun
documentation (fetched into `krun/` by `build.sh`), as a great deal of manual
intervention is needed to compile a tickless kernel, disable Intel P-states,
set up `rc.local`, and so on.
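As a rough sketch, the overall flow looks like this (the exact invocations and
any environment setup should be checked against the Krun documentation; only
the `bench-with-reboots` target name is taken from the `Makefile` here):

```sh
# Fetch and build the VMs and Krun (this takes a long time).
sh build.sh

# After completing the manual setup described in the Krun docs
# (tickless kernel, Intel P-states disabled, rc.local, ...),
# run the full experiment, rebooting between process executions.
make bench-with-reboots
```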
Note that the experiment is designed to run on amd64 machines running Debian 8
or OpenBSD. Newer versions of Debian do not currently work due to a C++ ABI
bump which would require a newer C++ compiler (a newer GCC or perhaps clang).
Calling `build.sh` will also install our
[warmup_stats](https://github.com/softdevteam/warmup_stats) code, which includes
a number of scripts to format benchmark results as plots or tables (similar to
those seen in the paper) and to diff results files. `warmup_stats` has a
number of dependencies, some of which are also needed by the code in this
repository, in particular:
* Python 2.7 - the code here is not Python 3.x ready
* bzip2 / bunzip2 and the bzip2 library (including header files)
* curl (including header files)
* gcc and make
* liblzma library (including header files)
* Python modules: numpy, pip, libcap
* openssl (including header files)
* pkg-config
* pcre library (including header files)
* readline (including header files)
* wget
The [install instructions](https://github.com/softdevteam/warmup_stats/blob/master/INSTALL.md) for `warmup_stats` contain more details.
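As an illustrative sketch, on Debian 8 most of these can be installed with
`apt-get`. The package names below are assumptions and should be verified
against the `warmup_stats` install instructions:

```sh
# Assumed Debian 8 (jessie) package names -- verify against the
# warmup_stats INSTALL.md before relying on this.
sudo apt-get install \
    python2.7 python-pip python-numpy \
    bzip2 libbz2-dev \
    curl libcurl4-openssl-dev \
    gcc make pkg-config \
    liblzma-dev libssl-dev \
    libpcre3-dev libreadline-dev \
    wget
```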
## Print-traced Benchmarks
The paper mentions that to ensure benchmarks are "AST deterministic", we
instrumented them with print statements. These versions can be found alongside
the "proper" benchmarks under the `benchmarks/` directory.
For example under `benchmarks/fasta/lua/`:
* `bench.lua` is the un-instrumented benchmark used in the proper experiment.
* `trace_bench.lua` is the instrumented version.
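To see exactly what the instrumentation adds, the two versions can simply be
diffed; the paths below use the `fasta`/Lua example above:

```sh
# Show the print statements added by the instrumentation.
diff -u benchmarks/fasta/lua/bench.lua benchmarks/fasta/lua/trace_bench.lua
```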
Special notes:
* Java benchmarks also have an additional `trace_KrunEntry.java` file.
* Since we cannot distribute Java Richards, a patch is required to derive the
  tracing version (`patches/trace_java_richards.diff`).
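Assuming the Java Richards sources have been obtained separately and placed
where the experiment expects them, the tracing version can then be derived
with a standard `patch` invocation. The `-p1` strip level here is an
assumption; check the patch header for the correct level:

```sh
# Derive the tracing version of Java Richards from the
# (separately obtained) original sources.
patch -p1 < patches/trace_java_richards.diff
```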