AutoGluon Benchmarking Repository
https://github.com/innixma/autogluon-benchmark
- Host: GitHub
- URL: https://github.com/innixma/autogluon-benchmark
- Owner: Innixma
- License: apache-2.0
- Created: 2019-12-13T01:18:08.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2024-11-15T00:50:48.000Z (about 2 months ago)
- Last Synced: 2024-12-02T22:34:46.966Z (about 1 month ago)
- Language: Python
- Homepage:
- Size: 5.93 MB
- Stars: 2
- Watchers: 3
- Forks: 10
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
README
Code for benchmarking AutoGluon.
## Development Installation
To get started, run the following commands:
```
# Do this if you are locally developing AutoGluon to avoid installing it from pip:
git clone https://github.com/autogluon/autogluon
cd autogluon
./full_install.sh
cd ..
```

Install AutoGluon Bench if you plan to run the evaluation code:

```
git clone https://github.com/autogluon/autogluon-bench.git
pip install -e autogluon-bench
```

Install AutoGluon Benchmark (this repository):

```
# Install autogluon-benchmark
git clone https://github.com/Innixma/autogluon-benchmark.git
pip install -e autogluon-benchmark
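
# Optional sanity check (a suggestion, not from the repository's instructions): confirm that
# both editable installs resolve by printing where Python imports each package from.
# The import name autogluon_benchmark is assumed from the package directory referenced below.
python -c "import autogluon.tabular, autogluon_benchmark; print(autogluon.tabular.__file__); print(autogluon_benchmark.__file__)"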
```

## Full AutoMLBenchmark
To run AutoMLBenchmark, see the instructions in
`examples/automlbenchmark/README_automlbenchmark.md`.
Please note that these instructions are quite technical.
## Local Testing
To benchmark individual OpenML datasets, check out the example scripts in `examples/train_***`.
These scripts are fairly primitive: they mimic what AutoMLBenchmark does, but restricted to a single dataset. A minimal sketch of the same pattern is shown below.
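
As an illustration of that pattern, here is a minimal sketch (not one of the repository's scripts) that fits AutoGluon on a single OpenML task. It assumes the `openml` and `autogluon.tabular` packages are installed; the task id and the 60-second time limit are arbitrary choices for the example.
```
import openml
from autogluon.tabular import TabularPredictor

# Fetch one OpenML task and its predefined train/test split (fold 0).
# Task 31 (credit-g) is used here only because it is small; any supervised task id works.
task = openml.tasks.get_task(31)
dataset = task.get_dataset()
data, _, _, _ = dataset.get_data()  # full dataframe, including the target column
label = task.target_name
train_idx, test_idx = task.get_train_test_split_indices(repeat=0, fold=0, sample=0)
train_data, test_data = data.iloc[train_idx], data.iloc[test_idx]

# Fit AutoGluon on the training split and score on the held-out split,
# roughly what a single-dataset benchmark run does.
predictor = TabularPredictor(label=label).fit(train_data, time_limit=60)
print(predictor.evaluate(test_data))
```
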
## Generating Task Metadata
To generate task metadata files, refer to `autogluon_benchmark/data/metadata/README.md`.