# MIM: MIM Installs OpenMMLab Packages
MIM provides a unified interface for launching and installing OpenMMLab projects and their extensions, and managing the OpenMMLab model zoo.
## Major Features
- **Package Management**
You can use MIM to manage OpenMMLab codebases and install or uninstall them conveniently.
- **Model Management**
You can use MIM to manage the OpenMMLab model zoo, e.g., download checkpoints by name or search for checkpoints that meet specific criteria.
- **Unified Entrypoint for Scripts**
You can execute any script provided by the OpenMMLab codebases with unified commands. Training, testing, and inference become easier than ever. You can also use the `gridsearch` command for vanilla hyper-parameter search.
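As a quick taste of the unified entrypoint, the sketch below chains together commands that are documented in detail in the Command section; `resnet18_8xb16_cifar10` is one of the MMClassification configs used in the examples there.
```bash
# Install a codebase, fetch a checkpoint and config by name, then train,
# all through the single `mim` entrypoint.
> mim install mmcls
> mim download mmcls --config resnet18_8xb16_cifar10 --dest .
> mim train mmcls resnet18_8xb16_cifar10.py --work-dir tmp --gpus 1
```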
## Changelog
v0.1.1 was released on 13/06/2021.
## Customization
You can use `.mimrc` for customization. Currently, we support customizing the default values of each sub-command. Please refer to [customization.md](docs/en/customization.md) for details.
## Build custom projects with MIM
We provide some examples of how to build custom projects based on OpenMMLab codebases and MIM in [MIM-Example](https://github.com/open-mmlab/mim-example).
Without worrying about copying code and scripts from existing codebases, users can focus on developing new components while MIM helps integrate and run the new project.
## Installation
Please refer to [installation.md](docs/en/installation.md) for installation.
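For reference, MIM itself is published on PyPI under the name `openmim`, so a typical installation (see the linked guide for details and alternatives) looks like this:
```bash
# Install or upgrade MIM; the PyPI package is `openmim`, the CLI is `mim`.
> pip install -U openmim
```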
## Command
1. install
- command
```bash
# install latest version of mmcv-full
> mim install mmcv-full # wheel
# install 1.5.0
> mim install mmcv-full==1.5.0
# install latest version of mmcls
> mim install mmcls
# install master branch
> mim install git+https://github.com/open-mmlab/mmclassification.git
# install local repo
> git clone https://github.com/open-mmlab/mmclassification.git
> cd mmclassification
> mim install .
# install extension based on OpenMMLab
> mim install git+https://github.com/xxx/mmcls-project.git
```
- api
```python
from mim import install

# install mmcv
install('mmcv-full')
# install mmcls will automatically install mmcv if it is not installed
install('mmcls')
# install extension based on OpenMMLab
install('git+https://github.com/xxx/mmcls-project.git')
```
2. uninstall
- command
```bash
# uninstall mmcv
> mim uninstall mmcv-full
# uninstall mmcls
> mim uninstall mmcls
```
- api
```python
from mim import uninstall

# uninstall mmcv
uninstall('mmcv-full')
# uninstall mmcls
uninstall('mmcls')
```
3. list
- command
```bash
> mim list
> mim list --all
```
- api
```python
from mim import list_package

# equivalent to `mim list`
list_package()
# equivalent to `mim list --all`
list_package(True)
```
4. search
- command
```bash
> mim search mmcls
> mim search mmcls==0.23.0 --remote
> mim search mmcls --config resnet18_8xb16_cifar10
> mim search mmcls --model resnet
> mim search mmcls --dataset cifar-10
> mim search mmcls --valid-field
> mim search mmcls --condition 'batch_size>45,epochs>100'
> mim search mmcls --condition 'batch_size>45 epochs>100'
> mim search mmcls --condition '128<batch_size<=256'
> mim search mmcls --sort batch_size epochs
> mim search mmcls --field epochs batch_size weight
> mim search mmcls --exclude-field weight paper
```
- api
```python
from mim import get_model_info

get_model_info('mmcls')
get_model_info('mmcls==0.23.0', local=False)
get_model_info('mmcls', models=['resnet'])
get_model_info('mmcls', training_datasets=['cifar-10'])
get_model_info('mmcls', filter_conditions='batch_size>45,epochs>100')
get_model_info('mmcls', filter_conditions='batch_size>45 epochs>100')
get_model_info('mmcls', filter_conditions='128<batch_size<=256')
```
5. download
- command
```bash
> mim download mmcls --config resnet18_8xb16_cifar10
> mim download mmcls --config resnet18_8xb16_cifar10 --dest .
```
- api
```python
from mim import download

download('mmcls', ['resnet18_8xb16_cifar10'])
download('mmcls', ['resnet18_8xb16_cifar10'], dest_root='.')
```
6. train
- command
```bash
# Train models on a single server with CPU by setting `gpus` to 0 and
# 'launcher' to 'none' (if applicable). The training script of the
# corresponding codebase will fail if it doesn't support CPU training.
> mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0
# Train models on a single server with one GPU
> mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1
# Train models on a single server with 4 GPUs and pytorch distributed
> mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 4 \
--launcher pytorch
# Train models on a slurm HPC with one 8-GPU node
> mim train mmcls resnet101_b16x8_cifar10.py --launcher slurm --gpus 8 \
--gpus-per-node 8 --partition partition_name --work-dir tmp
# Print help messages of sub-command train
> mim train -h
# Print help messages of sub-command train and the training script of mmcls
> mim train mmcls -h
```
- api
```python
from mim import train

train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=0,
other_args=('--work-dir', 'tmp'))
train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=1,
other_args=('--work-dir', 'tmp'))
train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=4,
launcher='pytorch', other_args=('--work-dir', 'tmp'))
train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=8,
launcher='slurm', gpus_per_node=8, partition='partition_name',
other_args=('--work-dir', 'tmp'))
```
7. test
- command
```bash
# Test models on a single server with 1 GPU, report accuracy
> mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
tmp/epoch_3.pth --gpus 1 --metrics accuracy
# Test models on a single server with 1 GPU, save predictions
> mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
tmp/epoch_3.pth --gpus 1 --out tmp.pkl
# Test models on a single server with 4 GPUs, pytorch distributed,
# report accuracy
> mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
tmp/epoch_3.pth --gpus 4 --launcher pytorch --metrics accuracy
# Test models on a slurm HPC with one 8-GPU node, report accuracy
> mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
tmp/epoch_3.pth --gpus 8 --metrics accuracy --partition \
partition_name --gpus-per-node 8 --launcher slurm
# Print help messages of sub-command test
> mim test -h
# Print help messages of sub-command test and the testing script of mmcls
> mim test mmcls -h
```
- api
```python
from mim import test
test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
checkpoint='tmp/epoch_3.pth', gpus=1, other_args=('--metrics', 'accuracy'))
test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
checkpoint='tmp/epoch_3.pth', gpus=1, other_args=('--out', 'tmp.pkl'))
test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
checkpoint='tmp/epoch_3.pth', gpus=4, launcher='pytorch',
other_args=('--metrics', 'accuracy'))
test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
checkpoint='tmp/epoch_3.pth', gpus=8, partition='partition_name',
launcher='slurm', gpus_per_node=8, other_args=('--metrics', 'accuracy'))
```
8. run
- command
```bash
# Get the Flops of a model
> mim run mmcls get_flops resnet101_b16x8_cifar10.py
# Publish a model
> mim run mmcls publish_model input.pth output.pth
# Train models on a slurm HPC with one GPU
> srun -p partition --gres=gpu:1 mim run mmcls train \
resnet101_b16x8_cifar10.py --work-dir tmp
# Test models on a slurm HPC with one GPU, report accuracy
> srun -p partition --gres=gpu:1 mim run mmcls test \
resnet101_b16x8_cifar10.py tmp/epoch_3.pth --metrics accuracy
# Print help messages of sub-command run
> mim run -h
# Print help messages of sub-command run, list all available scripts in
# codebase mmcls
> mim run mmcls -h
# Print help messages of sub-command run, print the help message of
# training script in mmcls
> mim run mmcls train -h
```
- api
```python
from mim import run

run(repo='mmcls', command='get_flops',
other_args=('resnet101_b16x8_cifar10.py',))
run(repo='mmcls', command='publish_model',
other_args=('input.pth', 'output.pth'))
run(repo='mmcls', command='train',
other_args=('resnet101_b16x8_cifar10.py', '--work-dir', 'tmp'))
run(repo='mmcls', command='test',
other_args=('resnet101_b16x8_cifar10.py', 'tmp/epoch_3.pth', '--metrics', 'accuracy'))
```
9. gridsearch
- command
```bash
# Parameter search on a single server with CPU by setting `gpus` to 0 and
# 'launcher' to 'none' (if applicable). The training script of the
# corresponding codebase will fail if it doesn't support CPU training.
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0 \
--search-args '--optimizer.lr 1e-2 1e-3'
# Parameter search on a single server with one GPU, search learning
# rate
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
--search-args '--optimizer.lr 1e-2 1e-3'
# Parameter search on a single server with one GPU, search
# weight_decay
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
--search-args '--optimizer.weight_decay 1e-3 1e-4'
# Parameter search on a single server with one GPU, search learning
# rate and weight_decay
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
--search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 \
1e-4'
# Parameter search on a slurm HPC with one 8-GPU node, search learning
# rate and weight_decay
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
--partition partition_name --gpus-per-node 8 --launcher slurm \
--search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 \
1e-4'
# Parameter search on a slurm HPC with one 8-GPU node, search learning
# rate and weight_decay, max parallel jobs is 2
> mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
--partition partition_name --gpus-per-node 8 --launcher slurm \
--max-jobs 2 --search-args '--optimizer.lr 1e-2 1e-3 \
--optimizer.weight_decay 1e-3 1e-4'
# Print the help message of sub-command gridsearch
> mim gridsearch -h
# Print the help message of sub-command gridsearch and the help message of the
# training script of codebase mmcls
> mim gridsearch mmcls -h
```
- api
```python
from mim import gridsearch

gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=0,
search_args='--optimizer.lr 1e-2 1e-3',
other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
search_args='--optimizer.lr 1e-2 1e-3',
other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
search_args='--optimizer.weight_decay 1e-3 1e-4',
other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
' 1e-3 1e-4',
other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
partition='partition_name', gpus_per_node=8, launcher='slurm',
search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
' 1e-3 1e-4',
other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
partition='partition_name', gpus_per_node=8, launcher='slurm',
max_workers=2,
search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
' 1e-3 1e-4',
other_args=('--work-dir', 'tmp'))
```
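Because the Python APIs above are plain functions, they compose into scripted workflows. The following is a minimal sketch, assuming (as in the download examples above) that `mim download` saves the config as `resnet18_8xb16_cifar10.py` in the destination directory:
```python
from mim import download, get_model_info, install, train

# Ensure the codebase is installed (mmcv is pulled in automatically if missing).
install('mmcls')
# Look up ResNet checkpoints trained on CIFAR-10; `info` holds the metadata
# of the matching models.
info = get_model_info('mmcls', models=['resnet'],
                      training_datasets=['cifar-10'])
# Fetch the chosen checkpoint and its config into the current directory.
download('mmcls', ['resnet18_8xb16_cifar10'], dest_root='.')
# Train from the downloaded config on a single GPU.
train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=1,
      other_args=('--work-dir', 'tmp'))
```
## Contributing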
We appreciate all contributions to improve MIM. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/master/CONTRIBUTING.md) for the contributing guidelines.
## License
This project is released under the [Apache 2.0 license](LICENSE).
## Projects in OpenMMLab
- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MMEval](https://github.com/open-mmlab/mmeval): A unified evaluation library for multiple machine learning libraries.
- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab pre-training toolbox and benchmark.
- [MMagic](https://github.com/open-mmlab/mmagic): Open**MM**Lab **A**dvanced, **G**enerative and **I**ntelligent **C**reation toolbox.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [Playground](https://github.com/open-mmlab/playground): A central hub for gathering and showcasing amazing projects built upon OpenMMLab.