Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
https://github.com/aimhubio/aim
ai data-science data-visualization experiment-tracking machine-learning metadata metadata-tracking ml mlflow mlops prompt-engineering python pytorch tensorboard tensorflow visualization
Last synced: 3 days ago
Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
- Host: GitHub
- URL: https://github.com/aimhubio/aim
- Owner: aimhubio
- License: apache-2.0
- Created: 2019-05-31T18:25:07.000Z (over 5 years ago)
- Default Branch: main
- Last Pushed: 2024-10-23T20:02:09.000Z (about 2 months ago)
- Last Synced: 2024-10-24T12:58:12.714Z (about 2 months ago)
- Topics: ai, data-science, data-visualization, experiment-tracking, machine-learning, metadata, metadata-tracking, ml, mlflow, mlops, prompt-engineering, python, pytorch, tensorboard, tensorflow, visualization
- Language: Python
- Homepage: https://aimstack.io
- Size: 63.8 MB
- Stars: 5,193
- Watchers: 46
- Forks: 319
- Open Issues: 388
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Citation: CITATION.cff
- Codeowners: .github/CODEOWNERS
Awesome Lists containing this project
- Awesome-AIML-Data-Ops - Aim - A super-easy way to record, search and compare AI experiments. (Model and Data Versioning)
- awesome-list - Aim - An open-source, self-hosted ML experiment tracking tool. (Machine Learning Framework / Experiment Management)
- awesome-production-machine-learning - Aim - A super-easy way to record, search and compare AI experiments. (Model, Data and Experiment Tracking)
- awesome-mlops - Aim - A super-easy way to record, search and compare 1000s of ML training runs. (Model Lifecycle)
- awesome-python-applications - Repo - Hostable machine learning experiment tracker designed to handle 10,000s of training runs. `(linux, server, fastapi)` (AI/ML)
- awesome-python-machine-learning-resources - GitHub - 21% open · ⏱️ 25.08.2022 (Workflow & Experiment Tracking)
- project-awesome - aimhubio/aim - Aim 💫 — An easy-to-use & supercharged open-source experiment tracker. (Python)
- awesome-llmops - Aim - An easy-to-use and performant open-source experiment tracker. | ![GitHub Badge](https://img.shields.io/github/stars/aimhubio/aim.svg?style=flat-square) | (Training / Experiment Tracking)
- awesomeLibrary - aim - Aim 💫 — An easy-to-use and performant open-source ML experiment tracker. (Language resources / python)
README
An easy-to-use & supercharged open-source experiment tracker
Aim logs your training runs and any AI metadata, provides a beautiful UI to compare and observe them, and offers an API to query them programmatically.
[![Discord Server](https://dcbadge.vercel.app/api/server/zXq2NfVdtF?compact=true&style=flat)](https://community.aimstack.io/)
[![Twitter Follow](https://img.shields.io/twitter/follow/aimstackio?style=social)](https://twitter.com/aimstackio)
[![Medium](https://img.shields.io/badge/Medium-12100E?style=flat&logo=medium&logoColor=white)](https://medium.com/aimstack)
[![Platform Support](https://img.shields.io/badge/platform-Linux%20%7C%20macOS-blue)]()
[![PyPI - Python Version](https://img.shields.io/badge/python-%3E%3D%203.7-blue)](https://pypi.org/project/aim/)
[![PyPI Package](https://img.shields.io/pypi/v/aim?color=yellow)](https://pypi.org/project/aim/)
[![License](https://img.shields.io/badge/License-Apache%202.0-orange.svg)](https://opensource.org/licenses/Apache-2.0)
[![PyPI Downloads](https://img.shields.io/pypi/dw/aim?color=green)](https://pypi.org/project/aim/)
[![Issues](https://img.shields.io/github/issues/aimhubio/aim)](http://github.com/aimhubio/aim/issues)
SEAMLESSLY INTEGRATES WITH:
TRUSTED BY ML TEAMS FROM:
AimStack offers enterprise support that goes beyond core Aim. Contact us at [email protected].

---
About •
Demos •
Ecosystem •
Quick Start •
Examples •
Documentation •
Community •
Blog

---
# ℹ️ About
Aim is an open-source, self-hosted ML experiment tracking tool designed to handle 10,000s of training runs.
Aim provides a performant and beautiful UI for exploring and comparing training runs.
Additionally, its SDK enables programmatic access to tracked metadata — perfect for automations and Jupyter Notebook analysis.
Aim's mission is to democratize AI dev tools 🎯
Log Metadata Across Your ML Pipeline 💾
Visualize & Compare Metadata via UI 📊
- ML experiments and any metadata tracking
- Integration with popular ML frameworks
- Easy migration from other experiment trackers
- Metadata visualization via Aim Explorers
- Grouping and aggregation
- Querying using Python expressions (see the sample query after this list)
Run ML Trainings Effectively ⚡
Organize Your Experiments 🗂️
- System info and resource usage tracking
- Real-time alerting on training progress
- Logging and configurable notifications
- Detailed run information for easy debugging
- Centralized dashboard for holistic view
- Runs grouping with tags and experiments
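As a taste of the query syntax, here is a hypothetical expression (it assumes hyperparameters were logged under `hparams`, as in the quick start below); it can be pasted into the UI search bar or passed to the SDK's `Repo.query_metrics()` shown later in this README:

```python
# Aim queries are Python-like predicates evaluated against runs and metrics.
# This one selects the loss curves of runs with a sufficiently large learning rate.
query = 'run.hparams.learning_rate > 0.0001 and metric.name == "loss"'
```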
# 🎬 Demos
Check out live Aim demos NOW to see it in action.
| [Machine translation experiments](https://play.aimstack.io/nmt/metrics?grouping=O-JTdCJTIyY29sb3IlMjI6JTVCJTIycnVuLnBhcmFtcy5ocGFyYW1zLm1heF9rJTIyJTVELCUyMnN0cm9rZSUyMjolNUIlNUQsJTIyY2hhcnQlMjI6JTVCJTIybmFtZSUyMiwlMjJjb250ZXh0LnN1YnNldCUyMiU1RCwlMjJyZXZlcnNlTW9kZSUyMjolN0IlMjJjb2xvciUyMjpmYWxzZSwlMjJzdHJva2UlMjI6ZmFsc2UsJTIyY2hhcnQlMjI6ZmFsc2UlN0QsJTIyaXNBcHBsaWVkJTIyOiU3QiUyMmNvbG9yJTIyOnRydWUsJTIyc3Ryb2tlJTIyOnRydWUsJTIyY2hhcnQlMjI6dHJ1ZSU3RCwlMjJwZXJzaXN0ZW5jZSUyMjolN0IlMjJjb2xvciUyMjpmYWxzZSwlMjJzdHJva2UlMjI6ZmFsc2UlN0QsJTIyc2VlZCUyMjolN0IlMjJjb2xvciUyMjoxMCwlMjJzdHJva2UlMjI6MTAlN0QsJTIycGFsZXR0ZUluZGV4JTIyOjAlN0Q&chart=O-JTdCJTIyaGlnaGxpZ2h0TW9kZSUyMjoyLCUyMmlnbm9yZU91dGxpZXJzJTIyOnRydWUsJTIyem9vbSUyMjolN0IlMjJhY3RpdmUlMjI6ZmFsc2UsJTIybW9kZSUyMjowLCUyMmhpc3RvcnklMjI6JTVCJTVEJTdELCUyMmF4ZXNTY2FsZVR5cGUlMjI6JTdCJTIyeEF4aXMlMjI6JTIybGluZWFyJTIyLCUyMnlBeGlzJTIyOiUyMmxpbmVhciUyMiU3RCwlMjJheGVzU2NhbGVSYW5nZSUyMjolN0IlMjJ5QXhpcyUyMjolN0IlN0QsJTIyeEF4aXMlMjI6JTdCJTdEJTdELCUyMnNtb290aGluZyUyMjolN0IlMjJhbGdvcml0aG0lMjI6JTIyRVhQT05FTlRJQUxfTU9WSU5HX0FWRVJBR0UlMjIsJTIyZmFjdG9yJTIyOjAsJTIyY3VydmVJbnRlcnBvbGF0aW9uJTIyOiUyMmN1cnZlTGluZWFyJTIyLCUyMmlzQXBwbGllZCUyMjpmYWxzZSU3RCwlMjJhbGlnbm1lbnRDb25maWclMjI6JTdCJTIybWV0cmljJTIyOiUyMiUyMiwlMjJ0eXBlJTIyOiUyMnN0ZXAlMjIlN0QsJTIyZGVuc2l0eVR5cGUlMjI6NTAwLCUyMmFnZ3JlZ2F0aW9uQ29uZmlnJTIyOiU3QiUyMm1ldGhvZHMlMjI6JTdCJTIyYXJlYSUyMjoxLCUyMmxpbmUlMjI6MCU3RCwlMjJpc0FwcGxpZWQlMjI6dHJ1ZSwlMjJpc0VuYWJsZWQlMjI6dHJ1ZSU3RCwlMjJ0b29sdGlwJTIyOiU3QiUyMmFwcGVhcmFuY2UlMjI6JTIyYXV0byUyMiwlMjJkaXNwbGF5JTIyOnRydWUsJTIyc2VsZWN0ZWRGaWVsZHMlMjI6JTVCJTVELCUyMnNlbGVjdGVkUGFyYW1zJTIyOiU1QiU1RCU3RCwlMjJsZWdlbmRzJTIyOiU3QiUyMmRpc3BsYXklMjI6dHJ1ZSwlMjJtb2RlJTIyOiUyMnBpbm5lZCUyMiU3RCwlMjJmb2N1c2VkU3RhdGUlMjI6JTdCJTIyYWN0aXZlJTIyOnRydWUsJTIya2V5JTIyOiUyMk8tSlRkQ0pUSXljblZ1U0dGemFDVXlNam9sTWpKa1lUTmtNV1UzSlRJeUxDVXlNbTFsZEhKcFkwNWhiV1VsTWpJNkpUSXlZbVZ6ZEY5c2IzTnpKVEl5TENVeU1uUnlZV05sUTI5dWRHVjRkQ1V5TWpvbE4wSWxNakp6ZFdKelpYUWxNakk2SlRJeWRtRnNKVEl5SlRkRUpUZEUlMjIsJTIyeFZhbHVlJTIyOjIxLCUyMnlWYWx1ZSUyMjozLjQ3Mzk5OTk3NzEsJTIyY2hhcnRJbmRleCUyMjowLCUyMnZpc0lkJTIyOiUyMjAlMjIlN0QlN0Q&select=O-JTdCJTIyb3B0aW9ucyUyMjolNUIlN0IlMjJsYWJlbCUyMjolMjJiZXN0X2xvc3MlMjIsJTIyZ3JvdXAlMjI6JTIyYmVzdF9sb3NzJTIyLCUyMnR5cGUlMjI6JTIybWV0cmljcyUyMiwlMjJjb2xvciUyMjolMjIjN0E0Q0UwJTIyLCUyMmtleSUyMjolMjJPLUpUZENKVEl5YldWMGNtbGpUbUZ0WlNVeU1qb2xNakppWlhOMFgyeHZjM01sTWpJc0pUSXlZMjl1ZEdWNGRFNWhiV1VsTWpJNkpUSXlKVEl5SlRkRSUyMiwlMjJ2YWx1ZSUyMjolN0IlMjJvcHRpb25fbmFtZSUyMjolMjJiZXN0X2xvc3MlMjIsJTIyY29udGV4dCUyMjpudWxsJTdEJTdELCU3QiUyMmxhYmVsJTIyOiUyMmJsZXUlMjIsJTIyZ3JvdXAlMjI6JTIyYmxldSUyMiwlMjJ0eXBlJTIyOiUyMm1ldHJpY3MlMjIsJTIyY29sb3IlMjI6JTIyIzNFNzJFNyUyMiwlMjJrZXklMjI6JTIyTy1KVGRDSlRJeWJXVjBjbWxqVG1GdFpTVXlNam9sTWpKaWJHVjFKVEl5TENVeU1tTnZiblJsZUhST1lXMWxKVEl5T2lVeU1pVXlNaVUzUkElMjIsJTIydmFsdWUlMjI6JTdCJTIyb3B0aW9uX25hbWUlMjI6JTIyYmxldSUyMiwlMjJjb250ZXh0JTIyOm51bGwlN0QlN0QlNUQsJTIycXVlcnklMjI6JTIycnVuLmhwYXJhbXMubGVhcm5pbmdfcmF0ZSUyMCUzRSUyMDAuMDAwMDElMjIsJTIyYWR2YW5jZWRNb2RlJTIyOmZhbHNlLCUyMmFkdmFuY2VkUXVlcnklMjI6JTIycnVuLmhwYXJhbXMubGVhcm5pbmdfcmF0ZSUyMCUzRSUyMDAuMDAwMDElMjBhbmQlMjAoKG1ldHJpYy5uYW1lJTIwPT0lMjAlNUMlMjJibGV1JTVDJTIyKSUyMG9yJTIwKG1ldHJpYy5uYW1lJTIwPT0lMjAlNUMlMjJiZXN0X2xvc3MlNUMlMjIpJTIwb3IlMjAobWV0cmljLm5hbWUlMjA9PSUyMCU1QyUyMmJzeiU1QyUyMiUyMGFuZCUyMG1ldHJpYy5jb250ZXh0LnN1YnNldCUyMD09JTIwJTVDJTIydHJhaW4lNUMlMjIpKSUyMiU3RA) | [lightweight-GAN experiments](https://play.aimstack.io/image-generation/images?grouping=O-JTdCJTIycm93JTIyOiU1QiU1RCwlMjJyZXZlcnNlTW9kZSUyMjolN0IlMjJyb3clMjI6ZmFsc2UsJTIyZ3JvdXAlMjI6ZmFsc2UlN0QsJTIyaXNBcHBsaWVkJTIyOiU3QiUyMnJvdyUyMjp0cnVlLCUyMmdyb3VwJTIyOnRydWUlN0QsJTIyZ3JvdXAlMjI6JTVCJTIyaW5kZXglMjIsJTIyc3RlcCUyMiU1RCU3RA&select=O-JTdCJTIyb3B0aW9ucyUyMjolNUIlN0IlMjJsYWJlbCUyMjolMjJnZW5lcmF0ZWQlMjIsJTIyZ3JvdXAlMjI6JTIyZ2VuZXJhdGVkJTIyLCUyMmNvbG9yJTIyOiUyMiMzRTcyRTclMjIsJTIya2V5JTIyOiUyMk8tSlRkQ0pUSXliV1YwY21salRtRnRaU1V5TWpvbE1qSm5aVzVsY21GMFpXUWxNaklzSlRJeVkyOXVkR1Y0ZEU1aGJXVWxNakk2SlRJeUpUSXlKVGRFJTIyLCUyMnZhbHVlJTIyOiU3QiUyMm9wdGlvbl9uYW1lJTIyOiUyMmdlbmVyYXRlZCUyMiwlMjJjb250ZXh0JTIyOm51bGwlN0QlN0QlNUQsJTIycXVlcnklMjI6JTIyaW1hZ2VzLmNvbnRleHQuaW50ZXJwb2xhdGVkJTIwYW5kJTIwcnVuLmhwYXJhbXMubmFtZSUyMD09JTIwJTVDJTIybWV0ZmFjZXMlNUMlMjIlMjIsJTIyYWR2YW5jZWRNb2RlJTIyOmZhbHNlLCUyMmFkdmFuY2VkUXVlcnklMjI6JTIyaW1hZ2VzLmNvbnRleHQuaW50ZXJwb2xhdGVkJTIwYW5kJTIwcnVuLmhwYXJhbXMubmFtZSUyMD09JTIwJTVDJTIybWV0ZmFjZXMlNUMlMjIlMjIlN0Q&images=O-JTdCJTIyaW5kZXhEZW5zaXR5JTIyOjcsJTIycmVjb3JkRGVuc2l0eSUyMjoxNSwlMjJ0b29sdGlwJTIyOiU3QiUyMmFwcGVhcmFuY2UlMjI6JTIyYXV0byUyMiwlMjJkaXNwbGF5JTIyOnRydWUsJTIyc2VsZWN0ZWRGaWVsZHMlMjI6JTVCJTVELCUyMnNlbGVjdGVkUGFyYW1zJTIyOiU1QiU1RCU3RCwlMjJhZGRpdGlvbmFsUHJvcGVydGllcyUyMjolN0IlMjJhbGlnbm1lbnRUeXBlJTIyOiUyMkhlaWdodCUyMiwlMjJtZWRpYUl0ZW1TaXplJTIyOjIzLCUyMmltYWdlUmVuZGVyaW5nJTIyOiUyMnBpeGVsYXRlZCUyMiwlMjJzdGFja2luZyUyMjpmYWxzZSU3RCwlMjJmb2N1c2VkU3RhdGUlMjI6JTdCJTIyYWN0aXZlJTIyOmZhbHNlLCUyMmtleSUyMjpudWxsJTdELCUyMnNvcnRGaWVsZHMlMjI6JTVCJTVELCUyMnNvcnRGaWVsZHNEaWN0JTIyOiU3QiU3RCwlMjJpbnB1dHNWYWxpZGF0aW9ucyUyMjolN0IlMjJpbmRleERlbnNpdHklMjI6dHJ1ZSwlMjJyZWNvcmREZW5zaXR5JTIyOnRydWUlN0QsJTIyY2FsY1JhbmdlcyUyMjpmYWxzZSwlMjJzdGVwUmFuZ2UlMjI6JTVCMSw0Njk1NiU1RCwlMjJpbmRleFJhbmdlJTIyOiU1QjAsNyU1RCwlMjJyZWNvcmRTbGljZSUyMjolNUIzMTE0Miw0Njk1NyU1RCwlMjJpbmRleFNsaWNlJTIyOiU1QjAsOCU1RCwlMjJpbWFnZVByb3BlcnRpZXMlMjI6JTdCJTIyYWxpZ25tZW50VHlwZSUyMjolMjJIZWlnaHQlMjIsJTIyaW1hZ2VTaXplJTIyOjI1LCUyMmltYWdlUmVuZGVyaW5nJTIyOiUyMnBpeGVsYXRlZCUyMiU3RCU3RA)|
|:---:|:---:|
| | |
| Training logs of a neural translation model (from the WMT'19 competition). | Training logs of 'lightweight' GAN, proposed in ICLR 2021. |

| [FastSpeech 2 experiments](https://play.aimstack.io/fastspeech2/runs/d9e89aa7875e44b2ba85612a/audios) | [Simple MNIST](https://play.aimstack.io/digit-recognition/runs/426032ad2d7e4b0385bc6c51/distributions) |
|:---:|:---:|
| | |
| Training logs of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech". | Simple MNIST training logs. |

# 🌍 Ecosystem
Aim is not just an experiment tracker. It's the groundwork for an ecosystem.
Check out the two most famous Aim-based tools.

| [aimlflow](https://github.com/aimhubio/aimlflow) | [Aim-spaCy](https://github.com/aimhubio/aim-spacy) |
|:---:|:---:|
| ![aimlflow](https://user-images.githubusercontent.com/97726819/225957836-cdec88e3-4993-435a-a135-d78be3ac1635.png) | ![Aim-spaCy](https://user-images.githubusercontent.com/97726819/225957990-4edc4525-1f65-4405-b663-ce7af888bdfa.png) |
| Exploring MLflow experiments with a powerful UI | An Aim-based spaCy experiment tracker |

# 🏁 Quick start
Follow the steps below to get started with Aim.
## 1. Install Aim on your training environment
```shell
pip3 install aim
```

## 2. Integrate Aim with your code
```python
from aim import Run

# Initialize a new run
run = Run()

# Log run parameters
run["hparams"] = {
    "learning_rate": 0.001,
    "batch_size": 32,
}

# Log metrics
for i in range(10):
    run.track(i, name="loss", step=i, context={"subset": "train"})
    run.track(i, name="acc", step=i, context={"subset": "train"})
```

_See the full list of supported trackable objects (e.g. images, text, etc.) [here](https://aimstack.readthedocs.io/en/latest/quick_start/supported_types.html)._
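Beyond scalars, the same `run.track()` call accepts Aim's media object types. A minimal sketch continuing the quick-start `run` above (the image is a synthetic placeholder; in practice you would track e.g. model outputs):

```python
from PIL import Image as PILImage

from aim import Image

# Placeholder picture; aim.Image also accepts numpy arrays and file paths
pil_image = PILImage.new("RGB", (64, 64), color="purple")

# Media objects are tracked exactly like metrics
run.track(Image(pil_image), name="samples", step=0, context={"subset": "train"})
```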
## 3. Run the training as usual and start Aim UI
```shell
aim up
```
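By default the UI is served locally on port 43800; the host and port can be overridden if needed, for example:

```shell
# Serve the Aim UI on all interfaces, keeping the default port
aim up --host 0.0.0.0 --port 43800
```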
## Learn more

Migrate from other tools
Aim has built-in converters to easily migrate logs from other tools.
These migrations cover the most common usage scenarios; a sample invocation follows the list.
In case of custom and complex scenarios you can use the Aim SDK to implement your own conversion script.

- [TensorBoard logs converter](https://aimstack.readthedocs.io/en/latest/quick_start/convert_data.html#show-tensorboard-logs-in-aim)
- [MLflow logs converter](https://aimstack.readthedocs.io/en/latest/quick_start/convert_data.html#show-mlflow-logs-in-aim)
- [Weights & Biases logs converter](https://aimstack.readthedocs.io/en/latest/quick_start/convert_data.html#show-weights-and-biases-logs-in-aim)
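For instance, the TensorBoard converter is invoked through the `aim convert` CLI; a sketch with a placeholder log directory, run from inside an initialized Aim repo:

```shell
# One-off conversion of existing TensorBoard event files into the current Aim repo
aim convert tensorboard --logdir /path/to/tensorboard/logs
```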
Integrate Aim into an existing project

Aim easily integrates with a wide range of ML frameworks, providing built-in callbacks for most of them (a minimal usage sketch follows the list).
- [Integration with PyTorch Ignite](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-pytorch-ignite)
- [Integration with PyTorch Lightning](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-pytorch-lightning)
- [Integration with Hugging Face](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-hugging-face)
- [Integration with Keras & tf.Keras](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-keras-tf-keras)
- [Integration with Keras Tuner](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-keras-tuner)
- [Integration with XGBoost](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-xgboost)
- [Integration with CatBoost](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-catboost)
- [Integration with LightGBM](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-lightgbm)
- [Integration with fastai](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-fastai)
- [Integration with MXNet](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-mxnet)
- [Integration with Optuna](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-optuna)
- [Integration with PaddlePaddle](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-paddlepaddle)
- [Integration with Stable-Baselines3](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-stable-baselines3)
- [Integration with Acme](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-acme)
- [Integration with Prophet](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-prophet)
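As one example, the PyTorch Lightning integration ships as a standard Lightning logger; a minimal sketch (the model and datamodule are placeholders you would supply):

```python
import pytorch_lightning as pl

from aim.pytorch_lightning import AimLogger

# AimLogger implements Lightning's logger interface, so it drops into Trainer as-is
aim_logger = AimLogger(experiment="my_experiment")

trainer = pl.Trainer(logger=aim_logger, max_epochs=10)
# trainer.fit(MyLightningModule(), datamodule=my_datamodule)  # placeholders
```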
Query runs programmatically via SDK

The Aim Python SDK empowers you to query and access any piece of tracked metadata with ease.
```python
from aim import Repo

my_repo = Repo('/path/to/aim/repo')

query = "metric.name == 'loss'"  # Example query

# Get collection of metrics
for run_metrics_collection in my_repo.query_metrics(query).iter_runs():
    for metric in run_metrics_collection:
        # Get run params
        params = metric.run[...]
        # Get metric values
        steps, metric_values = metric.values.sparse_numpy()
```

Set up a centralized tracking server
The Aim remote tracking server allows running experiments in a multi-host environment and collecting tracked data in a centralized location.
See the docs on how to [set up the remote server](https://aimstack.readthedocs.io/en/latest/using/remote_tracking.html).
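A rough sketch of the two sides (the host name, port, and repo path are placeholders; 53800 is the server's documented default port):

```shell
# On the central host: start the remote tracking server over a shared Aim repo
aim server --repo /path/to/central/aim/repo
```

```python
# On each training host: point runs at the server via the aim:// scheme
from aim import Run

run = Run(repo="aim://central-host:53800")
run.track(0.5, name="loss", step=0)
```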
Deploy Aim on Kubernetes
- The official Aim docker image: https://hub.docker.com/r/aimstack/aim
- A guide on how to deploy Aim on Kubernetes: https://aimstack.readthedocs.io/en/latest/using/k8s_deployment.html

Read the full documentation on [aimstack.readthedocs.io](https://aimstack.readthedocs.io) 📖
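Before a full Kubernetes deployment, the official image can be tried locally; a sketch, assuming the image's default command serves the UI on Aim's default 43800 port:

```shell
# Run the official image, exposing the UI port to the host
docker run --publish 43800:43800 aimstack/aim
```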
# 🆚 Comparisons to familiar tools
TensorBoard vs Aim
**Training run comparison**
Order of magnitude faster training run comparison with Aim
- The tracked params are first-class citizens in Aim. You can search, group, and aggregate via params, and deeply explore all the tracked data (metrics, params, images) in the UI.
- With TensorBoard, users are forced to record those parameters in the training run name to be able to search and compare. This makes for a tedious comparison experience and causes usability issues on the UI when there are many experiments and params. TensorBoard doesn't have features to group and aggregate the metrics.

**Scalability**
- Aim is built to handle 1000s of training runs - both on the backend and on the UI.
- TensorBoard becomes really slow and hard to use when a few hundred training runs are queried / compared.

**Beloved TB visualizations to be added to Aim**
- Embedding projector.
- Neural network visualization.

MLflow vs Aim
MLflow is an end-to-end ML lifecycle tool.
Aim is focused on training tracking.
The main differences between Aim and MLflow are around UI scalability and run comparison features.

Aim and MLflow are a perfect match - check out [aimlflow](https://github.com/aimhubio/aimlflow), the tool that enables Aim superpowers on MLflow.
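Per the aimlflow project, syncing an MLflow tracking store into an Aim repo is a single command; a sketch with placeholder paths (the flag names follow the aimlflow README, so double-check them there):

```shell
# Continuously sync MLflow logs into an Aim repository
aimlflow sync --mlflow-tracking-uri=/path/to/mlruns --aim-repo=/path/to/aim/repo
```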
**Run comparison**
- Aim treats tracked parameters as first-class citizens. Users can query runs, metrics, and images, and filter using the params.
- MLflow does have a search by tracked config, but offers no grouping, aggregation, subplotting by hyperparameters, or other comparison features.

**UI Scalability**
- The Aim UI can smoothly handle several thousand metrics at the same time, each with thousands of steps. It may get shaky when you explore thousands of metrics with tens of thousands of steps each. But we are constantly optimizing!
- The MLflow UI becomes slow to use when there are a few hundred runs.

Weights and Biases vs Aim
Hosted vs self-hosted
- Weights and Biases is a hosted, closed-source MLOps platform.
- Aim is a self-hosted, free and open-source experiment tracking tool.

# 🛣️ Roadmap
## Detailed milestones
The [Aim product roadmap](https://github.com/orgs/aimhubio/projects/3) :sparkle:
- The `Backlog` contains the issues we are going to choose from and prioritize weekly
- The issues are mainly prioritized by the most highly requested features

## High-level roadmap
The high-level features we are going to work on over the next few months:
**In progress**
- [ ] Aim SDK low-level interface
- [ ] Dashboards — customizable layouts with embedded explorers
- [ ] Ergonomic UI kit
- [ ] Text Explorer

**Next-up**
**Aim UI**
- Runs management
- Runs explorer — query and visualize runs data (images, audio, distributions, ...) in a central dashboard
- Explorers
- Distributions Explorer

**SDK and Storage**
- Scalability
- Smooth UI and SDK experience with over 10,000 runs
- Runs management
- CLI commands
- Reporting - runs summary and run details in a CLI compatible format
- Manipulations — copy, move, delete runs, params and sequences
- Cloud storage support — store runs' blob data (e.g. images) on the cloud
- Artifact storage — store files, model checkpoints, and beyond

**Integrations**
- ML Frameworks:
- Shortlist: scikit-learn
- Resource management tools
- Shortlist: Kubeflow, Slurm
- Workflow orchestration tools

**Done**
- [x] Live updates (Shipped: _Oct 18 2021_)
- [x] Images tracking and visualization (Start: _Oct 18 2021_, Shipped: _Nov 19 2021_)
- [x] Distributions tracking and visualization (Start: _Nov 10 2021_, Shipped: _Dec 3 2021_)
- [x] Jupyter integration (Start: _Nov 18 2021_, Shipped: _Dec 3 2021_)
- [x] Audio tracking and visualization (Start: _Dec 6 2021_, Shipped: _Dec 17 2021_)
- [x] Transcripts tracking and visualization (Start: _Dec 6 2021_, Shipped: _Dec 17 2021_)
- [x] Plotly integration (Start: _Dec 1 2021_, Shipped: _Dec 17 2021_)
- [x] Colab integration (Start: _Nov 18 2021_, Shipped: _Dec 17 2021_)
- [x] Centralized tracking server (Start: _Oct 18 2021_, Shipped: _Jan 22 2022_)
- [x] TensorBoard adaptor - visualize TensorBoard logs with Aim (Start: _Dec 17 2021_, Shipped: _Feb 3 2022_)
- [x] Track git info, env vars, CLI arguments, dependencies (Start: _Jan 17 2022_, Shipped: _Feb 3 2022_)
- [x] MLFlow adaptor (visualize MLflow logs with Aim) (Start: _Feb 14 2022_, Shipped: _Feb 22 2022_)
- [x] Activeloop Hub integration (Start: _Feb 14 2022_, Shipped: _Feb 22 2022_)
- [x] PyTorch-Ignite integration (Start: _Feb 14 2022_, Shipped: _Feb 22 2022_)
- [x] Run summary and overview info (system params, CLI args, git info, ...) (Start: _Feb 14 2022_, Shipped: _Mar 9 2022_)
- [x] Add DVC-related metadata into Aim run (Start: _Mar 7 2022_, Shipped: _Mar 26 2022_)
- [x] Ability to attach notes to Run from UI (Start: _Mar 7 2022_, Shipped: _Apr 29 2022_)
- [x] Fairseq integration (Start: _Mar 27 2022_, Shipped: _Mar 29 2022_)
- [x] LightGBM integration (Start: _Apr 14 2022_, Shipped: _May 17 2022_)
- [x] CatBoost integration (Start: _Apr 20 2022_, Shipped: _May 17 2022_)
- [x] Run execution details (display stdout/stderr logs) (Start: _Apr 25 2022_, Shipped: _May 17 2022_)
- [x] Long sequences (up to 5M steps) support (Start: _Apr 25 2022_, Shipped: _Jun 22 2022_)
- [x] Figures Explorer (Start: _Mar 1 2022_, Shipped: _Aug 21 2022_)
- [x] Notify on stuck runs (Start: _Jul 22 2022_, Shipped: _Aug 21 2022_)
- [x] Integration with KerasTuner (Start: _Aug 10 2022_, Shipped: _Aug 21 2022_)
- [x] Integration with WandB (Start: _Aug 15 2022_, Shipped: _Aug 21 2022_)
- [x] Stable remote tracking server (Start: _Jun 15 2022_, Shipped: _Aug 21 2022_)
- [x] Integration with fast.ai (Start: _Aug 22 2022_, Shipped: _Oct 6 2022_)
- [x] Integration with MXNet (Start: _Sep 20 2022_, Shipped: _Oct 6 2022_)
- [x] Project overview page (Start: _Sep 1 2022_, Shipped: _Oct 6 2022_)
- [x] Remote tracking server scaling (Start: _Sep 11 2022_, Shipped: _Nov 26 2022_)
- [x] Integration with PaddlePaddle (Start: _Oct 2 2022_, Shipped: _Nov 26 2022_)
- [x] Integration with Optuna (Start: _Oct 2 2022_, Shipped: _Nov 26 2022_)
- [x] Audios Explorer (Start: _Oct 30 2022_, Shipped: _Nov 26 2022_)
- [x] Experiment page (Start: _Nov 9 2022_, Shipped: _Nov 26 2022_)
- [x] HuggingFace datasets (Start: _Dec 29 2022_, Shipped: _Feb 3 2023_)

# 👥 Community
## Aim README badge
Add the Aim badge to your README if you've enjoyed using Aim in your work:
[![Aim](https://img.shields.io/badge/powered%20by-Aim-%231473E6)](https://github.com/aimhubio/aim)
```
[![Aim](https://img.shields.io/badge/powered%20by-Aim-%231473E6)](https://github.com/aimhubio/aim)
```

## Cite Aim in your papers
In case you've found Aim helpful in your research journey, we'd be thrilled if you could acknowledge Aim's contribution:
```bibtex
@software{Arakelyan_Aim_2020,
author = {Arakelyan, Gor and Soghomonyan, Gevorg and {The Aim team}},
doi = {10.5281/zenodo.6536395},
license = {Apache-2.0},
month = {6},
title = {{Aim}},
url = {https://github.com/aimhubio/aim},
version = {3.9.3},
year = {2020}
}
```

## Contributing to Aim

Considering contributing to Aim?
To get started, please take a moment to read the [CONTRIBUTING.md](https://github.com/aimhubio/aim/blob/main/CONTRIBUTING.md) guide.

Join Aim contributors by submitting your first pull request. Happy coding!
Made with [contrib.rocks](https://contrib.rocks).
## More questions?
1. [Read the docs](https://aimstack.readthedocs.io/en/latest/)
2. [Open a feature request or report a bug](https://github.com/aimhubio/aim/issues)
3. [Join Discord community server](https://community.aimstack.io/)