https://github.com/sanggusti/wandb-mlops
A project of Semantic Segmentations Modelling over Cityscape Datasets that incorporating Workflows, Experiment Tracking, Pipeline and testing
- Host: GitHub
- URL: https://github.com/sanggusti/wandb-mlops
- Owner: sanggusti
- Created: 2023-02-19T11:06:50.000Z (over 2 years ago)
- Default Branch: master
- Last Pushed: 2023-08-28T09:21:37.000Z (about 2 years ago)
- Last Synced: 2025-06-26T12:53:22.609Z (4 months ago)
- Topics: cicd, experiment-tracking, github-actions, mlops, semantic-segmentation, wandb
- Language: Python
- Homepage:
- Size: 112 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
# BDD Semantic Segmentation using WandB
This is a semantic segmentation project on a BDD (Cityscapes-like) dataset, tracked via wandb. You can see the runs on [my wandb](https://wandb.ai/gustiwinata/mlops-course-001).
The objective of this project is to segment the pedestrian-level view from a car camera as a semantic segmentation problem, using wandb to track the experiments and this repository's modular code to keep the project reproducible.
## How to set up the project
The setup is pretty basic, just run:
```bash
> pip install virtualenv
> virtualenv venv
> source venv/bin/activate
> pip install -r requirements.txt
> wandb login
```

Then run the scripts sequentially:
```bash
> python data_loader.py
> python split.py
> python baseline.py
> python eval.py
```

> You only need to run `data_loader.py` and `split.py` once, since the data is static, but you can run `baseline.py` and `eval.py` multiple times; those are what the experiments are about.
You can set the hyperparameter configs to experiment with in `baseline.py`; try tweaking some of them. `eval.py` checks the models produced by `baseline.py` runs against the held-out test set.
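As a rough illustration of experimenting with hyperparameters, the sketch below keeps the tunables in one dict and expands a small grid of variants; each resulting config would be passed to `wandb.init(project=..., config=cfg)` inside a training script. The key names and values here are assumptions for the sake of the example, not taken from `baseline.py`.

```python
from itertools import product

# Hypothetical hyperparameters -- the actual keys in baseline.py may differ.
base_config = {
    "batch_size": 8,
    "img_size": 180,
    "epochs": 10,
    "lr": 2e-3,
    "arch": "resnet18",
}

def experiment_configs(**grid):
    """Yield one config dict per combination of the swept values."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield {**base_config, **dict(zip(keys, values))}

# Example sweep over learning rate and backbone. In baseline.py each cfg
# would go to wandb.init(config=cfg) so every run records its settings,
# and metrics would be reported with wandb.log({"val/loss": ...}).
configs = list(experiment_configs(lr=[1e-3, 2e-3], arch=["resnet18", "resnet34"]))
```

Tracking the config this way means the wandb dashboard can group and compare runs by any hyperparameter without extra bookkeeping.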
## Reports on Wandb
You can check my reports on this repository's runs on the following pages:
- [Dataset Exploration](https://api.wandb.ai/links/gustiwinata/etuh4k5c)
- [Hyperparameter Sweep](https://api.wandb.ai/links/gustiwinata/x2vn7bk9)
- [Model Evaluation](https://api.wandb.ai/links/gustiwinata/8rw8l59g)