👦 Human head semantic segmentation
- Host: GitHub
- URL: https://github.com/wiktorlazarski/head-segmentation
- Owner: wiktorlazarski
- License: other
- Created: 2022-02-05T21:03:58.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2024-05-31T09:55:31.000Z (12 months ago)
- Last Synced: 2025-03-24T17:50:10.151Z (2 months ago)
- Topics: celeba-dataset, computer-vision, deep-learning, human-head, hydra, pytorch, pytorch-lightning, semantic-segmentation, semantic-segmentation-pytorch, unet, wandb
- Language: Jupyter Notebook
- Homepage:
- Size: 6.78 MB
- Stars: 81
- Watchers: 3
- Forks: 18
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
______________________________________________________________________
# 👦 Human Head Semantic Segmentation
______________________________________________________________________
[CI testing](https://github.com/wiktorlazarski/head-segmentation/actions/workflows/ci-testing.yml)
[Open in Colab](https://colab.research.google.com/drive/1QScgxBRXWbGbQ3DmYIJ2Ja3sD_mJ_Efx?usp=sharing)
[Code style: black](https://github.com/psf/black)

## 💎 Installation with `pip`
Installation is as simple as running:
```bash
pip install git+https://github.com/wiktorlazarski/head-segmentation.git
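# (Optional) Sanity-check the installation by importing the package used in the examples below
python -c "import head_segmentation.segmentation_pipeline"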
```

## 🔨 How to use
### 🤔 Inference
```python
import head_segmentation.segmentation_pipeline as seg_pipeline

segmentation_pipeline = seg_pipeline.HumanHeadSegmentationPipeline()
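# NOTE (assumption): `image` below is an RGB image as a NumPy array, e.g. loaded with:
#   import numpy as np
#   from PIL import Image
#   image = np.asarray(Image.open("portrait.jpg"))  # "portrait.jpg" is a hypothetical path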
segmentation_map = segmentation_pipeline.predict(image)
```

### 🎨 Visualizing
```python
import matplotlib.pyplot as plt

import head_segmentation.visualization as vis
visualizer = vis.VisualizationModule()
figure, _ = visualizer.visualize_prediction(image, segmentation_map)
plt.show()
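# Optionally, save the visualization to disk instead of (or in addition to) showing it:
# figure.savefig("segmentation_visualization.png")  # hypothetical output path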
```

## ⚙️ Setup for development
```bash
# Clone repo
git clone https://github.com/wiktorlazarski/head-segmentation.git

# Go to repo directory
cd head-segmentation

# (Optional) Create virtual environment
python -m venv venv
source ./venv/bin/activate

# Install project in editable mode
pip install -e .[dev]

# (Optional but recommended) Install pre-commit hooks to preserve code format consistency
pre-commit install
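# (Optional, assumption) Run the test suite; the CI workflow suggests tests exist, e.g. with pytest:
# pytest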
```

## 🐍 Setup for development with Anaconda or Miniconda
```bash
# Clone repo
git clone https://github.com/wiktorlazarski/head-segmentation.git

# Go to repo directory
cd head-segmentation

# Create and activate conda environment
conda env create -f ./conda_env.yml
conda activate head_segmentation

# (Optional but recommended) Install pre-commit hooks to preserve code format consistency
pre-commit install
```

## 🔬 Quantitative results
**Keep in mind** that we trained our model on the CelebA dataset, so it may not perform as well on your data if it comes from a different distribution than CelebA.
The table below presents results for the three best models we trained, computed on full-scale test-set images. The model naming convention is `<backbone>_<input image size>`.
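Head IoU, background IoU, and mIoU in the table are standard intersection-over-union metrics. For reference, a minimal NumPy sketch of how such metrics can be computed from binary masks (illustrative only, not the project's evaluation code) might look like this:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two boolean masks of the same shape."""
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, target).sum() / union

def mean_iou(pred_head: np.ndarray, target_head: np.ndarray) -> float:
    """mIoU = average of head IoU and background IoU."""
    head_iou = iou(pred_head, target_head)
    background_iou = iou(~pred_head, ~target_head)
    return (head_iou + background_iou) / 2
```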
| Model | mobilenetv2_256 | mobilenetv2_512 | resnet34_512 |
|:--------------:|:---------------:|:---------------:|:------------:|
| head IoU | 0.967606 | 0.967337 | **0.968457** |
| background IoU | 0.942936 | 0.942160 | **0.944469** |
| mIoU           | 0.955271        | 0.954749        | **0.956463** |

## 🧐 Qualitative results
If you want to check predictions on some of your images, please feel free to use our Streamlit application.
```bash
cd head-segmentation

streamlit run ./scripts/apps/web_checking.py
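# Streamlit prints a local URL (by default http://localhost:8501) that you can open in a browser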
```

## ⏰ Inference time
If inference time matters to you, you can use a GPU to accelerate inference. Visualization also takes some time; if you only need the segmented image, you can save the final result directly, as shown below.
```python
import cv2
import torch
from PIL import Image

import head_segmentation.segmentation_pipeline as seg_pipeline

# Run the model on GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
segmentation_pipeline = seg_pipeline.HumanHeadSegmentationPipeline(device=device)
segmentation_map = segmentation_pipeline.predict(image)
# Keep only the head region by masking the image with the segmentation map
segmented_region = image * cv2.cvtColor(segmentation_map, cv2.COLOR_GRAY2RGB)
pil_image = Image.fromarray(segmented_region)
pil_image.save(save_path)
```

The table below presents inference times measured on a Tesla T4 (for reference only). The first prediction takes longer than subsequent ones.
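To get a rough sense of inference time on your own hardware, you can time the pipeline yourself. The sketch below reuses `segmentation_pipeline` and `image` from the snippet above and includes a warm-up call, since the first prediction is slower:

```python
import time

# Warm-up: the first call pays one-time costs (model loading, CUDA initialization, ...)
segmentation_pipeline.predict(image)

start = time.perf_counter()
segmentation_map = segmentation_pipeline.predict(image)
elapsed = time.perf_counter() - start
print(f"Inference took {elapsed:.3f} s")
```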
|     Device     |      save figure      | just save final result |
|:--------------:|:---------------------:|:---------------------:|
| cpu | around 2.1s | around 0.8s |
| gpu            | around 1.4s           | around 0.15s           |

### 🤗 Enjoy the model!