# PSPA
*[Panoptic Segmentation with Partial Annotations for Agricultural Robots](https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/weyler2024ral.pdf)*
We present a novel approach to leverage partial annotations for panoptic segmentation.
These partial annotations contain
ground truth information only for a subset of pixels per image and are thus
much faster to obtain than dense annotations. We propose a novel set of losses
that exploit measures from vector fields used in physics, i.e., divergence and
curl, to effectively supervise predictions without ground truth annotations.
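To make the idea concrete, here is a minimal sketch (not the repository's loss implementation) of how divergence and curl of a dense 2D offset field can be computed with finite differences; the tensor layout and function name below are assumptions:

```python
# Minimal sketch, not the authors' loss code: finite-difference divergence
# and curl of a predicted 2D offset field of shape (B, 2, H, W).
import torch

def divergence_and_curl(field: torch.Tensor):
    u, v = field[:, 0], field[:, 1]               # x- and y-components
    du_dy, du_dx = torch.gradient(u, dim=(1, 2))  # derivatives along H, W
    dv_dy, dv_dx = torch.gradient(v, dim=(1, 2))
    div = du_dx + dv_dy                           # divergence of (u, v)
    curl = dv_dx - du_dy                          # z-component of the 2D curl
    return div, curl
```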

As an example, the following figure shows a comparison between dense annotations (left)
and partial annotations (right). The former contain ground truth annotations
for all instances, i.e., *things*, and each pixel in the background, i.e., *stuff*.
In contrast, the latter contain annotations for only a few instances and
some blob-like labels for the background.

# Requirements
- We assume that ```wget``` is installed on your machine and that your CUDA Runtime version is >= 11.0
- We use ```conda``` to set up a virtual environment - in case ```conda``` is not installed on your machine, follow the official [instructions](https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html) (we recommend miniconda)
# Setup
The following script will set up a virtual environment named *pspa*
```bash
./setup.sh
```
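Once the environment is created, you can sanity-check the CUDA requirement from inside it; this snippet assumes ```setup.sh``` installs PyTorch into the *pspa* environment:

```python
# Assumes setup.sh installed PyTorch into the pspa environment.
import torch

print("CUDA runtime:", torch.version.cuda)         # should be >= 11.0
print("GPU available:", torch.cuda.is_available())
```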
# Datasets
- We use the [PhenoBench](https://www.phenobench.org/) dataset
- However, as mentioned in the paper, we use a subset of images during training to ensure that each unique instance appears only once
- Consequently, we provide in ```phenobench_auxiliary/split.yaml``` the filenames of images used during training
To download the dataset and organize it in the expected format, run the following script (requires approx. 15 GB)
```bash
cd ./phenobench_auxiliary
./get_phenobench.sh
cd ..
```
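The exact structure of ```phenobench_auxiliary/split.yaml``` is not documented here; a quick way to inspect which training filenames it lists (assuming PyYAML is available in the environment):

```python
# Inspect the training split; the file's structure is an assumption to verify.
import yaml

with open("phenobench_auxiliary/split.yaml") as f:
    split = yaml.safe_load(f)

print(split.keys() if isinstance(split, dict) else f"{len(split)} entries")
```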
# Pretrained Models
We provide pretrained models based on the ERFNet architecture, trained with different amounts of partial annotations:
- [Model](https://www.ipb.uni-bonn.de/html/deeplearningmodels/weyler2024ral/model_100.ckpt) trained with all annotations
- [Model](https://www.ipb.uni-bonn.de/html/deeplearningmodels/weyler2024ral/model_050.ckpt) trained with 50% of all annotations
- [Model](https://www.ipb.uni-bonn.de/html/deeplearningmodels/weyler2024ral/model_025.ckpt) trained with 25% of all annotations
- [Model](https://www.ipb.uni-bonn.de/html/deeplearningmodels/weyler2024ral/model_010.ckpt) trained with 10% of all annotations
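The checkpoints can be fetched with ```wget``` or, equivalently, from Python; the URLs are the ones listed above:

```python
# Download one of the pretrained checkpoints listed above.
import urllib.request

url = "https://www.ipb.uni-bonn.de/html/deeplearningmodels/weyler2024ral/model_100.ckpt"
urllib.request.urlretrieve(url, "model_100.ckpt")
```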
# Inference
Before you start the inference you need to specify the path to the dataset in the corresponding configuration file (```config/config-phenobench.yaml```), e.g.:
```yaml
data:
  path: <path/to/phenobench>/test
```
Please change only ```<path/to/phenobench>``` according to your previously specified path to download PhenoBench, but keep ```test``` as the directory at the very end.
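A small, optional check that the configured path is valid before running inference; the ```data```/```path``` keys follow the config excerpt above:

```python
# Optional check that the configured dataset path exists and ends in "test".
import os
import yaml

with open("config/config-phenobench.yaml") as f:
    cfg = yaml.safe_load(f)

path = cfg["data"]["path"]
assert os.path.basename(os.path.normpath(path)) == "test", path
assert os.path.isdir(path), f"dataset path not found: {path}"
```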
Next, you can run the model in inference mode
```bash
conda activate pspa
python predict.py --config ./config/config-phenobench.yaml --export <path/to/export> --ckpt <path/to/checkpoint.ckpt>
```
In the specified ```<path/to/export>``` directory you will find the predicted semantics and plant instances.
In case you want to apply our proposed *fusing procedure*, i.e., assign a unique semantic class to each instance, you need to run the following subsequently
```bash
conda activate pspa
python auxiliary/merge_plants_sem.py --semantics <semantics> --plants <plants> --export <export>
```
To be more specific about the paths:
- ```<semantics>``` should be the directory with the semantic predictions created by ```predict.py```
- ```<plants>``` should be the directory with the plant instance predictions created by ```predict.py```
- Both paths contain png files with the previous semantic and instance predictions
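Conceptually, such a fusion can be implemented as a per-instance majority vote over the semantic map; the following is a minimal sketch of that idea, not the code in ```auxiliary/merge_plants_sem.py```:

```python
# Sketch of a per-instance majority vote; assumes instance id 0 = background.
import numpy as np

def fuse(semantics: np.ndarray, instances: np.ndarray) -> np.ndarray:
    fused = semantics.copy()
    for inst_id in np.unique(instances):
        if inst_id == 0:
            continue
        mask = instances == inst_id
        labels, counts = np.unique(semantics[mask], return_counts=True)
        fused[mask] = labels[np.argmax(counts)]  # majority class for this instance
    return fused
```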
# Evaluation
Since PhenoBench provides a hidden test set, you need to register for the corresponding [CodaLab challenge](https://codalab.lisn.upsaclay.fr/competitions/14153) and upload your results to run the evaluation.
# Training
Similarly, you can train a new model
```bash
conda activate pspa
python train.py --config ./config/config-phenobench.yaml --export <path/to/export>
```
- Before you start the training you need to specify the path to the dataset in the corresponding configuration file, e.g.:
```yaml
data:
  path: <path/to/phenobench>
```
where ```<path/to/phenobench>``` matches the path you specified to download the PhenoBench dataset (i.e., without the ```test``` directory at the very end).
In case you face ```CUDA out of memory``` errors, you may change the batch size for training via the configuration file (```config/config-phenobench.yaml```), e.g.:
```yaml
train:
  batch_size: 2
```
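If you want to check how much GPU memory is available before picking a batch size, here is an auxiliary check (not part of the repository):

```python
# Report free/total memory on the current CUDA device.
import torch

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()
    print(f"free: {free / 1e9:.1f} GB / total: {total / 1e9:.1f} GB")
```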
# License
This software is released under a Creative Commons license that allows personal and research use only. For a commercial license, please contact the authors. You can view a license summary [here](https://creativecommons.org/licenses/by-nc/4.0/).