# (ECCV 2024) Open-Vocabulary Camouflaged Object Segmentation




```bibtex
@inproceedings{OVCOS_ECCV2024,
title={Open-Vocabulary Camouflaged Object Segmentation},
author={Pang, Youwei and Zhao, Xiaoqi and Zuo, Jiaming and Zhang, Lihe and Lu, Huchuan},
booktitle={ECCV},
year={2024},
}
```

> [!note]
> The CAD dataset can be found at https://drive.google.com/file/d/1XhrC6NSekGOAAM7osLne3p46pj1tLFdI/view?usp=sharing
>
> Details of the proposed OVCamo dataset can be found in the document shipped with [our dataset](https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/ovcamo.zip).

## Prepare Dataset

![image](https://github.com/lartpang/OVCamo/assets/26847524/92f5f7e8-55a9-4d7e-bc41-264d255af658)

> [!note]
> The CAD subset can be found at the Google Drive link in the note above.

1. Prepare the training and testing splits: see the document in [our dataset](https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/ovcamo.zip) for details.
2. Set the training and testing splits in the YAML file `env/splitted_ovcamo.yaml` (a minimal sanity check for this file is sketched after this list):
   - `OVCamo_TR_IMAGE_DIR`: Image directory of the training set.
   - `OVCamo_TR_MASK_DIR`: Mask directory of the training set.
   - `OVCamo_TR_DEPTH_DIR`: Depth map directory of the training set. The depth maps of the training set, which are generated by us, can be downloaded from
   - `OVCamo_TE_IMAGE_DIR`: Image directory of the testing set.
   - `OVCamo_TE_MASK_DIR`: Mask directory of the testing set.
   - `OVCamo_CLASS_JSON_PATH`: Path of the JSON file `class_info.json` storing class information of the proposed OVCamo.
   - `OVCamo_SAMPLE_JSON_PATH`: Path of the JSON file `sample_info.json` storing sample information of the proposed OVCamo.
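
The exact layout of `env/splitted_ovcamo.yaml` is described in the dataset document above. As a rough aid, the snippet below is only a hedged sketch: the script name `check_ovcamo_config.py` and the assumption that the seven keys listed above sit at the top level of the YAML file are ours, not the repository's. It loads the config with PyYAML and reports whether each configured path exists before you start training.

```python
# check_ovcamo_config.py -- hypothetical helper, not part of the official repo.
# Assumption: the seven OVCamo_* entries are plain top-level keys in the YAML file.
from pathlib import Path

import yaml  # provided by the `pyyaml` package


REQUIRED_KEYS = (
    "OVCamo_TR_IMAGE_DIR",
    "OVCamo_TR_MASK_DIR",
    "OVCamo_TR_DEPTH_DIR",
    "OVCamo_TE_IMAGE_DIR",
    "OVCamo_TE_MASK_DIR",
    "OVCamo_CLASS_JSON_PATH",
    "OVCamo_SAMPLE_JSON_PATH",
)


def check_config(cfg_path: str = "env/splitted_ovcamo.yaml") -> None:
    """Report whether every required path in the split config exists on disk."""
    with open(cfg_path, "r", encoding="utf-8") as f:
        cfg = yaml.safe_load(f) or {}
    for key in REQUIRED_KEYS:
        value = cfg.get(key)
        if value is None:
            print(f"[missing key] {key}")
        elif not Path(value).exists():
            print(f"[path not found] {key}: {value}")
        else:
            print(f"[ok] {key}: {value}")


if __name__ == "__main__":
    check_config()
```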

## Training/Inference

1. Install dependencies: `pip install -r requirements.txt`.
   1. The versions of `torch` and `torchvision` are listed in the comments of `requirements.txt`.
2. Run the script to:
   1. train the model: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser`;
   2. run inference with the model: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from <path-to-checkpoint>`.

## Evaluate the Pretrained Model

1. Download [the pretrained model](https://github.com/lartpang/OVCamo/releases/download/model-v1.0/model.pth).
2. Run the script: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from model.pth`.

## Evaluate Our Results

1. Download [our results](https://github.com/lartpang/OVCamo/releases/download/model-v1.0/ovcoser-ovcamo-te.zip) and unzip it into `/ovcoser-ovcamo-te`.
2. Run the script: `python .\evaluate.py --pre /ovcoser-ovcamo-te`.

## LICENSE

- Code: [MIT LICENSE](./LICENSE)
- Dataset: OVCamo by Youwei Pang, Xiaoqi Zhao, Jiaming Zuo, Lihe Zhang, and Huchuan Lu is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.