(ECCV 2024) Open-Vocabulary Camouflaged Object Segmentation
- Host: GitHub
- URL: https://github.com/lartpang/ovcamo
- Owner: lartpang
- License: MIT
- Created: 2023-11-30T15:25:39.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-10-24T06:58:04.000Z (12 months ago)
- Last Synced: 2024-12-28T23:32:34.427Z (9 months ago)
- Topics: camouflage-detection, camouflage-images, camouflaged-object-detection, camouflaged-target-detection, open-vocabulary, open-vocabulary-detection, open-vocabulary-segmentation
- Language: Python
- Homepage: https://lartpang.github.io/docs/ovcamo.html
- Size: 46.9 KB
- Stars: 20
- Watchers: 3
- Forks: 1
- Open Issues: 3
Metadata Files:
- Readme: readme.md
- License: LICENSE
# (ECCV 2024) Open-Vocabulary Camouflaged Object Segmentation
```bibtex
@inproceedings{OVCOS_ECCV2024,
title={Open-Vocabulary Camouflaged Object Segmentation},
author={Pang, Youwei and Zhao, Xiaoqi and Zuo, Jiaming and Zhang, Lihe and Lu, Huchuan},
booktitle={ECCV},
year={2024},
}
```

> [!note]
> CAD dataset can be found at https://drive.google.com/file/d/1XhrC6NSekGOAAM7osLne3p46pj1tLFdI/view?usp=sharing
>
> Details of the proposed OVCamo dataset can be found in the document for [our dataset](https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/ovcamo.zip).

## Prepare Dataset
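If you prefer to script the download, a minimal standard-library sketch could look like this; the archive URL comes from the release link above, while the `ovcamo/` target directory is an arbitrary choice for illustration:

```python
import urllib.request
import zipfile
from pathlib import Path

URL = "https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/ovcamo.zip"
archive = Path("ovcamo.zip")

if not archive.exists():
    urllib.request.urlretrieve(URL, str(archive))  # fetch the dataset release asset

with zipfile.ZipFile(archive) as zf:
    zf.extractall("ovcamo")  # unpack; adjust the target directory as needed
```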

1. Prepare the training and testing splits: See the document in [our dataset](https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/ovcamo.zip) for details.
2. Set the training and testing splits in the YAML file `env/splitted_ovcamo.yaml` (a filled-in sketch follows this list):
- `OVCamo_TR_IMAGE_DIR`: Image directory of the training set.
- `OVCamo_TR_MASK_DIR`: Mask directory of the training set.
- `OVCamo_TR_DEPTH_DIR`: Depth map directory of the training set. The depth maps, which were generated by us, can be downloaded from
- `OVCamo_TE_IMAGE_DIR`: Image directory of the testing set.
- `OVCamo_TE_MASK_DIR`: Mask directory of the testing set.
- `OVCamo_CLASS_JSON_PATH`: Path of the JSON file `class_info.json` storing class information of the proposed OVCamo.
- `OVCamo_SAMPLE_JSON_PATH`: Path of the JSON file `sample_info.json` storing sample information of the proposed OVCamo.
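For reference, a filled-in `env/splitted_ovcamo.yaml` might look like the sketch below; the directory layout is an assumption for illustration, not one the repository prescribes, so point each key at wherever you unpacked the dataset:

```yaml
# Hypothetical layout; adjust every path to your local copy of OVCamo.
OVCamo_TR_IMAGE_DIR: /data/ovcamo/train/image
OVCamo_TR_MASK_DIR: /data/ovcamo/train/mask
OVCamo_TR_DEPTH_DIR: /data/ovcamo/train/depth
OVCamo_TE_IMAGE_DIR: /data/ovcamo/test/image
OVCamo_TE_MASK_DIR: /data/ovcamo/test/mask
OVCamo_CLASS_JSON_PATH: /data/ovcamo/class_info.json
OVCamo_SAMPLE_JSON_PATH: /data/ovcamo/sample_info.json
```

## Training/Inference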
1. Install dependencies: `pip install -r requirements.txt`.
    1. The versions of `torch` and `torchvision` are listed in the comments of `requirements.txt`.
2. Run the script to:
    1. train the model: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser`;
    2. run inference with the model: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from <checkpoint-path>`.
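The `--load-from` flag expects a checkpoint produced by training. As a minimal sketch of the usual PyTorch state-dict round trip (an assumption about the checkpoint format, not a description of how `main.py` actually loads it):

```python
import torch
from torch import nn

# Toy stand-in network; the real OVCoser model is defined in this repository.
model = nn.Sequential(nn.Conv2d(3, 1, kernel_size=3, padding=1))

torch.save(model.state_dict(), "model.pth")          # what training typically writes
state = torch.load("model.pth", map_location="cpu")  # load onto CPU first
model.load_state_dict(state)
model.eval()  # disable dropout/batch-norm updates before inference
```

## Evaluate the Pretrained Model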
1. Download [the pretrained model](https://github.com/lartpang/OVCamo/releases/download/model-v1.0/model.pth).
2. Run the script: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from model.pth`.

## Evaluate Our Results
1. Download [our results](https://github.com/lartpang/OVCamo/releases/download/model-v1.0/ovcoser-ovcamo-te.zip) and unzip it into `/ovcoser-ovcamo-te`.
2. Run the script: `python .\evaluate.py --pre /ovcoser-ovcamo-te`.
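As a rough illustration of what scoring saved predictions against ground-truth masks involves (the repository's `evaluate.py` computes its own metric suite; the mean IoU below and the `*.png` and directory names are assumptions for the sketch):

```python
import numpy as np
from PIL import Image
from pathlib import Path

def binary_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two boolean masks; empty-vs-empty counts as a perfect match."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

pred_dir = Path("ovcoser-ovcamo-te")   # unzipped prediction maps (assumed layout)
gt_dir = Path("ovcamo/test/mask")      # ground-truth masks (assumed layout)

scores = []
for pred_path in sorted(pred_dir.glob("*.png")):
    pred = np.array(Image.open(pred_path).convert("L")) > 127
    gt = np.array(Image.open(gt_dir / pred_path.name).convert("L")) > 127
    scores.append(binary_iou(pred, gt))

print(f"mean IoU over {len(scores)} images: {np.mean(scores):.4f}")
```

## LICENSE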
- Code: [MIT LICENSE](./LICENSE)
- Dataset: OVCamo by Youwei Pang, Xiaoqi Zhao, Jiaming Zuo, Lihe Zhang, and Huchuan Lu is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).