https://github.com/Karel911/TRACER
TRACER: Extreme Attention Guided Salient Object Tracing Network (AAAI 2022) implementation in PyTorch
- Host: GitHub
- URL: https://github.com/Karel911/TRACER
- Owner: Karel911
- License: apache-2.0
- Created: 2021-12-15T07:56:23.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2024-09-11T00:18:49.000Z (10 months ago)
- Last Synced: 2024-11-02T13:34:19.554Z (8 months ago)
- Topics: aaai-2022, aaai2022, attention, attention-mechanism, background-removal, image-segmentation, pytorch, pytorch-implementation, salient-object-detection
- Language: Python
- Homepage:
- Size: 9.63 MB
- Stars: 195
- Watchers: 5
- Forks: 41
- Open Issues: 14
Metadata Files:
- Readme: README.md
- License: LICENSE
# TRACER: Extreme Attention Guided Salient Object Tracing Network
This paper was accepted at AAAI 2022 SA poster session. [[pdf]](https://arxiv.org/abs/2112.07380)
[SOTA on DUTS-TE](https://paperswithcode.com/sota/salient-object-detection-on-duts-te?p=tracer-extreme-attention-guided-salient)
[SOTA on DUT-OMRON](https://paperswithcode.com/sota/salient-object-detection-on-dut-omron?p=tracer-extreme-attention-guided-salient)
[SOTA on HKU-IS](https://paperswithcode.com/sota/salient-object-detection-on-hku-is?p=tracer-extreme-attention-guided-salient)
[SOTA on ECSSD](https://paperswithcode.com/sota/salient-object-detection-on-ecssd?p=tracer-extreme-attention-guided-salient)
[SOTA on PASCAL-S](https://paperswithcode.com/sota/salient-object-detection-on-pascal-s?p=tracer-extreme-attention-guided-salient)
## Updates
[09/06/2022] A demo has been released on Colab. [Try it now!](https://colab.research.google.com/drive/1ZGbxozNHsvnOiywYZARGXr_CvvE6jIFh?usp=sharing/)

[06/17/2022] Fast inference mode now produces the salient object result along with the mask.
We have improved the quality of the salient object results as follows.
You can obtain a cleaner salient object by tuning the [threshold](https://github.com/Karel911/TRACER/blob/main/inference.py/#L71).

We will release TRACER initialization with pre-trained TE-x weights in a future version.

[04/20/2022] We updated the pipeline for custom dataset inference without measurement.
* Run the **main.py** script.

```
TRACER
├── data
│   ├── custom_dataset
│   │   ├── sample_image1.png
│   │   ├── sample_image2.png
.
.
.
```

```shell
# For testing TRACER with a pre-trained model (e.g.)
python main.py inference --dataset custom_dataset/ --arch 7 --img_size 640 --save_map True
```

## Datasets
All datasets are publicly available.
* Download the DUTS-TR and DUTS-TE from [Here](http://saliencydetection.net/duts/#org3aad434)
* Download the DUT-OMRON from [Here](http://saliencydetection.net/dut-omron/#org96c3bab)
* Download the HKU-IS from [Here](https://sites.google.com/site/ligb86/hkuis)
* Download the ECSSD from [Here](https://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/dataset.html)
* Download the PASCAL-S from [Here](http://cbs.ic.gatech.edu/salobj/)
* Download the edge GT from [Here](https://drive.google.com/file/d/1FX2RVeMxPgmSALQUSKhdiNrzf_HxA1o9/view?usp=sharing).

## Data structure
```
TRACER
├── data
│   ├── DUTS
│   │   ├── Train
│   │   │   ├── images
│   │   │   ├── masks
│   │   │   ├── edges
│   │   ├── Test
│   │   │   ├── images
│   │   │   ├── masks
│   ├── DUT-O
│   │   ├── Test
│   │   │   ├── images
│   │   │   ├── masks
│   ├── HKU-IS
│   │   ├── Test
│   │   │   ├── images
│   │   │   ├── masks
.
.
.
```

## Requirements
* Python >= 3.7.x
* Pytorch >= 1.8.0
* albumentations >= 0.5.1
* tqdm >=4.54.0
* scikit-learn >= 0.23.2

## Run
* Run the **main.py** script.

```shell
# For training TRACER-TE0 (e.g.)
python main.py train --arch 0 --img_size 320

# For testing TRACER with a pre-trained model (e.g.)
python main.py test --exp_num 0 --arch 0 --img_size 320
```
* Pre-trained models of TRACER are available [here](https://github.com/Karel911/TRACER/releases/tag/v1.0).
* Rename the downloaded weights to 'best_model.pth' and place them at 'results/DUTS/TEx_0/best_model.pth'
(here, x denotes the model scale, e.g., 0 to 7).
* Input image sizes for each model are listed below.

## Configurations
* `--arch`: EfficientNet backbone scale (TE0 to TE7).
* `--frequency_radius`: High-pass filter radius in the MEAM.
* `--gamma`: Channel confidence ratio γ in the UAM.
* `--denoise`: Denoising ratio d in the OAM.
* `--RFB_aggregated_channel`: Number of channels in the receptive field blocks.
* `--multi_gpu`: Multi-GPU learning option.
* `--img_size`: Input image resolution.
* `--save_map`: Option to save the predicted mask.
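The `--frequency_radius` option controls a frequency-domain high-pass filter. A minimal NumPy sketch of such a filter (illustrative only, not the repository's exact MEAM implementation):

```python
import numpy as np

def high_pass_filter(image: np.ndarray, radius: int) -> np.ndarray:
    """Suppress frequencies within `radius` of the spectrum centre.

    A high-pass filter keeps edges and fine detail while removing
    smooth, low-frequency regions; `radius` plays the role of the
    --frequency_radius option.
    """
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # centre the zero frequency
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    spectrum[dist <= radius] = 0                    # zero out low frequencies
    return np.fft.ifft2(np.fft.ifftshift(spectrum)).real

# A constant image carries only the zero-frequency (DC) component,
# so high-pass filtering it leaves (approximately) nothing.
flat = np.ones((8, 8))
filtered = high_pass_filter(flat, radius=2)
```

A larger radius removes more low-frequency content, emphasising object boundaries at the cost of texture.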
| Model                  | Img size |
|------------------------|----------|
| TRACER-Efficient-0 ~ 1 | 320      |
| TRACER-Efficient-2     | 352      |
| TRACER-Efficient-3     | 384      |
| TRACER-Efficient-4     | 448      |
| TRACER-Efficient-5     | 512      |
| TRACER-Efficient-6     | 576      |
| TRACER-Efficient-7     | 640      |
## Citation
```
@article{lee2021tracer,
  title={TRACER: Extreme Attention Guided Salient Object Tracing Network},
  author={Lee, Min Seok and Shin, WooSeok and Han, Sung Won},
  journal={arXiv preprint arXiv:2112.07380},
  year={2021}
}
```