https://github.com/cherubicxn/hawp
Holistically-Attracted Wireframe Parsing [TPAMI'23] & [CVPR' 20]
- Host: GitHub
- URL: https://github.com/cherubicxn/hawp
- Owner: cherubicXN
- License: mit
- Created: 2020-03-16T15:51:02.000Z (about 5 years ago)
- Default Branch: main
- Last Pushed: 2024-02-26T03:46:40.000Z (about 1 year ago)
- Last Synced: 2024-09-26T03:54:45.067Z (7 months ago)
- Topics: cvpr2020, deep, lsd, wireframe
- Language: Python
- Homepage:
- Size: 6.69 MB
- Stars: 292
- Watchers: 10
- Forks: 52
- Open Issues: 19
Metadata Files:
- Readme: readme.md
- License: LICENSE
Awesome Lists containing this project
- awesome-edge-detection-papers - Wireframe - [Holistically-Attracted Wireframe Parsing](https://arxiv.org/pdf/2003.01663) | CVPR 2020 | Edges come from the annotated wireframe. | (0. Edge detection related dataset)
README
# Holistically-Attracted Wireframe Parsing: From Supervised Learning to Self-Supervised Learning
This is the official implementation of our [paper](https://arxiv.org/abs/2210.12971).
[**News**] Our journal version has been accepted in PAMI!
[**News**] The upgraded HAWPv2 and HAWPv3 are available now!
## Highlights
- **HAT Fields**: A General and Robust Representation of Line Segments for Wireframe Parsing
- **HAWPv2**: A state-of-the-art fully-supervised wireframe parser. Please check out [HAWPv2.md](docs/HAWPv2.md) for details.
- **HAWPv3**: A state-of-the-art self-supervised wireframe parser. Please check out [HAWPv3.md](docs/HAWPv3.md) for details and [HAWPv3.train.md](docs/HAWPv3.train.md) for the training recipe.
- HAWPv3 serves as a strong wireframe parser for out-of-distribution images.
- **We provide a running example on the images of the [DTU dataset](https://roboimagedata.compute.dtu.dk/?page_id=36) (scan24) below.**
```bash
python -m hawp.ssl.predict --ckpt checkpoints/hawpv3-imagenet-03a84.pth \
--threshold 0.05 \
--img ~/datasets/DTU/scan24/image/*.png \
--saveto docs/figures/dtu-24 --ext png
```
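The same entry point should also work on your own out-of-distribution images. The sketch below is a hedged variant of the command above: the input and output paths are placeholders, and only the flags already shown are used.

```bash
# Run HAWPv3 on your own images (paths are placeholders; flags as documented above).
python -m hawp.ssl.predict --ckpt checkpoints/hawpv3-imagenet-03a84.pth \
--threshold 0.05 \
--img path/to/your/images/*.jpg \
--saveto outputs/my-scene --ext png
```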
## Data Downloading

### Training and Testing datasets for HAWPv2
- The training and testing data (including the [Wireframe dataset](https://github.com/huangkuns/wireframe) and the [YorkUrban dataset](http://www.elderlab.yorku.ca/resources/york-urban-line-segment-database-information/)) for **HAWPv2** can be downloaded via [Google Drive](https://drive.google.com/file/d/134L-u9pgGtnzw0auPv8ykHqMjjZ2claO/view?usp=sharing). *Many thanks to the authors of these two excellent datasets!*
- You can also use [gdown](https://pypi.org/project/gdown/) to download the data from the terminal:
```bash
gdown 134L-u9pgGtnzw0auPv8ykHqMjjZ2claO
unzip data.zip
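# (Optional) peek at the archive contents to confirm the download completed;
# the internal layout of data.zip is not assumed here.
unzip -l data.zip | head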
```

## Installation
### Anaconda
- Clone the code repo: ``git clone https://github.com/cherubicXN/hawp.git``.
- Install ninja-build by ``sudo apt install ninja-build``.
- Create a conda environment and install HAWP by
```bash
conda create -n hawp python==3.9
conda activate hawp
cd hawp              # assumption: run the editable install from the cloned repository root
pip install -e .
```
- Run the following commands to install the dependencies of HAWP
```bash
# Install PyTorch; make sure the build matches the CUDA version on your machine
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
# Install other dependencies
pip install -r requirement.txt
```
- Verify the installation.
```bash
python -c "import torch; print(torch.cuda.is_available())" # Check if the installed pytorch supports CUDA.
```
- Download the officially-trained checkpoints of both **HAWPv2** and **HAWPv3**.
```bash
sh downloads.sh
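# The HAWPv3 example above expects checkpoints/hawpv3-imagenet-03a84.pth, so a quick
# sanity check (assuming the script saves into checkpoints/) is:
ls checkpoints/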
```

### Docker
We also provide a [Dockerfile](docker/Dockerfile). You can build the docker image by running the following command.
```bash
sudo docker build - < Dockerfile --tag hawp:latest
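# Once built, one way to use the image is an interactive container with GPU access and
# the repository mounted; the /workspace mount point below is an assumption.
sudo docker run --gpus all -it --rm -v "$(pwd)":/workspace hawp:latest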
```

## Citations
If you find our work useful in your research, please consider citing:
```
@article{HAWP-journal,
title = "Holistically-Attracted Wireframe Parsing: From Supervised to Self-Supervised Learning",
author = "Nan Xue and Tianfu Wu and Song Bai and Fu-Dong Wang and Gui-Song Xia and Liangpei Zhang and Philip H.S. Torr
journal = "IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI)",
year = {2023}
}
```
and
```
@inproceedings{HAWP,
title = "Holistically-Attracted Wireframe Parsing",
author = "Nan Xue and Tianfu Wu and Song Bai and Fu-Dong Wang and Gui-Song Xia and Liangpei Zhang and Philip H.S. Torr
",
booktitle = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
year = {2020},
}
```

## Acknowledgment
We acknowledge the effort from the authors of the Wireframe dataset and the YorkUrban dataset. These datasets make accurate line segment detection and wireframe parsing possible. We also thank [Rémi Pautrat](https://rpautrat.github.io/) for helpful discussions.

## TODO
- Documentation
- Google Colab Notebook