{"id":19544174,"url":"https://github.com/sunset1995/hohonet","last_synced_at":"2025-06-17T03:03:53.411Z","repository":{"id":42199462,"uuid":"344435876","full_name":"sunset1995/HoHoNet","owner":"sunset1995","description":"\"HoHoNet: 360 Indoor Holistic Understanding with Latent Horizontal Features\" official pytorch implementation.","archived":false,"fork":false,"pushed_at":"2023-02-05T06:50:50.000Z","size":9820,"stargazers_count":112,"open_issues_count":13,"forks_count":24,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-04-26T17:47:35.579Z","etag":null,"topics":["360-photo","computer-vision","cvpr2021","depth-estimation","hohonet","room-layout","semantic-segmentation"],"latest_commit_sha":null,"homepage":"https://sunset1995.github.io/HoHoNet/","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/sunset1995.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-03-04T10:30:33.000Z","updated_at":"2025-04-01T01:13:05.000Z","dependencies_parsed_at":"2024-11-11T03:35:28.309Z","dependency_job_id":null,"html_url":"https://github.com/sunset1995/HoHoNet","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/sunset1995/HoHoNet","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sunset1995%2FHoHoNet","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sunset1995%2FHoHoNet/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sunset1995%2FHoHoNet/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sunset1995%2FHoHoNet/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/sunset1995","download_url":"https://codeload.github.com/sunset1995/HoHoNet/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sunset1995%2FHoHoNet/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":260281566,"owners_count":22985626,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["360-photo","computer-vision","cvpr2021","depth-estimation","hohonet","room-layout","semantic-segmentation"],"created_at":"2024-11-11T03:25:11.509Z","updated_at":"2025-06-17T03:03:53.384Z","avatar_url":"https://github.com/sunset1995.png","language":"Jupyter Notebook","readme":"# HoHoNet\n\nCode for our paper in CVPR 2021: **HoHoNet: 360 Indoor Holistic Understanding with Latent Horizontal Features** ([paper](https://arxiv.org/abs/2011.11498), [video](https://www.youtube.com/watch?v=xXtRaRKmMpA)).\n\n![teaser](./assets/repo_teaser.jpg)\n\n#### News\n- **April 3, 2021**: Release inference code, jupyter notebook and visualization tools. Guide for reproduction is also finished.\n- **March 4, 2021**: A new backbone **[HarDNet](https://github.com/PingoLH/Pytorch-HarDNet)** is included, which shows better speed and depth accuracy.\n\n\n## Pretrained weight\nLinks to trained weights `ckpt/`: [download on Google drive](https://drive.google.com/drive/folders/1raT3vRXnQXRAQuYq36dE-93xFc_hgkTQ?usp=sharing) or [download on Dropbox](https://www.dropbox.com/sh/b014nop5jrehpoq/AACWNTMMHEAbaKOO1drqGio4a?dl=0).\n\n\n## Inference\nIn below, we use an out-of-training-distribution 360 image from PanoContext as an example.\n\n### Jupyter notebook\nSee [infer_depth.ipynb](infer_depth.ipynb), [infer_layout.ipynb](infer_layout.ipynb), and [infer_sem.ipynb](infer_sem.ipynb) for interactive demo and visualization.\n\n### Batch inference\nRun `infer_depth.py`/`infer_layout.py` to inference depth/layout.\nUse `--cfg` and `--pth` to specify the path to config file and pretrained weight.\nSpecify input path with `--inp`. Glob pattern for a batch of files is avaiable.\nThe results are stored into `--out` directory with the same filename with extention set ot `.depth.png` and `.layout.txt`.\n\nExample for depth:\n```\npython infer_depth.py --cfg config/mp3d_depth/HOHO_depth_dct_efficienthc_TransEn1_hardnet.yaml --pth ckpt/mp3d_depth_HOHO_depth_dct_efficienthc_TransEn1_hardnet/ep60.pth --out assets/ --inp assets/pano_asmasuxybohhcj.png\n```\n\nExample for layout:\n```\npython infer_layout.py --cfg config/mp3d_layout/HOHO_layout_aug_efficienthc_Transen1_resnet34.yaml --pth ckpt/mp3d_layout_HOHO_layout_aug_efficienthc_Transen1_resnet34/ep300.pth --out assets/ --inp assets/pano_asmasuxybohhcj.png\n```\n\n### Visualization tools\nTo visualize layout as 3D mesh, run:\n```\npython vis_layout.py --img assets/pano_asmasuxybohhcj.png --layout assets/pano_asmasuxybohhcj.layout.txt\n```\nRendering options: `--show_ceiling`, `--ignore_floor`, `--ignore_wall`, `--ignore_wireframe` are available.\nSet `--out` to export the mesh to `ply` file.\nSet `--no_vis` to disable the visualization.\n\u003cp align=\"center\"\u003e\n    \u003cimg height=\"300\" src=\"./assets/snapshot_layout.jpg\"\u003e\n\u003c/p\u003e\n\n\nTo visualize depth as point cloud, run:\n```\npython vis_depth.py --img assets/pano_asmasuxybohhcj.png --depth assets/pano_asmasuxybohhcj.depth.png\n```\nRendering options: `--crop_ratio`, `--crop_z_above`.\n\u003cp align=\"center\"\u003e\n    \u003cimg height=\"300\" src=\"./assets/snapshot_depth.jpg\"\u003e\n\u003c/p\u003e\n\n\n\n## Reproduction\nPlease see [README_reproduction.md](README_reproduction.md) for the guide to:\n1. prepare the datasets for each task in our paper\n2. reproduce the training for each task\n3. reproduce the numerical results in our paper with the provided pretrained weights\n\n\n## Citation\n```\n@inproceedings{SunSC21,\n  author    = {Cheng Sun and\n               Min Sun and\n               Hwann{-}Tzong Chen},\n  title     = {HoHoNet: 360 Indoor Holistic Understanding With Latent Horizontal\n               Features},\n  booktitle = {CVPR},\n  year      = {2021},\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsunset1995%2Fhohonet","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsunset1995%2Fhohonet","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsunset1995%2Fhohonet/lists"}