{"id":25440802,"url":"https://github.com/zcemycl/tf2deepfloorplan","last_synced_at":"2025-04-09T16:17:10.861Z","repository":{"id":37831047,"uuid":"308479432","full_name":"zcemycl/TF2DeepFloorplan","owner":"zcemycl","description":"TF2 Deep FloorPlan Recognition using a Multi-task Network with Room-boundary-Guided Attention. Enable tensorboard, quantization, flask, tflite, docker, github actions and google colab.","archived":false,"fork":false,"pushed_at":"2023-07-17T22:54:31.000Z","size":8310,"stargazers_count":222,"open_issues_count":5,"forks_count":72,"subscribers_count":7,"default_branch":"main","last_synced_at":"2025-04-09T16:16:59.133Z","etag":null,"topics":["attention-network","curl","deep-learning","deep-neural-networks","docker","flask","github-actions","github-release","google-colab","image-processing","image-recognition","jupyter-notebook","keras-tensorflow","pygame","pypi-package","python3","quantization","tensorboard","tensorflow2","tflite"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/zcemycl.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-10-30T00:03:03.000Z","updated_at":"2025-04-08T09:22:31.000Z","dependencies_parsed_at":"2022-06-22T19:42:44.626Z","dependency_job_id":null,"html_url":"https://github.com/zcemycl/TF2DeepFloorplan","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zcemycl%2FTF2DeepFloorplan","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zcemycl%2FTF2DeepFloorplan/tags","releases
_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zcemycl%2FTF2DeepFloorplan/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zcemycl%2FTF2DeepFloorplan/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/zcemycl","download_url":"https://codeload.github.com/zcemycl/TF2DeepFloorplan/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248065285,"owners_count":21041872,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["attention-network","curl","deep-learning","deep-neural-networks","docker","flask","github-actions","github-release","google-colab","image-processing","image-recognition","jupyter-notebook","keras-tensorflow","pygame","pypi-package","python3","quantization","tensorboard","tensorflow2","tflite"],"created_at":"2025-02-17T12:18:12.357Z","updated_at":"2025-04-09T16:17:10.830Z","avatar_url":"https://github.com/zcemycl.png","language":"Python","readme":"# TF2DeepFloorplan [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) [\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" \u003e](https://colab.research.google.com/github/zcemycl/TF2DeepFloorplan/blob/master/deepfloorplan.ipynb) ![example workflow](https://github.com/zcemycl/TF2DeepFloorplan/actions/workflows/main.yml/badge.svg) [![Coverage 
Status](https://coveralls.io/repos/github/zcemycl/TF2DeepFloorplan/badge.svg?branch=main)](https://coveralls.io/github/zcemycl/TF2DeepFloorplan?branch=main)[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2Fzcemycl%2FTF2DeepFloorplan\u0026count_bg=%2379C83D\u0026title_bg=%23555555\u0026icon=\u0026icon_color=%23E7E7E7\u0026title=hits\u0026edge_flat=false)](https://hits.seeyoufarm.com)\nThis repo contains a basic procedure to train and deploy the DNN model suggested by the paper ['Deep Floor Plan Recognition using a Multi-task Network with Room-boundary-Guided Attention'](https://arxiv.org/abs/1908.11025). It rewrites the original code from [zlzeng/DeepFloorplan](https://github.com/zlzeng/DeepFloorplan) for newer versions of TensorFlow and Python.\n\u003cbr\u003e\nNetwork Architectures from the paper, \u003cbr\u003e\n\u003cimg src=\"resources/dfpmodel.png\" width=\"50%\"\u003e\u003cimg src=\"resources/features.png\" width=\"50%\"\u003e\n\n\n### Additional feature (pygame)\n![TF2DeepFloorplan_3dviz](resources/raycast.gif)\n\n## Requirements\nDepending on the application, use one of the following installation methods.\n\n|OS|Hardware|Application|Command|\n|---|---|---|---|\n|Ubuntu|CPU|Model Development|`pip install -e .[tfcpu,dev,testing,linting]`|\n|Ubuntu|GPU|Model Development|`pip install -e .[tfgpu,dev,testing,linting]`|\n|MacOS|M1 Chip|Model Development|`pip install -e .[tfmacm1,dev,testing,linting]`|\n|Ubuntu|GPU|Model Deployment API|`pip install -e .[tfgpu,api]`|\n|Ubuntu|GPU|Everything|`pip install -e .[tfgpu,api,dev,testing,linting,game]`|\n|Agnostic|...|Docker|(to be updated)|\n|Ubuntu|GPU|Notebook|`pip install -e .[tfgpu,jupyter]`|\n|Ubuntu|GPU|Game|`pip install -e .[tfgpu,game]`|\n\n## How to run?\n1. 
Install packages.\n```\n# Option 1\npython -m venv venv\nsource venv/bin/activate\npip install --upgrade pip setuptools wheel\n# Option 2 (Preferred)\nconda create -n venv python=3.8 cudatoolkit=10.1 cudnn=7.6.5\nconda activate venv\n# common install\npip install -e .[tfgpu,api,dev,testing,linting]\n```\n2. According to the original repo, please download the r3d dataset and transform it into tfrecords `r3d.tfrecords`. Friendly reminder: another dataset, r2v, was used to train the original repo's model; it is not used here because of limited access. Please see the link here [https://github.com/zlzeng/DeepFloorplan/issues/17](https://github.com/zlzeng/DeepFloorplan/issues/17).\n3. Run `train.py` to start training; model checkpoints are stored in `log/store/G` and weights in `model/store`,\n```\npython -m dfp.train [--batchsize 2][--lr 1e-4][--epochs 1000]\n[--logdir 'log/store'][--modeldir 'model/store']\n[--save-tensor-interval 10][--save-model-interval 20]\n[--tfmodel 'subclass'/'func'][--feature-channels 256 128 64 32]\n[--backbone 'vgg16'/'mobilenetv1'/'mobilenetv2'/'resnet50']\n[--feature-names block1_pool block2_pool block3_pool block4_pool block5_pool]\n```\n- for example,\n```\npython -m dfp.train --batchsize=4 --lr=5e-4 --epochs=100\n--logdir=log/store --modeldir=model/store\n```\n4. Run TensorBoard to view the progress of loss and images via,\n```\ntensorboard --logdir=log/store\n```\n5. Convert the model to tflite via `convert2tflite.py`.\n```\npython -m dfp.convert2tflite [--modeldir model/store]\n[--tflitedir model/store/model.tflite]\n[--loadmethod 'log'/'none'/'pb']\n[--quantize][--tfmodel 'subclass'/'func']\n[--feature-channels 256 128 64 32]\n[--backbone 'vgg16'/'mobilenetv1'/'mobilenetv2'/'resnet50']\n[--feature-names block1_pool block2_pool block3_pool block4_pool block5_pool]\n```\n6. 
Download and unzip the models from Google Drive,\n```\ngdown https://drive.google.com/uc?id=1czUSFvk6Z49H-zRikTc67g2HUUz4imON # log files 112.5mb\nunzip log.zip\ngdown https://drive.google.com/uc?id=1tuqUPbiZnuubPFHMQqCo1_kFNKq4hU8i # pb files 107.3mb\nunzip model.zip\ngdown https://drive.google.com/uc?id=1B-Fw-zgufEqiLm00ec2WCMUo5E6RY2eO # tflite file 37.1mb\nunzip tflite.zip\n```\n7. Deploy the model via `deploy.py`; note that the `--loadmethod` argument must match the `--weight` input.\n```\npython -m dfp.deploy [--image 'path/to/image']\n[--postprocess][--colorize][--save 'path/to/output_image']\n[--loadmethod 'log'/'pb'/'tflite']\n[--weight 'log/store/G'/'model/store'/'model/store/model.tflite']\n[--tfmodel 'subclass'/'func']\n[--feature-channels 256 128 64 32]\n[--backbone 'vgg16'/'mobilenetv1'/'mobilenetv2'/'resnet50']\n[--feature-names block1_pool block2_pool block3_pool block4_pool block5_pool]\n```\n- for example,\n```\npython -m dfp.deploy --image floorplan.jpg --weight log/store/G\n--postprocess --colorize --save output.jpg --loadmethod log\n```\n8. Play with pygame.\n```\npython -m dfp.game\n```\n\n## Docker for API\n1. Build and run the docker container. (Please train your own weights; the Google Drive download currently does not work due to its update.)\n```\ndocker build -t tf_docker -f Dockerfile .\ndocker run -d -p 1111:1111 tf_docker:latest\ndocker run --gpus all -d -p 1111:1111 tf_docker:latest\n\n# special for hot reloading flask\ndocker run -v ${PWD}/src/dfp/app.py:/src/dfp/app.py -v ${PWD}/src/dfp/deploy.py:/src/dfp/deploy.py -d -p 1111:1111 tf_docker:latest\ndocker logs `docker ps | grep \"tf_docker:latest\"  | awk '{ print $1 }'` --follow\n```\n2. 
Call the API for output.\n```\ncurl -H \"Content-Type: application/json\" --request POST  \\\n  -d '{\"uri\":\"https://cdn.cnn.com/cnnnext/dam/assets/200212132008-04-london-rental-market-intl-exlarge-169.jpg\",\"colorize\":1,\"postprocess\":0}' \\\n  http://0.0.0.0:1111/uri --output /tmp/tmp.jpg\n\n\ncurl --request POST -F \"file=@resources/30939153.jpg\" \\\n  -F \"postprocess=0\" -F \"colorize=0\" http://0.0.0.0:1111/upload --output out.jpg\n```\n3. If you run `app.py` without docker, the second curl for file upload will not work.\n\n\n## Google Colab\n1. Click on [\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" \u003e](https://colab.research.google.com/github/zcemycl/TF2DeepFloorplan/blob/master/deepfloorplan.ipynb) and authorize access.\n2. Run the first 2 code cells for installation.\n3. Go to the Runtime tab and click Restart runtime. This ensures the installed packages are enabled.\n4. Run the rest of the notebook.\n\n## How to Contribute?\n1. Git clone this repo.\n2. Install the required packages and pre-commit hooks.\n```\npip install -e .[tfgpu,api,dev,testing,linting]\npre-commit install\npre-commit run\npre-commit run --all-files\n# pre-commit uninstall/ pip uninstall pre-commit\n```\n3. Create issues. The maintainer will decide whether it requires a branch. If so,\n```\ngit fetch origin\ngit checkout xx-features\n```\n4. Stage your files, commit, and push to the branch.\n5. After the pull request is merged, the issue is closed and the branch is deleted. 
You can,\n```\ngit checkout main\ngit pull\ngit remote prune origin\ngit branch -d xx-features\n```\n\n\n## Results\n- From `train.py` and `tensorboard`.\n\n|Compare Ground Truth (top)\u003cbr\u003e against Outputs (bottom)|Total Loss|\n|:-------------------------:|:-------------------------:|\n|\u003cimg src=\"resources/epoch60.png\" width=\"400\"\u003e|\u003cimg src=\"resources/Loss.png\" width=\"400\"\u003e|\n|Boundary Loss|Room Loss|\n|\u003cimg src=\"resources/LossB.png\" width=\"400\"\u003e|\u003cimg src=\"resources/LossR.png\" width=\"400\"\u003e|\n\n- From `deploy.py` and `utils/legend.py`.\n\n|Input|Legend|Output|\n|:-------------------------:|:-------------------------:|:-------------------------:|\n|\u003cimg src=\"resources/30939153.jpg\" width=\"250\"\u003e|\u003cimg src=\"resources/legend.png\" width=\"180\"\u003e|\u003cimg src=\"resources/output.jpg\" width=\"250\"\u003e|\n|`--colorize`|`--postprocess`|`--colorize`\u003cbr\u003e`--postprocess`|\n|\u003cimg src=\"resources/color.jpg\" width=\"250\"\u003e|\u003cimg src=\"resources/post.jpg\" width=\"250\"\u003e|\u003cimg src=\"resources/postcolor.jpg\" width=\"250\"\u003e|\n\n## Optimization\n- Backbone Comparison in Size\n\n|Backbone|log|pb|tflite|toml|\n|---|---|---|---|---|\n|VGG16|130.5Mb|119Mb|45.3Mb|[link](docs/experiments/vgg16/exp1)|\n|MobileNetV1|102.1Mb|86.7Mb|50.2Mb|[link](docs/experiments/mobilenetv1/exp1)|\n|MobileNetV2|129.3Mb|94.4Mb|57.9Mb|[link](docs/experiments/mobilenetv2/exp1)|\n|ResNet50|214Mb|216Mb|107.2Mb|[link](docs/experiments/resnet50/exp1)|\n\n- Feature Selection Comparison in Size\n\n|Backbone|Feature Names|log|pb|tflite|toml|\n|---|---|---|---|---|---|\n|MobileNetV1|\"conv_pw_1_relu\", \u003cbr\u003e\"conv_pw_3_relu\", \u003cbr\u003e\"conv_pw_5_relu\", \u003cbr\u003e\"conv_pw_7_relu\", \u003cbr\u003e\"conv_pw_13_relu\"|102.1Mb|86.7Mb|50.2Mb|[link](docs/experiments/mobilenetv1/exp1)|\n|MobileNetV1|\"conv_pw_1_relu\", \u003cbr\u003e\"conv_pw_3_relu\", 
\u003cbr\u003e\"conv_pw_5_relu\", \u003cbr\u003e\"conv_pw_7_relu\", \u003cbr\u003e\"conv_pw_12_relu\"|84.5Mb|82.3Mb|49.2Mb|[link](docs/experiments/mobilenetv1/exp2)|\n\n- Feature Channels Comparison in Size\n\n|Backbone|Channels|log|pb|tflite|toml|\n|---|---|---|---|---|---|\n|VGG16|[256,128,64,32]|130.5Mb|119Mb|45.3Mb|[link](docs/experiments/vgg16/exp1)|\n|VGG16|[128,64,32,16]|82.4Mb|81.6Mb|27.3Mb||\n|VGG16|[32,32,32,32]|73.2Mb|67.5Mb|18.1Mb|[link](docs/experiments/vgg16/exp2)|\n\n- tfmot\n  - Pruning (not working)\n  - Clustering (not working)\n  - Post training Quantization (work the best)\n  - Training aware Quantization (not supported by the version)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzcemycl%2Ftf2deepfloorplan","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fzcemycl%2Ftf2deepfloorplan","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzcemycl%2Ftf2deepfloorplan/lists"}