{"id":14036941,"url":"https://github.com/art-programmer/FloorNet","last_synced_at":"2025-07-27T04:33:31.123Z","repository":{"id":38361388,"uuid":"127219274","full_name":"art-programmer/FloorNet","owner":"art-programmer","description":null,"archived":false,"fork":false,"pushed_at":"2019-11-15T01:49:02.000Z","size":23863,"stargazers_count":202,"open_issues_count":15,"forks_count":51,"subscribers_count":13,"default_branch":"master","last_synced_at":"2024-04-27T23:32:26.719Z","etag":null,"topics":["eccv2018","floornet","floorplan","tensorflow"],"latest_commit_sha":null,"homepage":"http://art-programmer.github.io/floornet.html","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/art-programmer.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-03-29T01:27:52.000Z","updated_at":"2024-04-02T21:40:47.000Z","dependencies_parsed_at":"2022-08-09T03:15:28.170Z","dependency_job_id":null,"html_url":"https://github.com/art-programmer/FloorNet","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/art-programmer%2FFloorNet","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/art-programmer%2FFloorNet/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/art-programmer%2FFloorNet/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/art-programmer%2FFloorNet/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/art-programmer","download_url":"https://codeload.github.com/art-programmer/FloorNet/tar.gz/ref
s/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":214982086,"owners_count":15811653,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["eccv2018","floornet","floorplan","tensorflow"],"created_at":"2024-08-12T03:02:20.663Z","updated_at":"2025-07-27T04:33:31.112Z","avatar_url":"https://github.com/art-programmer.png","language":"Python","readme":"# FloorNet: A Unified Framework for Floorplan Reconstruction from 3D Scans\nBy Chen Liu\u003csup\u003e\\*\u003c/sup\u003e, Jiaye Wu\u003csup\u003e\\*\u003c/sup\u003e, and Yasutaka Furukawa (\u003csup\u003e\\*\u003c/sup\u003e indicates equal contribution)\n\n## Introduction\n\nThis paper proposes FloorNet, a novel neural network that turns RGBD videos of indoor spaces into vector-graphics floorplans. FloorNet consists of three branches: a PointNet branch, a floorplan branch, and an image branch. For more details, please refer to our ECCV 2018 [paper](https://arxiv.org/abs/1804.00090) or visit our [project website](http://art-programmer.github.io/floornet.html). This is a follow-up to our floorplan transformation project, which you can find [here](https://github.com/art-programmer/FloorplanTransformation).\n\n## Updates\n[12/22/2018] We now provide a free IP solver (not relying on Gurobi) in IP.py. The functionality of IP.py is similar to that of QP.py, which uses Gurobi to solve the IP problem. 
You might want to consider the free solver if you don't have a Gurobi license.\n\n## Dependencies\nPython 2.7, TensorFlow (\u003e= 1.3), numpy, opencv 3, CUDA (\u003e= 8.0), Gurobi (free only for academic use).\n\n## Data\n\n### Dataset used in the paper\n\nWe collected 155 scans of residential units and annotated the corresponding floorplan information. Among the 155 scans, 135 are used for training and 20 for testing. We converted the data to tfrecords files, which can be downloaded [here](https://drive.google.com/open?id=16lyX_xTiALUzKyst86WJHlhpTDr8XPF_) (or [here](https://mega.nz/#F!5yQy0b5T!ykkR4dqwGO9J5EwnKT_GBw) if you cannot access the previous one). Please put the downloaded files under the *data/* folder.\n\nHere are the links to the raw [point clouds](https://drive.google.com/open?id=1JJlD0qsgMpiU5Jq9TNm3uDPjvqi88aZn), [annotations](https://drive.google.com/open?id=1hYDE2SXLA8Cq7LEK67xO-UMeTSPJ5rcB), and their [associations](https://drive.google.com/open?id=125TAmYWk22EyzCdlbGIfX4Z4DRMhru_V). Please refer to **RecordWriterTango.py** to see how to convert the raw data and annotations to tfrecords files.\n\n### Using custom data\n\nTo generate training/testing data from other data sources, the data should be converted to tfrecords as we did in **RecordWriterTango.py** (an example of our raw data before being processed by **RecordWriterTango.py** is provided [here](https://mega.nz/#!dnohjKZa!I3NJZ806vNK-UYp-ap7OynGnS5E-E5AK_z5WsX8n1Ls)). Please refer to [this guide](http://warmspringwinds.github.io/tensorflow/tf-slim/2016/12/21/tfrecords-guide/) on how to generate and read tfrecords.\n\nBasically, every data sample (tf.train.Example) should contain at least the following components:\n\n1. 
Inputs:\n\n\t- a point cloud (50,000 randomly sampled points)\n\t- a mapping from the point cloud's 3D space to the 2D space of the 256x256 top-view density image.\n\t\t- It contains 50,000 indices, one for each point.\n\t\t- For point (x, y, z), index = round((y - min(Y) + padding) / (maxRange + 2 * padding) * 256) * 256 + round((x - min(X) + padding) / (maxRange + 2 * padding) * 256).\n\t\t\t- maxRange = max(max(X) - min(X), max(Y) - min(Y))\n\t\t\t- padding can be any small value, say 0.05 * maxRange\n\t- optional: image features of the RGB video stream, if the image branch is enabled\n\n2. Labels:\n\n\t- Corners and their corresponding types\n\t- Total number of corners\n\t- A ground-truth icon segmentation map\n\t- A ground-truth room segmentation map\n\nAgain, please refer to **RecordWriterTango.py** for exact details.\n\n**NEW:** We added a template file, **RecordWriterCustom.py**, for using custom data.\n\n## Annotator\nFor reference, a similar (but not identical) annotator written in Python is available [here](https://github.com/art-programmer/FloorplanAnnotator). You need to make some changes to annotate your own data.\n\n## Training\nTo train the network from scratch, please run:\n```bash\npython train.py --restore=0\n```\n\n## Evaluation\nTo evaluate the performance of our trained model, please run:\n```bash\npython train.py --task=evaluate --separateIconLoss\n```\n\n## Generate 3D models\nWe can pop up the reconstructed floorplan to generate 3D models. 
Please refer to our previous project, [FloorplanTransformation](https://github.com/art-programmer/FloorplanTransformation), for more details.\n\n## Contact\n\nIf you have any questions, please contact me at chenliu@wustl.edu.\n","funding_links":[],"categories":["Python","Datasets"],"sub_categories":["Scene level"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fart-programmer%2FFloorNet","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fart-programmer%2FFloorNet","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fart-programmer%2FFloorNet/lists"}
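The README's point-to-pixel formula maps each 3D point to a cell of the 256x256 top-view density image by normalizing (x, y) with maxRange and padding. A minimal NumPy sketch of that formula follows; the function name `point_to_pixel_indices` and the final clipping step are illustrative additions, not part of the FloorNet codebase:

```python
import numpy as np

def point_to_pixel_indices(points, width=256, padding_ratio=0.05):
    """Map Nx3 points to flattened indices of a width x width top-view grid,
    following index = round(row_coord) * width + round(col_coord)."""
    x, y = points[:, 0], points[:, 1]
    # maxRange = max(max(X) - min(X), max(Y) - min(Y))
    max_range = max(x.max() - x.min(), y.max() - y.min())
    padding = padding_ratio * max_range  # e.g. 0.05 * maxRange
    scale = width / (max_range + 2 * padding)
    cols = np.round((x - x.min() + padding) * scale).astype(int)
    rows = np.round((y - y.min() + padding) * scale).astype(int)
    # clip to stay inside the grid after rounding (added for safety;
    # the formula itself does not clip)
    cols = np.clip(cols, 0, width - 1)
    rows = np.clip(rows, 0, width - 1)
    return rows * width + cols

# Example: four corner points of a 10m x 10m room
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 1.0],
                [0.0, 10.0, 2.0], [10.0, 10.0, 0.5]])
idx = point_to_pixel_indices(pts)  # idx[0] == 12 * 256 + 12
```

With padding_ratio = 0.05 the room's extremes land at pixels 12 and 244 rather than 0 and 255, which is the point of the padding term: points on the bounding box stay strictly inside the density image.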