{"id":13444044,"url":"https://github.com/charlesq34/pointnet-autoencoder","last_synced_at":"2025-05-07T20:18:08.608Z","repository":{"id":43061308,"uuid":"133431126","full_name":"charlesq34/pointnet-autoencoder","owner":"charlesq34","description":"Autoencoder for Point Clouds","archived":false,"fork":false,"pushed_at":"2023-10-08T13:42:28.000Z","size":483,"stargazers_count":430,"open_issues_count":16,"forks_count":88,"subscribers_count":10,"default_branch":"master","last_synced_at":"2025-05-07T20:18:03.128Z","etag":null,"topics":["autoencoder","deep-learning","point-cloud","pointnet"],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/charlesq34.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2018-05-14T23:02:04.000Z","updated_at":"2025-03-11T12:50:30.000Z","dependencies_parsed_at":"2024-01-18T15:26:37.039Z","dependency_job_id":"1f978b91-26e6-402e-b7b0-588def40deea","html_url":"https://github.com/charlesq34/pointnet-autoencoder","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/charlesq34%2Fpointnet-autoencoder","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/charlesq34%2Fpointnet-autoencoder/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/charlesq34%2Fpointnet-autoencoder/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/charlesq34%2Fpointnet-autoencoder/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/charlesq34","download_url":"https://codeload.github.com/charlesq34/pointnet-autoencoder/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252949239,"owners_count":21830154,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["autoencoder","deep-learning","point-cloud","pointnet"],"created_at":"2024-07-31T03:02:17.534Z","updated_at":"2025-05-07T20:18:08.581Z","avatar_url":"https://github.com/charlesq34.png","language":"Python","readme":"# pointnet-autoencoder\n\n![prediction example](https://github.com/charlesq34/pointnet-autoencoder/blob/master/doc/teaser.jpg)\n\nHere we present code to build an autoencoder for point clouds, with \u003ca href=\"https://github.com/charlesq34/pointnet\"\u003ePointNet\u003c/a\u003e encoder and various kinds of decoders. We train and test our autoencoder on the \u003ca href=\"https://cs.stanford.edu/~ericyi/project_page/part_annotation/index.html\" target=\"_blank\"\u003eShapeNetPart dataset\u003c/a\u003e. This is a side project I played with recently -- you are welcomed to modify it for your own projects or research. 
For a visualization helper, go to `utils/` and run `sh compile_render_balls_so.sh`, then run `python show3d_balls.py` to check that it compiled successfully.

## Download Data
The ShapeNetPart dataset is available <a href="https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0.zip" target="_blank">HERE (635MB)</a>. Simply download the zip file and move the `shapenetcore_partanno_segmentation_benchmark_v0` folder to `data`.

To visualize the dataset, run the command below (press `q` to go to the next shape; see `show3d_balls.py` for more detailed hot keys):

    python part_dataset.py

## Train an Autoencoder
To train the most basic autoencoder (fully connected decoder with Chamfer distance loss; a rough sketch of such a decoder is given at the end of this README) on chair models with aligned poses, simply run:

    python train.py --model model --log_dir log_chair_norotation --num_point 2048 --category Chair --no_rotation

You can check more options for training by:

    python train.py -h

## Visualize Reconstruction on Test Set
To test and visualize results of the trained autoencoder above, simply run:

    python test.py --model model --model_path log_chair_norotation/model.ckpt --category Chair

You can check more options for testing by:

    python test.py -h
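For context on the "most basic autoencoder" trained above: it pairs a PointNet encoder (a shared per-point MLP followed by max pooling into a single global feature vector) with a fully connected decoder that regresses all output coordinates at once. The model file named by `--model` is the authoritative definition; the following is only a rough sketch of such a decoder, assuming a 1024-dimensional code and the TF 1.x `tf.layers` API:

    import tensorflow as tf

    def fc_decoder(code, num_point=2048):
        """Decode a global feature vector into a (num_point, 3) cloud.

        code: (batch_size, 1024) bottleneck feature from the encoder.
        """
        net = tf.layers.dense(code, 1024, activation=tf.nn.relu)
        net = tf.layers.dense(net, 1024, activation=tf.nn.relu)
        # Regress all x, y, z coordinates in one shot, then reshape.
        net = tf.layers.dense(net, num_point * 3, activation=None)
        return tf.reshape(net, [-1, num_point, 3])

Regressing the whole cloud from one global code is what makes the fully connected decoder the simplest choice; the other decoders in this repository trade that simplicity for more structured outputs.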
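Finally, if you want to display point clouds from your own code rather than through `test.py`, the compiled visualization helper can be called directly. A minimal sketch, assuming `utils/show3d_balls.py` exposes a `showpoints` function that takes an (N, 3) numpy array (check the file itself for the exact interface):

    import sys
    import numpy as np

    sys.path.append('utils')
    import show3d_balls  # requires render_balls_so to be compiled first

    # A random cloud stands in for an autoencoder reconstruction here.
    xyz = np.random.uniform(-1.0, 1.0, size=(2048, 3)).astype(np.float32)
    show3d_balls.showpoints(xyz)  # press `q` to close the viewer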