{"id":20256585,"url":"https://github.com/interdigitalinc/doec","last_synced_at":"2025-10-16T15:25:11.582Z","repository":{"id":104788943,"uuid":"530386055","full_name":"InterDigitalInc/doec","owner":"InterDigitalInc","description":"Deep Octree Entropy Coding","archived":false,"fork":false,"pushed_at":"2022-08-30T17:25:06.000Z","size":144,"stargazers_count":3,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-01-14T03:44:56.265Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/InterDigitalInc.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-08-29T20:38:40.000Z","updated_at":"2023-01-12T07:22:45.000Z","dependencies_parsed_at":"2023-05-29T21:30:13.353Z","dependency_job_id":null,"html_url":"https://github.com/InterDigitalInc/doec","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InterDigitalInc%2Fdoec","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InterDigitalInc%2Fdoec/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InterDigitalInc%2Fdoec/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InterDigitalInc%2Fdoec/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/InterDigitalInc","download_url":"https://codeload.github.com/InterDigitalInc/doec/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":241715023,"owners_count":20007913,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-14T10:47:17.795Z","updated_at":"2025-10-16T15:25:11.526Z","avatar_url":"https://github.com/InterDigitalInc.png","language":"Python","funding_links":[],"categories":[],"sub_categories":[],"readme":" # DOEC: Deep Octree Entropy Coding\n\nThis repository contains the implementation of \"Point cloud geometry compression using learned octree entropy coding\" (m59528 \u0026 m59529) by InterDigital. It is implemented based on the [pccAI](https://github.com/InterDigitalInc/pccAI) (*pick-kai*) framework—a PyTorch-based framework for conducting AI-based Point Cloud Compression (PCC) experiments.\nAdditionally, currently this repository only contains the code for training and benchmarking of VoxelContextNet (VCN).\n\n## Installation\n\nWe tested our implementation on Python 3.6, PyTorch 1.7.0 and CUDA 10.1, under a conda virtual environment. 
For installation, please launch our installation script `install_torch-1.7.0+cu-10.1.sh` with the following command:
```bash
echo y | conda create -n doec python=3.6 && conda activate doec && ./install_torch-1.7.0+cu-10.1.sh
```
It is highly recommended to look at the installation script, which details the necessary packages. After that, put the `pc_error` binary (for MPEG D1 & D2 computation) under the `third_party` folder.

## Datasets

Create a `datasets` folder, then put all the datasets under it. One may create soft links to existing datasets to save space.

### Ford Sequences

We use the first *Ford* sequence for training and the other two sequences for benchmarking, arranged as follows:
```bash
${ROOT_OF_THE_REPO}/datasets/ford
                               ├── ford_01_q1mm
                               ├── ford_02_q1mm
                               └── ford_03_q1mm
                                       ├── Ford_03_vox1mm-0200.ply
                                       ├── Ford_03_vox1mm-0201.ply
                                       ├── Ford_03_vox1mm-0202.ply
                                       ...
                                       └── Ford_03_vox1mm-1699.ply
```

## Basic Usages

The core training and benchmarking code is under the `pccai/pipelines` folder; it is called by the wrappers under the `experiments` folder. The basic way to launch experiments with pccAI is:
```bash
./scripts/run.sh ./scripts/[filename].sh [launcher] [GPU ID(s)]
```
where `launcher` can be `s` (Slurm), `d` (direct, run in background), or `f` (direct, run in foreground). `GPU ID(s)` can be omitted when launching with Slurm. The results (checkpoints, point cloud files, logs, *etc.*) are generated under the `results/[filename]` folder. Note that multi-GPU training/benchmarking is not supported for networks using sparse convolutions (i.e., SparseVCN).

### Benchmarking

One can use the following commands to benchmark the selected rate points individually, then merge the generated CSV files for MPEG reporting:
```bash
for i in {1..4}
do
   ./scripts/run.sh ./scripts/doec/bench_ford_vcn_r0$i.sh f 0
done
python ./utils/merge_csv.py --input_files ./results/bench_ford_vcn_r01/mpeg_report.csv ./results/bench_ford_vcn_r02/mpeg_report.csv ./results/bench_ford_vcn_r03/mpeg_report.csv ./results/bench_ford_vcn_r04/mpeg_report.csv --output_file ./results/bench_ford_vcn/mpeg_report.csv
```

BD metrics and R-D curves are generated via the [MPEG reporting template for AI-based PCC](http://mpegx.int-evry.fr/software/MPEG/PCC/ai/mpeg-pcc-ai-report) (also available publicly via [GitHub](https://github.com/yydlmzyz/AI-PCC-Reporting-Template)). For example, run the following command from the root of that repository:
```bash
python test.py --csvdir1='csvfiles/reporting_template_lossy.csv' --csvdir2='/PATH/TO/mpeg_report.csv' --csvdir_stats='csvfiles/reporting_template_stats.csv' --xlabel='bppGeo' --ylabel='d1T'
```
It can also generate the average results for a given category:
```bash
python test_mean.py --category='am_frame' --csvdir1='csvfiles/reporting_template_lossy.csv' --csvdir2='/PATH/TO/mpeg_report.csv' --csvdir_stats='csvfiles/reporting_template_stats.csv' --xlabel='bppGeo' --ylabel='d1T'
```

Replace `d1T` with `d2T` to compute the D2 metrics. The benchmarking of surface point clouds can be done in the same way.
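The CSV-merging step above simply concatenates the per-rate reports into a single file. A rough pandas-based sketch of that step is shown below; it is illustrative only, assumes the per-rate CSVs share an identical header, and the repository's own `utils/merge_csv.py` should be used in practice:

```python
# Illustrative sketch of merging per-rate MPEG reports (use ./utils/merge_csv.py in practice).
# Assumption: all per-rate CSVs share an identical header row.
import os
import pandas as pd

rate_reports = [f"./results/bench_ford_vcn_r0{i}/mpeg_report.csv" for i in range(1, 5)]
merged = pd.concat((pd.read_csv(path) for path in rate_reports), ignore_index=True)

os.makedirs("./results/bench_ford_vcn", exist_ok=True)
merged.to_csv("./results/bench_ford_vcn/mpeg_report.csv", index=False)
```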
All the benchmarking scripts are under the `scripts/doec` folder. Please refer to the related MPEG contributions for example R-D curves.

### Training

Taking training on the Ford sequences as an example, one can directly run
```bash
./scripts/run.sh ./scripts/doec/train_ford_vcn.sh d 0
```
which trains the deep entropy model on the Ford sequences. The trained model will be generated under the `results/train_ford_vcn` folder.

To understand the options used in the benchmarking/training scripts, refer to `pccai/utils/option_handler.py` for details.

## License

The DOEC code is released under the BSD License; see `LICENSE` for details.

## Contacts

Please contact Muhammad Lodhi (muhammad.lodhi@interdigital.com) for any questions.

## Related Resources

 * [pccAI](https://github.com/InterDigitalInc/pccAI)
 * [MinkowskiEngine](https://github.com/NVIDIA/MinkowskiEngine)
 * [mpeg-pcc-ai-report](http://mpegx.int-evry.fr/software/MPEG/PCC/ai/mpeg-pcc-ai-report) / [AI-PCC-Reporting-Template](https://github.com/yydlmzyz/AI-PCC-Reporting-Template)
 * [TMC13](https://github.com/MPEGGroup/mpeg-pcc-tmc13)
 * [DeepZip](https://github.com/mohit1997/DeepZip/blob/master/src/arithmeticcoding_fast.py) (used for arithmetic coding)
 * [PointNet++](https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/models/pointnet2_utils.py) (used for the set abstraction module in PointContextNet)