{"id":13800353,"url":"https://github.com/CN-UPB/DeepCoMP","last_synced_at":"2025-05-13T09:31:40.158Z","repository":{"id":49172034,"uuid":"259868723","full_name":"CN-UPB/DeepCoMP","owner":"CN-UPB","description":"Dynamic multi-cell selection for cooperative multipoint (CoMP) using (multi-agent) deep reinforcement learning","archived":false,"fork":false,"pushed_at":"2023-07-30T18:25:21.000Z","size":662242,"stargazers_count":62,"open_issues_count":0,"forks_count":13,"subscribers_count":6,"default_branch":"master","last_synced_at":"2025-04-27T10:04:32.270Z","etag":null,"topics":["cell-selection","cellular","comp","mobile","multi-agent-reinforcement-learning","ppo","python","ray","reinforcement-learning","rllib","simulation","wireless"],"latest_commit_sha":null,"homepage":"https://cn-upb.github.io/DeepCoMP/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/CN-UPB.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2020-04-29T08:30:19.000Z","updated_at":"2025-04-22T05:05:10.000Z","dependencies_parsed_at":"2024-01-18T00:31:02.193Z","dependency_job_id":"a34880d2-2cf4-4d0c-8ca3-245dc121a795","html_url":"https://github.com/CN-UPB/DeepCoMP","commit_stats":{"total_commits":793,"total_committers":5,"mean_commits":158.6,"dds":"0.020176544766708715","last_synced_commit":"63fbbd38fcebbec35ee80e534481ffc0de3138e1"},"previous_names":[],"tags_count":23,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CN-UPB%2FDeepCoMP","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CN-UPB%2FDeepCoMP/tags
","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CN-UPB%2FDeepCoMP/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CN-UPB%2FDeepCoMP/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/CN-UPB","download_url":"https://codeload.github.com/CN-UPB/DeepCoMP/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253913128,"owners_count":21983262,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cell-selection","cellular","comp","mobile","multi-agent-reinforcement-learning","ppo","python","ray","reinforcement-learning","rllib","simulation","wireless"],"created_at":"2024-08-04T00:01:11.707Z","updated_at":"2025-05-13T09:31:35.146Z","avatar_url":"https://github.com/CN-UPB.png","language":"Python","readme":"[![CI](https://github.com/CN-UPB/DeepCoMP/actions/workflows/python-test.yml/badge.svg)](https://github.com/CN-UPB/DeepCoMP/actions/workflows/python-test.yml)\n[![PyPi](https://github.com/CN-UPB/DeepCoMP/actions/workflows/python-publish.yml/badge.svg?branch=v1.1.0)](https://github.com/CN-UPB/DeepCoMP/actions/workflows/python-publish.yml)\n[![Docker Pulls](https://img.shields.io/docker/pulls/stefanbschneider/deepcomp?label=Docker%20Pulls\u0026logo=docker)](https://hub.docker.com/r/stefanbschneider/deepcomp)\n[![DeepSource](https://deepsource.io/gh/CN-UPB/DeepCoMP.svg/?label=active+issues)](https://deepsource.io/gh/CN-UPB/DeepCoMP/?ref=repository-badge)\n\n\n# DeepCoMP: Self-Learning Dynamic Multi-Cell Selection for Coordinated Multipoint 
(CoMP)\n\nMulti-Agent Deep Reinforcement Learning for Coordinated Multipoint in Mobile Networks\n\nThree variants: DeepCoMP (central agent), DD-CoMP (distributed agents using central policy), D3-CoMP (distributed agents with separate policies).\nAll three approaches self-learn and adapt to various scenarios in mobile networks without expert knowledge, human intervention, or detailed assumptions about the underlying system.\nCompared to other approaches, they are more flexible and achieve higher Quality of Experience.\n\nFor a high-level overview of DeepCoMP, please refer to my [blog post](https://stefanbschneider.github.io/blog/deepcomp).\nMore details are available in our research paper presenting DeepCoMP ([preprint](https://ris.uni-paderborn.de/download/33854/33855/preprint.pdf)).\nI also talked about DeepCoMP at the Ray Summit 2021 ([YouTube](https://youtu.be/Qy4SzJKXlGE)).\n\nThe simulation environment used to train DeepCoMP is available separately as [mobile-env](https://github.com/stefanbschneider/mobile-env).\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"https://raw.githubusercontent.com/CN-UPB/DeepCoMP/master/docs/gifs/dashboard_lossy.gif?raw=true\"\u003e\u003cbr/\u003e\n    \u003cem\u003eVisualized cell selection policy of DeepCoMP after 2M training steps.\u003c/em\u003e\u003cbr\u003e\n    \u003csup\u003e\u003ca href=\"https://thenounproject.com/search/?q=base+station\u0026i=1286474\" target=\"_blank\"\u003eBase station icon\u003c/a\u003e by Clea Doltz from the Noun Project\u003c/sup\u003e\n\u003c/p\u003e\n\n## Citation\n\nIf you use this code, please cite our [paper (preprint; accepted at IEEE TNSM 2023)](https://ris.uni-paderborn.de/download/33854/33855):\n\n```\n@article{schneider2023deepcomp,\n\ttitle={Multi-Agent Deep Reinforcement Learning for Coordinated Multipoint in Mobile Networks},\n\tauthor={Schneider, Stefan and Karl, Holger and Khalili, Ramin and Hecker, Artur},\n\tjournal={IEEE Transactions on Network and Service Management 
(TNSM)},\n\tyear={2023},\n}\n```\n\n## Setup\n\nYou need Python 3.8+. You can install `deepcomp` either directly from [PyPi](https://pypi.org/project/deepcomp/) or manually after cloning this repository.\n\n### Simple Installation via PyPi\n\n```\nsudo apt update\nsudo apt upgrade\nsudo apt install cmake build-essential zlib1g-dev python3-dev\n\npip install deepcomp\n```\n\n### Manual Installation from Source\n\nFor adjusting or further developing DeepCoMP, it's better to install manually rather than from PyPi. \nClone the repository. Then install everything, following these steps:\n\n```\n# only on ubuntu\nsudo apt update\nsudo apt upgrade\nsudo apt install cmake build-essential zlib1g-dev python3-dev\n\n# clone\ngit clone git@github.com:CN-UPB/DeepCoMP.git\ncd DeepCoMP\n\n# install all python dependencies\npip install .\n# \"python setup.py install\" does not work for some reason: https://stackoverflow.com/a/66267232/2745116\n# for development install (when changing code): pip install -e .\n```\n\nTested on Ubuntu 20.04 and Windows 10 with Python 3.8.\n\nFor saving videos and gifs, you also need to install ffmpeg (not on Windows) and [ImageMagick](https://imagemagick.org/index.php). \nOn Ubuntu:\n\n```\nsudo apt install ffmpeg imagemagick\n```\n\n### Docker\n\nThere is a Docker image that comes with `deepcomp` preinstalled. \nTo use the Docker image, simply pull the latest version from [Docker Hub](https://hub.docker.com/r/stefanbschneider/deepcomp):\n\n```\ndocker pull stefanbschneider/deepcomp\n# tag image with just \"deepcomp\". 
# alternatively, write out \"stefanbschneider/deepcomp\" in all following commands.\ndocker tag stefanbschneider/deepcomp:latest deepcomp\n```\n\nAlternatively, to build the Docker image manually from the `Dockerfile`, clone this repository and run\n```\ndocker build -t deepcomp .\n```\nUse the `--no-cache` option to force a rebuild of the image, pulling the latest `deepcomp` version from PyPI.\n\n\n## Usage\n\n```\n# get an overview of all options\ndeepcomp -h\n```\n\nFor example: \n\n```\ndeepcomp --env medium --slow-ues 3 --agent central --workers 2 --train-steps 50000 --seed 42 --video both\n```\n\nTo run DeepCoMP, use `--alg ppo --agent central`.\nFor DD-CoMP, use `--alg ppo --agent multi`, and for D3-CoMP, use `--alg ppo --agent multi --separate-agent-nns`.\n\nBy default, training logs, results, videos, and trained agents are saved in `\u003cproject-root\u003e/results`,\nwhere `\u003cproject-root\u003e` is the root directory of DeepCoMP.\nIf you cloned the repo from GitHub, this is where the Readme is. \nIf you installed via PyPi, this is in your virtualenv's site packages.\nYou can choose a custom location with `--result-dir \u003ccustom-path\u003e`.\n\n### Docker\n\n**Note:** By default, results within the Docker container are not stored persistently. 
\nTo save them, copy them from the Docker container or use a Docker volume.\n\n#### Start the Container\n\nIf you want to use the `deepcomp` Docker container and have pulled the corresponding image from Docker Hub,\nyou can use it as follows:\n```\ndocker run -d -p 6006:6006 -p 8000:8000 --rm --shm-size=3gb --name deepcomp deepcomp\n```\nThis starts the Docker container in the background, publishing port 6006 for TensorBoard and port 8000 for the\nHTTP server (described below).\nThe container automatically starts TensorBoard and the HTTP server, so this does not need to be done manually.\nThe `--rm` flag automatically removes the container once it is stopped.\nThe `--shm-size=3gb` flag sets the size of `/dev/shm` inside the Docker container to 3 GB; the default size is too small.\n\n#### Use DeepCoMP on the Container\n\nTo execute commands on the running Docker container, use `docker exec \u003ccontainer-name\u003e \u003ccommand\u003e` as follows:\n```\ndocker exec deepcomp deepcomp \u003cdeepcomp-args\u003e\n```\nHere, the arguments are identical to the ones described above.\nFor example, the following command lists all CLI options:\n```\ndocker exec deepcomp deepcomp -h\n```\nOr, to briefly train the central DeepCoMP agent for 4000 steps:\n```\ndocker exec -t deepcomp deepcomp --approach deepcomp --train-steps 4000 --batch-size 200 --ues 2 --result-dir results\n```\n**Important:** Specify `--result-dir results` as an argument. 
\nOtherwise, the results will be stored elsewhere, and TensorBoard and the HTTP server will not find and display them.\n\nThe other `deepcomp` arguments can be set as desired.\nThe Docker `-t` flag ensures that the output is printed continuously during training, not just after completion.\n\nTo inspect training progress or view created files (e.g., rendered videos), use TensorBoard and the HTTP server,\nwhich are available via `localhost:6006` and `localhost:8000`.\n\n#### Terminate the Container\n\n**Important:** Stopping the container will remove any files and training progress within the container.\n\nStop the container with\n```\ndocker stop deepcomp\n```\n\n### Accessing results remotely\n\nWhen running remotely, you can serve the replay video by running:\n\n```\ncd results\npython -m http.server\n```\n\nThen access it at `\u003cremote-ip\u003e:8000`.\n\n### TensorBoard\n\nTo view learning curves (and other metrics) when training an agent, use TensorBoard:\n\n```\ntensorboard --logdir results/train/\n# when running remotely, additionally pass: --host 0.0.0.0\n```\n\nTensorBoard is available at http://localhost:6006 (or `\u003cremote-ip\u003e:6006` when running remotely).\n\n### Scaling Up: Running DeepCoMP on multiple cores or a multi-node cluster\n\nTo train DeepCoMP on multiple cores in parallel, configure the number of workers (corresponding to CPU cores) with `--workers`.\n\nTo scale training to a multi-node cluster, adjust `cluster.yaml` and follow the steps described [here](https://stefanbschneider.github.io/blog/rllib-private-cluster).\nSet `--workers` to the total number of CPU cores you want to use on the entire cluster.\n\n\n\n## Documentation\n\nAPI documentation is available at [https://cn-upb.github.io/DeepCoMP/](https://cn-upb.github.io/DeepCoMP/).\n\nDocumentation is generated from docstrings using [pdoc3](https://pdoc3.github.io/pdoc/):\n\n```\n# from project root\npip install pdoc3\npdoc --force --html --output-dir docs deepcomp\n# move files to be picked up by GitHub pages\nmv docs/deepcomp/ docs/\n# then manually adjust index.html to link to GitHub repo\n```\n\n## Contributions\n\nDevelopment: [@stefanbschneider](https://github.com/stefanbschneider/)\n\nFeature requests, questions, issues, and pull requests via GitHub are welcome.\n\n## Acknowledgement\n\nDeepCoMP is an outcome of a joint project between Paderborn University, Germany, and Huawei Germany.\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"https://raw.githubusercontent.com/CN-UPB/DeepCoMP/master/docs/logos/upb.png?raw=true\" width=\"200\" hspace=\"30\"/\u003e\n    \u003cimg src=\"https://raw.githubusercontent.com/CN-UPB/DeepCoMP/master/docs/logos/huawei_horizontal.png?raw=true\" width=\"250\" hspace=\"30\"/\u003e\n\u003c/p\u003e\n\n[Base station icon](https://thenounproject.com/search/?q=base+station\u0026i=1286474) (used in rendered videos) by Clea Doltz from the Noun Project.\n","funding_links":[],"categories":["Research"],"sub_categories":["Diameter"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCN-UPB%2FDeepCoMP","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FCN-UPB%2FDeepCoMP","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCN-UPB%2FDeepCoMP/lists"}