{"id":23256529,"url":"https://github.com/absadiki/ppo_pytorch","last_synced_at":"2026-04-10T23:52:30.651Z","repository":{"id":121433021,"uuid":"584035954","full_name":"absadiki/ppo_pytorch","owner":"absadiki","description":"A simple implementation of the Proximal Policy Optimization (PPO) algorithm using Pytorch.","archived":false,"fork":false,"pushed_at":"2023-01-01T04:10:44.000Z","size":166,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-03-29T09:34:22.483Z","etag":null,"topics":["ppo","proximal-policy-optimization","pytorch","reinforcement-learning","tensorboard"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/absadiki.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-01-01T02:48:36.000Z","updated_at":"2023-06-21T22:54:37.000Z","dependencies_parsed_at":null,"dependency_job_id":"3117f8c5-2345-476d-b321-1ff6e798240a","html_url":"https://github.com/absadiki/ppo_pytorch","commit_stats":null,"previous_names":["absadiki/ppo_pytorch"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/absadiki%2Fppo_pytorch","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/absadiki%2Fppo_pytorch/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/absadiki%2Fppo_pytorch/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/absadiki%2Fppo_pytorch
/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/absadiki","download_url":"https://codeload.github.com/absadiki/ppo_pytorch/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247431634,"owners_count":20937999,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ppo","proximal-policy-optimization","pytorch","reinforcement-learning","tensorboard"],"created_at":"2024-12-19T12:15:49.004Z","updated_at":"2026-04-10T23:52:25.602Z","avatar_url":"https://github.com/absadiki.png","language":"Python","readme":"# ppo_pytorch\nA simple implementation of the [Proximal Policy Optimization (PPO)](https://arxiv.org/abs/1707.06347) Reinforcement Learning algorithm using PyTorch.\n\n\u003cp align=\"center\"\u003e\n\u003cimg src=\"cartpole-demo.gif\"\u003e\n\u003c/p\u003e\n\n## Some features\n* A separate file for hyper-parameters, for easy, practical tuning.\n* You can stop/resume the training process at any time, as the trained models are saved after every epoch in the `models` directory.\n* [Tensorboard](https://github.com/tensorflow/tensorboard) support: if you have Tensorboard installed, you can track the training progress in real time using:\n```bash\ntensorboard --logdir runs\n```\n\n## How to use\n* Clone the repository to a local folder\n```bash\ngit clone https://github.com/abdeladim-s/ppo_pytorch \u0026\u0026 cd ppo_pytorch\n```\n* Install the dependencies\n```bash\npip install -r requirements.txt\n```\n* Run the main file\n```bash\npython main.py\n```\nThis will run the trained 
agent on the `CartPole-v0` environment (similar to the demo above).\n\n## How to train your own models\n\n* Add your environment to the `config` dictionary inside the `hyper_paramters` file.\nThis should be a [gymnasium](https://github.com/Farama-Foundation/Gymnasium) (formerly Gym) environment or any subclass of `gymnasium.Env`.\n```python\nconfig = {\n    # ...\n    'Pendulum-v1': {\n        \n    }\n}\n```\n* Override the `defaults` PPO hyper-parameters, or create a new set of hyper-parameters.\n\n```python\nconfig = {\n    # ...\n    'Pendulum-v1': {\n        'defaults': {\n            'epochs': 100,\n        },\n        'model-001': {\n            \"seed\": 10,\n            \"epochs\": 25,\n            \"steps_per_epoch\": 1500,\n            \"max_episode_steps\": 100,\n            # ...\n            \"reward_threshold\": None\n        }\n    }\n}\n```\n* Modify the `main` function with the new `env_name` and `model_id`.\n```python\ndef main():\n\n    env_name = 'Pendulum-v1'\n    model_id = 'model-001'\n\n    # ...\n```\n_If no `model_id` is given, the `defaults` parameters are used._\n\n* Run the `main.py` file with `train=True` to train the agent, or with `train=False` to evaluate the trained model.\n```python\ndef main():\n\n    env_name = 'Pendulum-v1'\n    model_id = 'model-001'\n\n    train = True  # for training\n    # train = False  # for evaluation\n    if train:\n        policy = Policy(env_name, model_id=model_id, render_mode=None)\n        policy.train()\n    else:\n        policy = Policy(env_name, model_id=model_id, render_mode='human')\n        policy.evaluate(1)\n\n```\n\n__Note__\n\n* You can test a simple random agent using the `test_env.py` script.\n\n\n## License\nGPLv3. 
See `LICENSE` for the full license text.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fabsadiki%2Fppo_pytorch","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fabsadiki%2Fppo_pytorch","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fabsadiki%2Fppo_pytorch/lists"}