{"id":13793946,"url":"https://github.com/chris-chris/pysc2-examples","last_synced_at":"2025-04-12T21:28:13.486Z","repository":{"id":54308133,"uuid":"100801075","full_name":"chris-chris/pysc2-examples","owner":"chris-chris","description":"StarCraft II - pysc2 Deep Reinforcement Learning Examples","archived":false,"fork":false,"pushed_at":"2021-03-03T00:40:33.000Z","size":5196,"stargazers_count":755,"open_issues_count":26,"forks_count":354,"subscribers_count":36,"default_branch":"master","last_synced_at":"2025-04-04T01:07:07.154Z","etag":null,"topics":["ai","deep-q-network","deep-reinforcement-learning","deepmind","machine-learning","reinforcement-learning","startcraft2"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/chris-chris.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-08-19T14:50:06.000Z","updated_at":"2025-03-31T13:18:20.000Z","dependencies_parsed_at":"2022-08-13T11:40:13.612Z","dependency_job_id":null,"html_url":"https://github.com/chris-chris/pysc2-examples","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chris-chris%2Fpysc2-examples","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chris-chris%2Fpysc2-examples/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chris-chris%2Fpysc2-examples/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chris-chris%2Fpysc2-examples/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/Gi
tHub/owners/chris-chris","download_url":"https://codeload.github.com/chris-chris/pysc2-examples/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248633908,"owners_count":21136937,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","deep-q-network","deep-reinforcement-learning","deepmind","machine-learning","reinforcement-learning","startcraft2"],"created_at":"2024-08-03T23:00:33.731Z","updated_at":"2025-04-12T21:28:13.439Z","avatar_url":"https://github.com/chris-chris.png","language":"Python","readme":"# StarCraft II Reinforcement Learning Examples\n\nThis example program was built on:\n- pysc2 (DeepMind) [https://github.com/deepmind/pysc2]\n- baselines (OpenAI) [https://github.com/openai/baselines]\n- s2client-proto (Blizzard) [https://github.com/Blizzard/s2client-proto]\n- TensorFlow 1.3 (Google) [https://github.com/tensorflow/tensorflow]\n\n# Current examples\n\n## Minimaps\n- CollectMineralShards with Deep Q Network\n\n![CollectMineralShards](https://media.giphy.com/media/UrgVK9TFfv2AE/giphy.gif \"Collect Mineral\")\n\n# Quick Start Guide\n\n## 1. Get PySC2\n\n### PyPI\n\nThe easiest way to get PySC2 is to use pip:\n\n```shell\n$ pip install git+https://github.com/deepmind/pysc2\n```\n\nYou also have to install the `baselines` library:\n\n```shell\n$ pip install git+https://github.com/openai/baselines\n```\n\n## 2. Install StarCraft II\n\n### Mac / Win\n\nYou have to purchase and install StarCraft II. 
Even the Starter Edition will work.\n\nhttp://us.battle.net/sc2/en/legacy-of-the-void/\n\n### Linux Packages\n\nFollow Blizzard's [documentation](https://github.com/Blizzard/s2client-proto#downloads) to\nget the Linux version. By default, PySC2 expects the game to live in\n`~/StarCraftII/`.\n\n* [3.16.1](http://blzdistsc2-a.akamaihd.net/Linux/SC2.3.16.1.zip)\n\n## 3. Download Maps\n\nDownload the [ladder maps](https://github.com/Blizzard/s2client-proto#downloads)\nand the [mini games](https://github.com/deepmind/pysc2/releases/download/v1.2/mini_games.zip)\nand extract them to your `~/StarCraftII/Maps/` directory.\n\n## 4. Train it!\n\n```shell\n$ python train_mineral_shards.py --algorithm=a2c\n```\n\n## 5. Enjoy it!\n\n```shell\n$ python enjoy_mineral_shards.py\n```\n\n## 4-1. Train it with DQN\n\n```shell\n$ python train_mineral_shards.py --algorithm=deepq --prioritized=True --dueling=True --timesteps=2000000 --exploration_fraction=0.2\n```\n\n## 4-2. Train it with A2C (A3C)\n\n```shell\n$ python train_mineral_shards.py --algorithm=a2c --num_agents=2 --num_scripts=2 --timesteps=2000000\n```\n\n| Parameter            | Description                                     | Default                         | Type           |\n|----------------------|-------------------------------------------------|---------------------------------|----------------|\n| map                  | Mini-game map (environment)                     | CollectMineralShards            | string         |\n| log                  | logging type: tensorboard, stdout               | tensorboard                     | string         |\n| algorithm            | Currently supports 2 algorithms: deepq, a2c     | a2c                             | string         |\n| timesteps            | Total training steps                            | 2000000                         | int            |\n| exploration_fraction | exploration fraction                            | 0.5                             | float       
   |\n| prioritized          | Whether to use prioritized replay for DQN       | False                           | boolean        |\n| dueling              | Whether to use a dueling network for DQN        | False                           | boolean        |\n| lr                   | learning rate (if 0, random between 1e-5 and 1e-3) | 0.0005                       | float          |\n| num_agents           | number of agents for A2C                        | 4                               | int            |\n| num_scripts          | number of scripted agents for A2C               | 4                               | int            |\n| nsteps               | number of steps per policy update               | 20                              | int            |\n\n","funding_links":[],"categories":["Project","Open-Source Projects"],"sub_categories":["Implementation of Algorithms","Starcraft Projects"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchris-chris%2Fpysc2-examples","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fchris-chris%2Fpysc2-examples","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchris-chris%2Fpysc2-examples/lists"}