# hyper-alpha-zero

hyper-optimized alpha-zero implementation with ray + cython for speed

train an agent that beats random actions and pure MCTS in 2 minutes

### file structure

- `train.py`: distributed training with ray
- `ctree/`: mcts nodes in cython (`node.py` = pure python)
- `mcts.py`: mcts playouts
- `network.py`: neural net code
- `board.py`: gomoku board

### system design

- ray-distributed parts (`train.py`):
  - one distributed replay buffer
  - N actors holding the 'best model' weights, which self-play games and store the data in the replay buffer
  - M 'candidate models', which pull from the replay buffer and train
    - each iteration they play against the 'best model'; if they win, the 'best model' weights are updated
    - write/evaluation locks guard the 'best weights'
  - 1 best-model weights store (PS / parameter server)
    - stores the best weights, which are retrieved by the self-play actors and updated when a candidate wins

![](imgs/2023-01-15-09-18-19.png)

- cython impl
  - `ctree/`: c++/cython mcts
  - `node.py`: pure python mcts

### todos

- jax network impl
- tpu + gpu support
- saved model weights

### references

- based on: https://github.com/junxiaosong/AlphaZero_Gomoku
- distributed rl: http://rail.eecs.berkeley.edu/deeprlcourse-fa18/static/slides/lec-21.pdf
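The distributed roles in the system design (replay buffer, self-play actors, candidate models, parameter server) can be sketched in plain Python. The names below (`ReplayBuffer`, `ParameterServer`, `promote`) are illustrative assumptions, not the repo's actual API; in `train.py` these would be Ray actors (`@ray.remote`) shared across processes rather than local objects, with the write/evaluation locks serializing access to the best weights.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores (state, policy, value) samples produced by self-play.

    Sketch only: in the repo this role is a distributed Ray actor so
    N self-play workers and M trainers can share one buffer."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # old samples fall off the left

    def add(self, samples):
        self.buffer.extend(samples)

    def sample(self, batch_size):
        # Never ask for more samples than the buffer holds.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

class ParameterServer:
    """Holds the current 'best model' weights behind a version counter.

    Self-play actors pull weights; a candidate pushes new weights only
    after winning its evaluation match against the current best."""
    def __init__(self, weights):
        self.weights = weights
        self.version = 0

    def get_weights(self):
        return self.weights, self.version

    def promote(self, candidate_weights, won_eval):
        # Only a candidate that beat the current best replaces it.
        if won_eval:
            self.weights = candidate_weights
            self.version += 1
        return self.version
```

A candidate's loop would then be: sample a batch from `ReplayBuffer`, train, play an evaluation match against the weights from `get_weights()`, and call `promote(...)` with the result.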
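The mcts node exists twice in the repo (pure python in `node.py`, cython/c++ in `ctree/`). Below is a minimal pure-python sketch of AlphaZero-style PUCT selection; the class layout, method names, and the `c_puct` constant are illustrative assumptions rather than the repo's exact implementation.

```python
import math

class Node:
    """One node of the search tree, keyed by action from its parent."""
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # action -> Node

    def q(self):
        # Mean value Q(s, a); 0 for an unvisited node.
        return self.value_sum / self.visits if self.visits else 0.0

    def puct(self, parent_visits, c_puct=5.0):
        # AlphaZero selection score Q + U, where U favors high-prior,
        # rarely-visited children.
        u = c_puct * self.prior * math.sqrt(parent_visits) / (1 + self.visits)
        return self.q() + u

    def select(self):
        # Descend to the child maximizing the PUCT score.
        return max(self.children.items(),
                   key=lambda kv: kv[1].puct(self.visits))

    def expand(self, action_priors):
        # action_priors: iterable of (action, prior) from the network.
        for action, p in action_priors:
            self.children[action] = Node(p)

    def update(self, value):
        # A full playout calls this for every node on the selected path,
        # flipping the sign of `value` between the two players.
        self.visits += 1
        self.value_sum += value
```

This per-node arithmetic is exactly the hot loop that the `ctree/` cython version speeds up: tight scalar math over many small objects is where cython's static typing pays off most.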
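For the gomoku board (`board.py`), the core rule check is five in a row; a common trick is to scan only the four lines through the last stone placed instead of the whole board. The helper below is a hypothetical sketch of that idea, not the repo's `board.py` API.

```python
def has_five(stones, last_move):
    """Return True if the stone just placed completes five in a row.

    `stones` is a set of (row, col) positions for one player;
    `last_move` is the position just played. Only the horizontal,
    vertical, and two diagonal lines through `last_move` are scanned."""
    r, c = last_move
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        count = 1  # the stone at last_move itself
        for sign in (1, -1):  # walk the line in both directions
            k = 1
            while (r + sign * dr * k, c + sign * dc * k) in stones:
                count += 1
                k += 1
        if count >= 5:
            return True
    return False
```

Calling this once per move keeps the win check O(1) in board size, which matters when MCTS playouts run thousands of simulated games per second.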