Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Allie: A UCI compliant chess engine
https://github.com/manyoso/allie
- Host: GitHub
- URL: https://github.com/manyoso/allie
- Owner: manyoso
- License: gpl-3.0
- Created: 2019-02-13T14:30:18.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2021-04-08T23:35:45.000Z (over 3 years ago)
- Last Synced: 2024-10-14T12:49:19.522Z (25 days ago)
- Topics: alphabeta, alphazero, chess, chess-engine, deepmind, mcts, neural-network
- Language: C++
- Homepage:
- Size: 700 KB
- Stars: 104
- Watchers: 13
- Forks: 21
- Open Issues: 16
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# FAQ
## What is Allie?
Allie is a chess engine heavily inspired by the seminal AlphaZero paper and the [Lc0](https://lczero.org "Lc0") project.
## How is she related to Leela?
Like Leela, Allie is based on the same concepts and algorithms that were introduced by DeepMind in the AlphaZero paper(s), but her code is original and contains an alternative implementation of those ideas. You can think of Allie as a young cousin of Leela that utilizes the same networks produced by the Lc0 project.
## Ok, so details. How is she different?
Well, I was inspired during the original CCC to see if you could pair traditional Minimax/AlphaBeta search with an NN. This is still her main purpose and the focus going forward. However, the initial versions used a pure MCTS algorithm similar to Lc0 and AlphaZero. The current versions of Allie use a modified hybrid search of Minimax and Monte Carlo.
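To make the general idea concrete, here is a minimal, hypothetical sketch of one way a minimax-style backup can be combined with an NN-guided, MCTS-style tree. It is only an illustration, not Allie's actual search code; the names (`Node`, `evaluateWithNN`, `selectChildPUCT`) are made up for the example, and expansion/batching are omitted.

```cpp
#include <algorithm>
#include <limits>
#include <memory>
#include <vector>

struct Node {
    float prior = 0.0f;   // policy prior from the NN, used by selection
    float value = 0.0f;   // negamax value from the side to move's point of view
    int visits = 0;
    std::vector<std::unique_ptr<Node>> children;
    bool isLeaf() const { return children.empty(); }
};

// Stub NN evaluation of a leaf position, value in [-1, 1] for the side to move.
// A real engine would run the network (ideally on a batch of positions) here.
float evaluateWithNN(const Node&) { return 0.0f; }

// Stub PUCT-style selection over priors, visit counts, and values.
// A real engine would add an exploration bonus; this stub just picks the first child.
Node* selectChildPUCT(Node& parent) { return parent.children.front().get(); }

// One playout: descend with NN-guided selection, evaluate the leaf with the NN,
// then back the value up minimax-style instead of averaging (expansion omitted).
float playout(Node& node) {
    ++node.visits;
    if (node.isLeaf()) {
        node.value = evaluateWithNN(node);
        return node.value;
    }
    playout(*selectChildPUCT(node));
    // Negamax backup: the node's value is the best score among visited children,
    // not the visit-weighted average that plain MCTS would use.
    float best = -std::numeric_limits<float>::infinity();
    for (const auto& child : node.children)
        if (child->visits > 0)
            best = std::max(best, -child->value);
    node.value = best;
    return node.value;
}
```

The point of the sketch is the backup step: each node keeps the best (negamax) value found among its visited children rather than an average, which is what lets alpha-beta-style reasoning coexist with NN-guided selection.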
Here is a non-exhaustive list of differences:
- UCI protocol code
- Input/Output
- Time management
- Board representation
- Move generation
- Zobrist keys (see the sketch after this list)
- Hash implementation
- The threading model
- Search algorithm
- Tree structure
- The multi-GPU scaling code
- Fpu-reduction
- Mate distance eval
- Testing framework and tests
- Debugging code
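As one small example from the list above (the Zobrist keys item), a textbook Zobrist table looks roughly like the following. The layout (12 piece types x 64 squares plus a side-to-move key) and the seed are illustrative only and are not taken from Allie's code.

```cpp
#include <array>
#include <cstdint>
#include <random>

struct Zobrist {
    // One random 64-bit key per (piece, square), plus one for the side to move.
    std::array<std::array<uint64_t, 64>, 12> pieceSquare;
    uint64_t sideToMove;

    Zobrist() {
        std::mt19937_64 rng(0xA11E5EEDULL);   // fixed seed so keys are reproducible
        for (auto& piece : pieceSquare)
            for (auto& square : piece)
                square = rng();
        sideToMove = rng();
    }
};

// A position's hash is the XOR of the keys of everything on the board;
// making or unmaking a move just XORs the affected keys in or out.
uint64_t togglePiece(uint64_t hash, const Zobrist& z, int piece, int square) {
    return hash ^ z.pieceSquare[piece][square];
}
```

XOR-ing keys in and out incrementally as moves are made and unmade is what makes such a hash cheap enough to drive the hash table mentioned above.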
## What bits are used from the Lc0 project?
Here is what Allie uses from the Lc0 codebase:
- Protocol buffers for an NN weights file
- Code for discovering/loading the NN weights file (see the sketch after this list)
- Backend code for GPU to get evaluations given an NN weights file
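As a rough illustration of what "discovering the NN weights file" means (see also the "Running Allie" section below, which expects the weights next to the binary), a few lines of Qt are enough to look for a candidate file. The `*.pb.gz` pattern and the function name are assumptions for this sketch; it does not reproduce the Lc0 or Allie code.

```cpp
#include <QCoreApplication>
#include <QDir>
#include <QString>
#include <QStringList>

// Assumes a QCoreApplication instance already exists (as in any Qt-based engine).
QString findWeightsFile() {
    QDir dir(QCoreApplication::applicationDirPath());
    // Assumed pattern: Lc0-style protobuf weights shipped as *.pb.gz.
    const QStringList candidates =
        dir.entryList(QStringList() << "*.pb.gz", QDir::Files, QDir::Name);
    if (candidates.isEmpty())
        return QString();              // caller can then report "no weights found"
    return dir.filePath(candidates.first());
}
```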
## All right, brass tacks: how strong is she?
She is among the strongest chess engines in the world as of November 2020.
## Why did you develop her rather than just help out Leela?
A couple of reasons. First, my original inspiration was to see if I could implement an alternative search using Minimax/AlphaBeta rather than MCTS. Second, I wanted to teach myself the AlphaZero concepts and algorithms, and this was the best way to do it. Allie now uses a hybrid Minimax/Monte Carlo search.
Also, I am contributing back some patches to Leela where appropriate.
## Ok, so she uses Lc0's networks. Why don't you make your own?
Two reasons. First, I couldn't hope to compete with the machine learning and AI experts who are contributing to the Lc0 project. Second, it would take a lot of compute power to train a new network and I just don't have that.
## Supporting further development of Allie
I've set up a Patreon page here: https://www.patreon.com/gonzochess75 and would greatly appreciate any support from the community; it will be used to test Allie on more multi-GPU systems.
## Anything else?
Yes, I'd like to wholeheartedly thank the developers of the Lc0 project, who make Allie possible. Only by standing on their shoulders could Allie exist. I'd also like to thank DeepMind for their work on AlphaZero, which started the whole NN-for-chess era. Finally, I'd like to thank Andrew Grant, who wrote the Ethereal engine whose code was a big inspiration for writing Allie, and CCC and TCEC for including the engine in their tournaments.
# Quick start guide (Linux)
## Prerequisites
You need:
- A working build environment: a compiler (e.g., GCC or Clang), binutils, and the rest of the usual tools.
- qmake (part of the Qt SDK)
- CUDA and cuDNN. If you can build and run Lc0 with the cuDNN backend, you're probably good.

## Building
The basic steps:
- `qmake` --- Generates the makefiles. Usually only needs to be run once.
- `make -j` --- Builds Allie.

If everything went well, you now have `bin/allie`.
If the build system cannot locate CUDA or you want to use a specific CUDA version, you can use `NVCC`, `CUDA_INC_DIR`, and `CUDA_LIB_DIR`. For instance:
`make -j CUDA_INC_DIR=/opt/cuda-10.1.168/include CUDA_LIB_DIR=/opt/cuda-10.1.168/lib64 NVCC=/opt/cuda-10.1.168/bin/nvcc`
Similarly, you can specify the C++ compiler:
`make -j CXX=clang++-7`
To clean up all the build temporaries:
`make clean`
There's also `make distclean` to clean up everything, including the targets and the generated makefiles. You need to re-run `qmake` after this command. If in doubt whether a change of compiler, CUDA version, or anything else has taken effect, use `make distclean`.
## Running Allie
- Put the network weights file in the same directory as the Allie binary. A symlink (`ln -s`) can be used.
- Launch `allie`. If everything went well, you'll see "allie" in stylized ASCII art along with the version information.
- You should now be ready to use Allie with your favorite UCI-compatible chess GUI.
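For reference, UCI is a plain-text protocol spoken over stdin/stdout, so a GUI session with any UCI engine looks roughly like the abridged exchange below. The exact `id` strings, options, and `info` output Allie reports are placeholders here, not her actual output.

```
GUI:     uci
Engine:  id name <engine name and version>
Engine:  id author <author>
Engine:  uciok
GUI:     isready
Engine:  readyok
GUI:     ucinewgame
GUI:     position startpos moves e2e4 e7e5
GUI:     go wtime 300000 btime 300000
Engine:  info depth 10 score cp 25 pv g1f3 ...
Engine:  bestmove g1f3
```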