Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/khazhyk/narow
naroawasdasfdasfasdfgas
- Host: GitHub
- URL: https://github.com/khazhyk/narow
- Owner: khazhyk
- Created: 2015-01-19T20:37:23.000Z (about 10 years ago)
- Default Branch: master
- Last Pushed: 2015-01-28T03:33:45.000Z (almost 10 years ago)
- Last Synced: 2023-08-02T02:42:35.518Z (over 1 year ago)
- Language: Java
- Size: 1.16 MB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README
Awesome Lists containing this project
README
CS4341 - Project 1 Games
Professor Neil Heffernan
Project Report
===========================
* Lillian Walker
* Khazhismel Kumykov

Binary Information
===========================
Binary Name: lgwalker_kkumykov_player.jar

How To Run:
"java -jar lgwalker_kkumykov_player.jar"

Source Layout:
===========================
Packages:
narow.formats: includes info about the formats of referee communications and our data structures
narow.heuristics: includes our evaluation functions
narow: includes the core stuff

Important Files:
App: contains the main() function. Handles figuring out whether we go first and the config, and passes received moves to PlayerState
BoardState: our knowledge representation for the board and whose turn it is
ID_DFS: our implementation of iterative deepening depth-first search using MiniMax and Alpha-Beta pruning. It will continue to search until you kill it, storing its best result along the way. We run the actual search on its own thread, then mercilessly kill it when we're out of time. (A rough sketch of this idea follows the list.)
PlayerState: keeps info about whether popouts were used, what our name is, the config, etc. Updates BoardState upon receiving moves and making moves.
CountNARowsCalculator: the evaluation function we ended up going with as the default. This was the best performer. You can still try out the other evaluation functions, but not in the tournament please :)

We also have some tests that rely on JUnit. JUnit comes with Eclipse, but if you don't have it, you don't have to compile the files in "test".
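
To make the ID_DFS description above concrete, here is a minimal sketch of iterative-deepening negamax (a symmetric formulation of MiniMax) with alpha-beta pruning running on its own thread, storing the best move found so far where the main thread can grab it when time runs out. The `Board` interface and class names below are illustrative, not the actual classes shipped in the jar.

```java
// Minimal sketch of the search described above: iterative deepening with
// negamax (a symmetric formulation of MiniMax) and alpha-beta pruning, run on
// a worker thread that records its best move after each completed depth.
// All names here are illustrative; this is not the real narow API.
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

interface Board {
    List<Integer> legalMoves();   // drops (and pop-outs) available in this position
    Board play(int move);         // the board after applying the move (turn passes to the opponent)
    boolean isTerminal();         // someone connected N, or the board is full
    int evaluate();               // heuristic value from the point of view of the player to move
}

class IterativeDeepeningSearch implements Runnable {
    private final Board root;
    // Best move found so far; the main thread reads this when the time budget expires.
    final AtomicReference<Integer> bestMove = new AtomicReference<>();

    IterativeDeepeningSearch(Board root) { this.root = root; }

    @Override
    public void run() {
        // Keep deepening until we are told to stop; each completed depth overwrites bestMove.
        for (int depth = 1; !Thread.currentThread().isInterrupted(); depth++) {
            Integer best = null;
            int bestScore = Integer.MIN_VALUE;
            for (int move : root.legalMoves()) {
                int score = -negamax(root.play(move), depth - 1,
                                     Integer.MIN_VALUE + 1, Integer.MAX_VALUE);
                if (best == null || score > bestScore) { bestScore = score; best = move; }
            }
            if (best != null) bestMove.set(best);
        }
    }

    // Negamax with alpha-beta pruning; scores are always from the mover's perspective.
    // (Assumes evaluate() returns values well inside the int range.)
    private int negamax(Board b, int depth, int alpha, int beta) {
        if (depth == 0 || b.isTerminal() || Thread.currentThread().isInterrupted()) {
            return b.evaluate();
        }
        int best = Integer.MIN_VALUE + 1;
        for (int move : b.legalMoves()) {
            best = Math.max(best, -negamax(b.play(move), depth - 1, -beta, -alpha));
            alpha = Math.max(alpha, best);
            if (alpha >= beta) break;   // prune: the opponent would never allow this line
        }
        return best;
    }
}
```

The main-thread side would then do roughly: start the thread, sleep for the move time limit, interrupt the search, and play whatever bestMove holds.
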
Experiment:
===========================
We tested our different heuristics against each other in a little mini tournament. (In other testing we also tested modifications to the heuristics in a similar manner.)
We set up the jar so you can specify an eval function to use and see which one picks better moves
(java -jar player.jar [0,1,2]). CountN is 0, PossibleN is 1, HVal2 is 2. We're fairly confident in our ID-DFS, minimax, and AB pruning, and any efficiency increases in there *might* help, but the heuristic is what we wanted to test because it plays a huge role in the early game and potentially has the highest effect.

The contenders:
CountN: This counts the number of 1,2,3,...,n-in-a-rows for each player, then weighs them, giving longer rows cubically more weight. It also takes into account whose turn it is when calculating the value. (A rough sketch of this follows the list.)
PossibleN: This counts the number of "possible" n-in-a-rows still available to us on the board.
HVal2: This counts the number of open rows/columns/diagonals.
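
As a rough illustration of the CountN idea only (the real CountNARowsCalculator may differ in its details), the sketch below counts each player's runs in the four directions, weights a run of length k by k^3, and nudges the score toward whoever has the move. The class name and the tempo bonus of 1 are made up for the example.

```java
// Illustrative sketch of a CountN-style evaluation: a run of length k is worth k^3.
// This is not the actual narow.heuristics code, just the idea it describes.
final class CountNSketch {
    /**
     * @param cells   board cells, cells[row][col]: 0 = empty, 1 = us, 2 = opponent
     * @param ourTurn whether it is our turn to move
     * @return positive scores favor us, negative favor the opponent
     */
    static int evaluate(int[][] cells, boolean ourTurn) {
        int score = scoreRunsFor(cells, 1) - scoreRunsFor(cells, 2);
        // Crude tempo nudge: the side to move gets to extend its best run first.
        return ourTurn ? score + 1 : score - 1;
    }

    private static int scoreRunsFor(int[][] cells, int player) {
        int rows = cells.length, cols = cells[0].length, total = 0;
        int[][] dirs = {{0, 1}, {1, 0}, {1, 1}, {1, -1}};   // right, down, both diagonals
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (cells[r][c] != player) continue;
                for (int[] d : dirs) {
                    // Only start counting from the first cell of a run, to avoid double counting.
                    int pr = r - d[0], pc = c - d[1];
                    if (pr >= 0 && pr < rows && pc >= 0 && pc < cols && cells[pr][pc] == player) continue;
                    int len = 0, rr = r, cc = c;
                    while (rr >= 0 && rr < rows && cc >= 0 && cc < cols && cells[rr][cc] == player) {
                        len++; rr += d[0]; cc += d[1];
                    }
                    total += len * len * len;               // cubic weight on run length
                }
            }
        }
        return total;
    }
}
```

The cubic weight is what makes one long run worth far more than several short ones.
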

The Results:
Testing with 1 second timeout (so most placements are heuristic):
CountN dominates HVal2
CountN dominates PossibleN
PossibleN dominates HVal2

We also noticed an interesting thing: PossibleN and HVal2 both tend to use their pop-out really early. We think this is a side effect of the order in which we enumerate moves - since in their evaluations a piece for US and an empty space are equal, they find that all moves are equal. Popping out the bottom-right piece is the last move we test, so it makes sense that if all moves are found to be "equal", that is the move it picks.
Outside of this little tournament, we also played each of our different evaluation functions and modifications against each other to see whether one could achieve dominance, and how they won/lost/tied. (For example "java -jar player.jar 0" vs "java -jar player.jar 1".) An interesting thing we saw was that if a player knew it was going to lose (assuming the opponent played perfectly), it would give up and end up always placing in the last possible move (all moves being equally bad, the last column is the last move we evaluate). We modified the evaluation just slightly to favor losing in more turns over losing immediately, in case the opposing player _doesn't_ play perfectly. Worst case scenario, we're exactly where we were before. Best case, we won when we shouldn't have. :) (A rough sketch of that tweak follows.)
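
For what "favor losing in more turns" can look like in practice, here is an illustrative sketch: score a forced win higher the sooner it arrives, and a forced loss less negative the later it arrives, so the search drags a lost game out. The class name and constant are made up; only the relative ordering of scores matters.

```java
// Illustrative only: depth-aware terminal scores so that a forced loss found deeper
// in the search scores better (less negative) than an immediate loss, and a forced
// win found sooner scores better than a later one.
final class TerminalScores {
    static final int WIN = 1_000_000;   // made-up magnitude, just larger than any heuristic value

    static int score(boolean weWin, int ply) {
        // ply = how many moves into the search the terminal position was found.
        return weWin ? WIN - ply : -WIN + ply;
    }
}
```
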