
9x9 Go Experiments

What is the strongest opening move on the 9x9 board?
and
What is the correct komi to play 9x9 with?

Curiosity is a strange thing. I've been asking myself these two closely related questions for at least 10 years. Curiosity has turned to obsession. Over the years I've built up a huge database of 9x9 games between strong players, and I think from analyzing these we can learn something useful.

(Note: if you want to be notified of updates to the data please send me an email with "compgo" in the subject, and I'll add you to an announce mailing list. And if you are actually using any of this data please let me know which files and how: knowing that the data is of use to someone is a great motivator!)

License

All the data and scripts here are supplied under an open source MIT license. What that means is you are free to use any of it in both open source and commercial programs. Attribution is nice but not legally required. Supplying bug fixes and more game data (must be under same license) is also encouraged, but again not legally required. I'm a strong believer in the importance of this kind of license, but won't bore you with the arguments here.

Game Database

input_positions.tgz (13.5 MB)
input_positions.zip (13.1 MB)
Last Update: 2010-10-21

A set of games, mainly between strong players. For games I analyzed myself, all the variations are included. When a program has analyzed a game and suggested moves, those are included too.

To avoid any possible copyright issues, these are supplied as a set of positions. (I also want to supply them as one huge sgf file with all the variations merged, but I've not managed to track down any software that will do this; I have a utility that almost does what I need, so at some point I will adapt that and get the big sgf file made.)

Note: the positions are supplied with duplicates removed. As of October 2010 there are just over 900,000 positions.

What is this data useful for? The main use is in conjunction with the analyzed positions data (described next): this file gives the board positions, and the analyzed positions file gives an estimate of the board score of each. You may also simply want to analyze them to find common patterns (note that next moves are not marked). Any other ideas? Let me know.

Analyzed Positions

analyzed_positions.tgz (9.5MB)
analyzed_positions.zip (9.5MB)
Last Update: 2010-10-21

I have been using a strong MCTS program, Fuego, to analyze and score each position in the game database described above.

All positions are included, whether they are from the opening, a complex middle-game fight, or an endgame position where even a moron can work out the winner. At the time of writing 900,000 positions have been analyzed.

What is this data useful for? These scores can be put in a minimax tree to get a score and principal variation (or, more likely, a set of principal variations). It won't be a proof, but it will give a strong indication of the best move(s) in any given position. And because it can be turned into an opening book, people can try to refute the main lines. Those refutation attempts (successful or not) can then be analyzed in the same way. Please submit game records for any win against a program using the opening book.

As an aside, I think it would be very interesting to publish the names of people who discover refutations, along with the position and the move they discovered. If popular we could even set up a high-score table of people who have discovered the most. (Thanks to Jeff Nowakowski for that idea.)

The data is also useful for pattern extraction: for each position it will tell you moves that win and moves that lose. (It is not ideal, as there will also be a host of other moves that are not considered; but for any position that occurs, say, 10+ times you could lump unconsidered moves in with the known bad moves, as in the sketch below.)
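
As a rough sketch of that lumping rule (the data structures and exact threshold here are just illustrative; nothing like this is in the published files):

  <?php
  // Illustrative sketch only. $analyzedMoves maps a move (e.g. "C3") to true
  // if it wins for the side to play and false if it loses; $occurrences is
  // how often the position appears in the game database; $allLegalMoves is
  // every empty point.
  function label_moves(array $analyzedMoves, int $occurrences, array $allLegalMoves): array {
      $labels = [];
      foreach ($allLegalMoves as $move) {
          if (array_key_exists($move, $analyzedMoves)) {
              $labels[$move] = $analyzedMoves[$move] ? 'win' : 'lose';
          } elseif ($occurrences >= 10) {
              // Common position: lump unconsidered moves in with the known bad moves.
              $labels[$move] = 'lose';
          } else {
              $labels[$move] = 'unknown';
          }
      }
      return $labels;
  }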

File Format

The file format of both the input positions and the analyzed positions is binary, and is described on the file format page.

Analyzed Positions: The Algorithm

I run the analysis at each plausible komi and look for the point where the position switches between being a win for black and a win for white. For instance, suppose the MCTS program gives these black win rates (black to play next in the position) at these komi:

  • 4.5: 80%
  • 5.5: 35%
  • 6.5: 30%
  • 7.5: 30%
Then it tells me the score for that position is between 4.5 and 5.5, i.e. 5. I also get a confidence measure: the distance from 50% on each side of the switch. Here it is 30 + 15 = 45, which is quite high confidence.

As another example:

  • 4.5: 53%
  • 5.5: 55%
  • 6.5: 51%
  • 7.5: 54%
This is telling me black wins even at 7.5 komi, so we say this position is worth 8+. But the confidence level is only 8 (54-50, times 2), meaning we are not very confident about this result, and any suggested moves would go into the high-urgency queue described below.
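
As a sketch of that bracketing (this follows the two worked examples above; the function and variable names are mine, and it is not the exact code used to build the published data):

  <?php
  // $points: list of [komi, black win rate %] pairs, sorted by komi
  // ascending, with black to play next in the position.
  function bracket_score(array $points): array {
      $n = count($points);
      for ($i = 0; $i < $n - 1; $i++) {
          [$komi, $rate]         = $points[$i];
          [$nextKomi, $nextRate] = $points[$i + 1];
          // The score lies where the result flips from a black win (>50%)
          // to a white win (<50%).
          if ($rate > 50 && $nextRate < 50) {
              return [
                  'score'      => ($komi + $nextKomi) / 2,          // e.g. (4.5 + 5.5) / 2 = 5
                  'confidence' => ($rate - 50) + (50 - $nextRate),  // e.g. 30 + 15 = 45
              ];
          }
      }
      [$lastKomi, $lastRate] = $points[$n - 1];
      if ($lastRate > 50) {
          // Black wins even at the largest komi tried: "largest komi + 0.5 or more".
          return ['score' => $lastKomi + 0.5, 'confidence' => ($lastRate - 50) * 2]; // e.g. (54-50)*2 = 8
      }
      // White wins even at the smallest komi tried.
      [$firstKomi, $firstRate] = $points[0];
      return ['score' => $firstKomi - 0.5, 'confidence' => (50 - $firstRate) * 2];
  }

  // First worked example above: score 5, confidence 45.
  print_r(bracket_score([[4.5, 80], [5.5, 35], [6.5, 30], [7.5, 30]]));
  // Second worked example: score 8 (i.e. 8+), confidence 8.
  print_r(bracket_score([[4.5, 53], [5.5, 55], [6.5, 51], [7.5, 54]]));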

I also have the programs suggest moves at each komi (in fact sometimes more than one move - see below), and they are added to the queue to be analyzed. So if my game collection is missing a key move there is a chance it will be discovered. I split moves into three queues: high urgency (i.e. low confidence positions), mid urgency and low urgency. The move suggestions in the high-urgency queue get analyzed first. (Four weeks of analysis generates a high-urgency queue that takes three weeks to analyze, so I hardly ever even get to the mid-urgency queue; if you have spare cores and want to help out, let me know!)

Fuego is given 3,000 to 63,000 playouts on each position. I did a lot of tests and found that in many positions it either understood the position at 300 playouts, or it still didn't get it at 10,000 or sometimes even 100,000 playouts. Obviously it gives more accurate results the closer we get to the endgame, so I decided to rely on the minimax tree to bring good values up from the leaf nodes.

Initially a position gets just 1,000 playouts. Then it is tried again with 2,000 playouts. If the win rate is over 85% (or under 15%) and the same move is suggested both times, it stops there. Otherwise it keeps doubling, up to 32,000 playouts (with some other early termination criteria). The final result is a weighted average (so for low confidence positions it will have done 63,000 playouts in total), and the move suggestions are the set of moves suggested at each playout level. The main point of this complexity was to allow early termination, and therefore save some CPU on clear win/loss positions. But another benefit is that it gives a greater number of move suggestions.
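
A sketch of that doubling schedule, assuming a hypothetical analyze() callback that wraps a GTP call to the engine and returns the black win rate (0-100) and a suggested move for a given number of playouts (the exact weighting and the "other early termination criteria" are not modelled here):

  <?php
  function analyze_with_schedule(string $positionId, callable $analyze): array {
      $totalPlayouts = 0;
      $weightedSum   = 0;
      $moves         = [];
      $prevMove      = null;

      for ($playouts = 1000; $playouts <= 32000; $playouts *= 2) {
          [$winRate, $move] = $analyze($positionId, $playouts);
          $moves[$move]   = true;                    // collect one suggestion per level
          $weightedSum   += $winRate * $playouts;    // weight each result by its playouts
          $totalPlayouts += $playouts;

          // Early termination: a clear result (over 85% or under 15%) with the
          // same move suggested at two consecutive playout levels.
          if ($prevMove === $move && ($winRate > 85 || $winRate < 15)) {
              break;
          }
          $prevMove = $move;
      }

      return [
          'winRate'  => $weightedSum / $totalPlayouts,  // weighted-average win rate
          'playouts' => $totalPlayouts,                 // at most 1000+2000+...+32000 = 63,000
          'moves'    => array_keys($moves),             // the distinct move suggestions
      ];
  }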

Analyzed Positions: Future Work

I have ManyFaces set up to repeat the experiment under Japanese rules, and will soon be ready to start Mogo and Valkyria. All I need is more spare cores. If you are interested in running experiments on your hardware please get in touch. It is also possible to build Fuego in a way that is weaker overall but better at seki and nakade; repeating the analysis with that version would also be of interest.

I have a minimax script running over the scored positions. But I think I have a bug, as it gives some strange results. I've not had time to get back to it, but I'm hoping someone will be interested enough to take the data and put it into a tree.

Bot Comparison

I've a script that asks a set of programs for their thoughts about a position. Just one komi, one ruleset, but some serious thinking time. The programs frequently disagree. For example, Many Faces and Valkyria may say it is a win for white with 60% confidence; Mogo may think it is 49% to white (i.e. it thinks it is good for black, but only just); and Fuego might think 24% to white.

TODO: explain how the position is played out until all programs agree. TODO: give the number of positions I have. TODO: explain why this information is useful.

Scripts

analysis_scripts.tgz (coming soon)
analysis_scripts.zip (coming soon)
Last Update: N/A

The PHP scripts that do all this analysis are available too. They are a bit rough, with some hard-coded paths, and may be quite complex to get working. I'll improve this situation when I have time (and someone is serious about helping with the analysis).


FAQ

#1a: How Do I View The Evaluation (e.g. in gogui)?

Short Answer: You can't easily; this is one of the tools I'm hoping people will write, now that the data is out there and open source.

Long Answer (this is just one approach): Load the scores (e.g. fuego_chinese.dat) into memory, making a mapping from PositionID to PositionScore. Now put a front-end on this that takes a query position. Compute the PositionID for that query position, look up its PositionScore object, and return that score. (Optionally, also load the input positions file into memory, making a mapping from PositionID to PositionDB. Then validate that the query position matches the board position we have; if not, there is either a bug or a Zobrist hash clash.)
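
A minimal sketch of that lookup, assuming two hypothetical helpers: read_scored_positions(), which parses the binary .dat file as per the file format page, and position_id(), which computes the Zobrist-based PositionID for a board:

  <?php
  // Sketch only: read_scored_positions() and position_id() are hypothetical
  // helpers, not part of the published scripts.
  $scores = [];  // PositionID => PositionScore
  foreach (read_scored_positions('fuego_chinese.dat') as $entry) {
      $scores[$entry['positionId']] = $entry;  // e.g. score, confidence, playouts
  }

  function lookup_score(array $scores, array $board): ?array {
      $id = position_id($board);   // Zobrist hash of the query position
      return $scores[$id] ?? null; // null if the position was never analyzed
  }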

Note: this will just give you the raw analysis value; the same value Fuego (with a mere 50,000 playouts) will give you. The real value of the database comes from making a minimax tree of all the scored positions, and returning that minimax value.

Note: the "php analyze_position_dat.php view" command can turn the input positions into ascii board positions. The "php read_scored_positions.php" command turns the score file into human-readable output. But these commands are really just for troubleshooting.

#1b: How Do I View The Score Of All Followup Moves For A Position?

The published data contains no links between positions. The previous answer explains how to get the score for a position. So, to get all next moves, first make the base PositionID, then modify that PositionID for playing each empty point on the board, and look each one up to see if it is in the database. This gives you a set of moves and scores you can display.
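
A sketch of that loop, reusing the hypothetical position_id() from #1a plus play_move(), which returns the board after playing a point (the published PHP scripts have no go logic, so this part is an assumption):

  <?php
  // Sketch only: play_move() and position_id() are assumed helpers.
  function followup_scores(array $scores, array $board, array $emptyPoints): array {
      $result = [];
      foreach ($emptyPoints as $point) {
          $child = play_move($board, $point);   // board after playing this point
          $id    = position_id($child);
          if (isset($scores[$id])) {
              $result[$point] = $scores[$id];   // move => its PositionScore
          }
      }
      return $result;                           // only moves present in the database
  }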

(The PHP scripts do not have any go logic, so they cannot make follow-up position codes on the fly; instead I have another script that makes all the possible follow-up codes for each input position. The analysis scripts can then use this. That file is about 500MB, so it is not being published!)

#2: How To Make The Minimax Of Scores?

As mentioned in answer #1a above, making a minimax of the scores is the whole point of the project. One way (the only way?) is to take the approach of #1b, recursively from the root until you reach the leaves, then pass the values up as minimax (or alpha-beta). I'd suggest caching the result of each node after all its children have been considered, but if your code to make Zobrist codes and do lookups is quick enough then there may be no need.
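
One possible shape for that recursion, building on the sketches in #1a and #1b. It assumes the scores are numeric and from black's point of view (black maximizes, white minimizes); check the file format page for the real sign convention:

  <?php
  // Sketch only, reusing followup_scores(), play_move() and position_id()
  // from the earlier answers.
  function minimax_score(array $scores, array $board, array $emptyPoints,
                         bool $blackToPlay, array &$cache): float {
      $id = position_id($board);
      if (isset($cache[$id])) {
          return $cache[$id];                   // node already evaluated
      }

      $children = followup_scores($scores, $board, $emptyPoints);
      if (count($children) === 0) {
          // Leaf: no analyzed follow-ups, so fall back to this position's own raw score.
          $value = (float)($scores[$id]['score'] ?? 0.0);
      } else {
          $value = $blackToPlay ? -INF : INF;
          foreach (array_keys($children) as $point) {
              $childBoard = play_move($board, $point);
              $childEmpty = array_values(array_diff($emptyPoints, [$point]));
              $childValue = minimax_score($scores, $childBoard, $childEmpty, !$blackToPlay, $cache);
              $value = $blackToPlay ? max($value, $childValue) : min($value, $childValue);
          }
      }

      $cache[$id] = $value;                     // cache after all children have been considered
      return $value;
  }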

I started doing this in PHP, but the code is a bit ugly and I got some strange results. I have not had a chance to get back to that script and find out where the bugs are.


© Copyright 2010 Darren Cook <darren@dcook.org>