Darren's Computer Go Pages: sm9 Experiment
This page contains miscellaneous information about the sm9 experiment; see the forthcoming paper for more details. (For now this page is just material that did not fit in the paper; it will be organized more coherently soon.)
The human-computer team nature of the account has not been revealed, except to a couple of players ("Gregario", "nao"), who were asked to keep the team aspect of sm9 quiet. This was to avoid complaints, and to keep the experimental data as unbiased as possible.
The connection to Little Golem has come from the same static IP address for most of the experimental period, which could be verified by the webmaster. All played game records have been kept, with comments on the thought process and on all variations considered.
As an example of an early game, see this win against Bela Nagy from 2003 (a European 5-dan, and four ranks higher than sm9 at the time of the game): http://www.littlegolem.net/jsp/game/game.jsp?gid=59204 According to comments in our sgf file (http://dcook.org/compgo/59204.sgf), white 8 seemed good, "though a definite win cannot be found". We thought B6 was better for black 15. Before playing white 28 sm9 considered six different black responses. sm9's evaluation swung after black 33, thinking white was killable, but in the actual game white managed to live. Black 51 would have given black a half-point win, except that it left behind a seki. Studying the sgf file gives a good sense of the number and depth of the variations considered.
For the first few moves we generally choose the move with the most games played. Once we reach about move 4 we open the most recently studied previous game record and review the move comments. In particular, if that game ended in a loss we look for a comment identifying where we suspect the mistake was.
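The "most games played" selection above amounts to a simple frequency lookup over the collection of past game records. A minimal sketch, assuming the games are already available as lists of move coordinates (the function name and data layout are hypothetical, not the actual tooling):

```python
from collections import Counter

def most_played_move(games, moves_so_far):
    """Among recorded games that begin with moves_so_far, count the
    next move played in each, and return the most common one.
    games: list of move sequences, e.g. [["E5", "C3", ...], ...]."""
    n = len(moves_so_far)
    counts = Counter(
        g[n] for g in games
        if len(g) > n and g[:n] == moves_so_far
    )
    if not counts:
        return None  # out of book: fall back to studying past game records
    return counts.most_common(1)[0][0]

# Toy data, purely for illustration:
games = [["E5", "C3", "G5"], ["E5", "C3", "C7"],
         ["E5", "C3", "G5"], ["E5", "G4", "C5"]]
print(most_played_move(games, ["E5", "C3"]))  # G5 (played in two of three games)
```

Returning None signals the switch, around move 4, from book statistics to studying the annotated records.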
After playing in various tournaments in 2002, and winning the 2nd championship, the sm9 player then concentrated on just the championship. The reasons for this were: 1. The playing process is time-consuming; 2. It is the top division, so the eight opponents are very strong, and to have made their way to the top division means they are well-versed in 9x9 openings and 9x9 tactics; 3. The prestige attached to the championship should mean all players are trying as hard as they can. Taken together, these three reasons can be rephrased as: fewer games, studied intensively against the strongest available opponents, satisfies the experiment's goals best.
A 2.4GHz 4-core Linux machine has been used for the past couple of years (Many Faces runs under Wine); however, one core is used by another project, and the average load when not analyzing is 0.8. Many Faces uses elapsed (wall-clock) time rather than CPU time, so if Mogo and Fuego think at the same time it makes Many Faces weaker. We therefore try to stagger the thinking time where possible.
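The staggering amounts to giving the wall-clock-sensitive engine the machine to itself before the CPU-time-limited engines overlap. A toy sketch of that scheduling (engine names are stand-ins and `time.sleep` stands in for real engine thinking; this is not the actual harness):

```python
import threading
import time

results = {}

def analyze(name, seconds):
    """Placeholder for one engine's analysis of the current position."""
    time.sleep(seconds)  # stands in for real thinking time
    results[name] = name + " done"

# Many Faces charges itself by elapsed time, so run it alone first;
# Mogo and Fuego use CPU-time limits and can then think concurrently.
analyze("ManyFaces", 0.1)
threads = [threading.Thread(target=analyze, args=(n, 0.1))
           for n in ("Mogo", "Fuego")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # all three analyses complete
```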
A voting system is deliberately not used. It would be tempting to go with the majority view when two programs think one thing and one thinks the opposite. But this is burying one's head in the sand: a disagreement means something is not clear. It is much more effective to play out the prime variations until the side with the misunderstanding is discovered. Especially with disagreements in the early middle game, it is possible for two, or even all three, programs to be wrong.
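The "play out the prime variations" process can be sketched as stepping along a variation and re-querying each engine until their estimates converge; whichever engine changed its mind held the misunderstanding. Everything here (function names, the evaluation interface, the threshold) is a hypothetical illustration, not the actual workflow:

```python
def resolve_disagreement(pv, evaluations, threshold=0.10):
    """Walk a principal variation move by move, querying each engine's
    win-rate estimate for the resulting position, until the estimates
    agree to within `threshold`.
    pv: list of moves; evaluations: dict of name -> fn(moves) -> winrate."""
    scores = {}
    for i in range(len(pv) + 1):
        position = pv[:i]
        scores = {name: f(position) for name, f in evaluations.items()}
        if max(scores.values()) - min(scores.values()) <= threshold:
            return position, scores  # agreement reached at this point
    return pv, scores  # still disagreeing at the end of the variation

# Toy engines: "A" is steady at 60%, "B" flips once two moves are played.
evaluations = {"A": lambda pos: 0.60,
               "B": lambda pos: 0.30 if len(pos) < 2 else 0.55}
pos, scores = resolve_disagreement(["C3", "C6", "D7"], evaluations)
print(pos)  # ['C3', 'C6'] -- engine B's misunderstanding surfaced here
```

Note that "agreement reached" is exactly what a vote would hide: here the point of convergence pinpoints which side was wrong.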
See http://senseis.xmp.net/?RankWorldwideComparison for how ranks on KGS (as used for the Fuego/Many Faces estimate) compare with European and Japanese dans.
The final Little Golem 22.1.1 tournament results. The LG ranks can be misleading: those are not kyu-level players. For example, the player who came last with a 0-8 score (duvelman, aka Tom Croonenborghs) is actually a European 1-dan (see http://www.gofed.be/rating), and Corrin Lakeland (3-5 score) is a KGS 3-dan (see http://www.gokgs.com/graphPage.jsp?user=corrin).
We also tried "uct_param_policy nakade_heuristic 1" with Fuego. But at move 20 (90s, 3 cores) it still chooses D2 (64% to black), and at move 31 it still thinks 30% to white (though it chooses the correct move).
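Experiments like the one above are driven over GTP (the Go Text Protocol), which Fuego speaks on stdin/stdout: each command gets a reply starting "=" (success) or "?" (failure), terminated by a blank line. A minimal sketch of driving such a session from Python; the `fuego` binary invocation is left as a comment since only the response handling is exercised here:

```python
import subprocess

def parse_gtp_response(raw):
    """A GTP engine replies '= <result>' on success or '? <error>'
    on failure; return (success_flag, body)."""
    first = raw.strip().splitlines()[0]
    status, _, body = first.partition(" ")
    if status not in ("=", "?"):
        raise ValueError("not a GTP response: %r" % raw)
    return status == "=", body

def send(proc, command):
    """Send one GTP command and collect the reply up to the blank line."""
    proc.stdin.write(command + "\n")
    proc.stdin.flush()
    reply = []
    while True:
        line = proc.stdout.readline()
        if not line.strip():
            break
        reply.append(line.rstrip("\n"))
    return parse_gtp_response("\n".join(reply))

# Example session (assumes a `fuego` binary on the PATH):
#   proc = subprocess.Popen(["fuego"], stdin=subprocess.PIPE,
#                           stdout=subprocess.PIPE, text=True)
#   send(proc, "boardsize 9")
#   send(proc, "uct_param_policy nakade_heuristic 1")
#   send(proc, "genmove b")
```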
Future work: try to find an alternative move for move 21 that does clearly win. It may turn out that no move is correct here, and therefore that the programs cannot be berated for choosing D2. However, the author thinks that is unlikely, and that if black plays deeply in white's corner it will lead to a win. Incidentally, GNU Go chooses G3 in this position, and in self-play says it is 7.5 points to black (i.e. 9 points bigger than D2)!
Komi and Opening Move
sm9's last loss against the (3,4) opening was in October 2007. Nao favours it, and, despite being a Japanese 7-dan, struggles in the championship first division. Xaver, winner of the 22.1.1 championship, plays the (5,5) opening whenever he has black.
OGS uses a 6.5-point komi.
Another 9x9 opening study [*] also shows evidence that a 7.5-point komi gives white an advantage, and offers the explanation that any black move except tengen allows white to play mirror go successfully. However, that study only uses Mogo, with no special handling of Mogo's blind spots.
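Mirror go means replying point-symmetrically through tengen, which is why a first move on tengen is the only one that denies white the strategy. A quick illustration of the symmetry (a hypothetical helper, using the standard Go coordinate letters that skip "I"):

```python
def mirror_move(move, size=9):
    """Return the reflection of a move through the centre point (tengen)
    on a size x size board. Coordinates like 'C3'; letters skip 'I'."""
    letters = "ABCDEFGHJKLMNOPQRST"[:size]
    col = letters.index(move[0].upper())
    row = int(move[1:])
    return letters[size - 1 - col] + str(size + 1 - row)

print(mirror_move("C3"))  # G7: white's mirror reply on 9x9
print(mirror_move("E5"))  # E5: tengen is its own mirror, breaking the strategy
```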
*: Audouard, P., Chaslot, G., Hoock, J., Perez, J., Rimmel, A., and Teytaud, O. 2009. Grid Coevolution for Adaptive Simulations: Application to the Building of Opening Books in the Game of Go. In Proceedings of EvoWorkshops 2009: Applications of Evolutionary Computing (Tübingen, Germany, April 15-17, 2009). Lecture Notes in Computer Science, vol. 5484. Springer-Verlag, Berlin, Heidelberg.
Formalizing The Process
A similar approach for chess is reported to be 150 Elo stronger than its strongest member: http://computer-go.org/pipermail/computer-go/2006-April/005257.html
© Copyright 2010 Darren Cook <email@example.com>