I’m still tracking the number of games won during the tournament, but since those results are relative to each other it’s hard to tell whether any absolute progress is being made. So, as previously reported, I started playing each model against the Smart Random player for 100 games and recording how often the NN won. This lets me see absolute improvements, whereas the tournament only gives me relative rankings.
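The benchmarking loop itself is simple. A minimal sketch is below; the actual game driver isn’t shown in this post, so `play_game` is a hypothetical stand-in for it:

```python
import random

def benchmark_vs_random(play_game, n_games=100, seed=0):
    """Play n_games against the Smart Random player and count model wins.

    play_game(rng) is a hypothetical stand-in for the real game driver
    (not shown here); it returns True when the model wins one game."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_games) if play_game(rng))

# Toy stand-in for illustration: a "model" that wins roughly 30% of its games.
wins = benchmark_vs_random(lambda rng: rng.random() < 0.30)
print(wins, "wins out of 100")
```

Because the opponent is fixed, the same model can be re-benchmarked after every training round and the win counts compared directly across rounds.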
(The numbers above are averages; for instance, the group of single-layered networks won, on average, 26 games out of 100 in the 11th round, and 29 out of 100 in the 15th.)
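The grouping is just an average of win counts over all models sharing a depth. A sketch with made-up numbers (the real figures are in the table above):

```python
from statistics import mean

# Illustrative numbers only -- two hypothetical models per depth group,
# each benchmarked for 100 games against the Smart Random player.
wins_by_depth = {
    1: [24, 28],  # single-layered networks
    2: [20, 22],
}
averages = {depth: mean(ws) for depth, ws in wins_by_depth.items()}
print(averages)
```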
The previous trend of favoring deep networks is completely reversed here. At every step, the shallower networks were the clear favorites. We also see an interesting peak in quality at the 13th round; the 14th and 15th actually regressed.
Here, again, we see a different trend from before. Whereas before the wider networks merely did less poorly, here they developed a clear advantage. There does seem to be a sweet spot between 500 and 1000 nodes wide, but it isn’t terribly pronounced.
Now, let’s see how this compares to games won not against the Smart Random player, but against the other networks:
| Depth | Round 11 | Round 12 | Round 13 | Round 14 | Round 15 |
|-------|----------|----------|----------|----------|----------|
| Width | Round 11 | Round 12 | Round 13 | Round 14 | Round 15 |
|-------|----------|----------|----------|----------|----------|
Again, this is the number of games won out of 100. What we see here is interesting… the bias towards deeper, narrower models is back.
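For reference, these head-to-head numbers come from tallying wins across every pairing in the tournament. A minimal round-robin sketch, using a coin-flip stand-in for the real match driver:

```python
import random
from itertools import combinations

def round_robin_wins(models, play_game, games_per_pair=2, seed=0):
    """Tally head-to-head wins across every pairing of models.

    play_game(a, b, rng) is a hypothetical stand-in for the real match
    driver; it returns the name of the winning model."""
    rng = random.Random(seed)
    wins = {m: 0 for m in models}
    for a, b in combinations(models, 2):
        for _ in range(games_per_pair):
            wins[play_game(a, b, rng)] += 1
    return wins

# Toy stand-in: coin-flip games between three hypothetical models.
tally = round_robin_wins(["deep", "wide", "shallow"],
                         lambda a, b, rng: a if rng.random() < 0.5 else b)
print(tally)
```

The key difference from the Smart Random benchmark is that every win here comes at another network’s expense, so these scores only ever measure relative strength.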
Now this is perplexing. The networks that do well against the Smart Random player do worse against other networks. That’s… fascinating.