Nothing. Larry, the neural network with a foundation of convolutional layers, has not progressed beyond winning 80% of games against the SRP. I've varied the size of the training set (100,000 to 500,000 games), as well as the ratio of wins to losses (from Larry winning 30% to winning 70% against the SRP). In theory, … Continue reading After two weeks of training…
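The post doesn't show Larry's architecture, but for a concrete picture of what a "foundation of convolutional layers" for Connect 4 might look like, here is a minimal PyTorch sketch. The two-plane 6x7 board encoding, the layer sizes, and the `Connect4ConvNet` name are all assumptions for illustration, not Larry's actual code.

```python
import torch
import torch.nn as nn

class Connect4ConvNet(nn.Module):
    """Illustrative guess at a small convolutional policy net for Connect 4.

    Input: (batch, 2, 6, 7) tensors, one plane per player's pieces.
    Output: one logit per column (7 possible moves).
    """

    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 6 * 7, 128), nn.ReLU(),
            nn.Linear(128, 7),  # one logit per column
        )

    def forward(self, board):
        return self.head(self.conv(board))

# Quick shape check with a dummy batch of empty boards.
logits = Connect4ConvNet()(torch.zeros(8, 2, 6, 7))
print(logits.shape)  # torch.Size([8, 7])
```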
So I ran a few dozen more rounds, and the models pretty much stopped learning. The best of them would win around 80% of their games against the expert system (SRP). Here's how each model performed against the SRP (5,000 games as black, 5,000 as red, for a total of 10,000 games each): The results are … Continue reading Out with the old, and in with the new
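For anyone wanting to reproduce those head-to-head numbers, the bookkeeping is just a win count over 5,000 games with each color. Below is a runnable sketch of that harness; `play_game` is a coin-flip placeholder standing in for an actual model-vs-SRP game, so only the counting logic reflects what the post describes.

```python
import random

GAMES_PER_COLOR = 5_000  # 5,000 as black + 5,000 as red = 10,000 games total

def play_game(model, opponent, model_plays_black):
    """Placeholder for a single Connect 4 game.

    The real version would pit the neural network against the SRP;
    this stub just flips a coin so the harness runs on its own.
    Returns True if `model` wins.
    """
    return random.random() < 0.5

def evaluate(model, opponent):
    wins = 0
    for model_plays_black in (True, False):      # once per color
        for _ in range(GAMES_PER_COLOR):
            if play_game(model, opponent, model_plays_black):
                wins += 1
    return wins / (2 * GAMES_PER_COLOR)          # overall win rate

print(f"win rate vs SRP: {evaluate('model', 'SRP'):.1%}")
```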
It's been about a month since my last update, so first, a refresher. I've been iteratively training a number of neural networks to play Connect 4. Every round, I would have them play a hundred thousand games, study those games to learn from them, and then have them play a round-robin tournament against each other. The … Continue reading Resetting the Models
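Spelled out as code, one round of that process would look roughly like the sketch below. The helper functions are deliberately trivial stand-ins (random winners, a no-op training step) so the loop runs on its own; they are not the project's actual self-play, training, or tournament code.

```python
import random

# Trivial stand-ins so the sketch runs by itself; the real versions would
# generate actual Connect 4 games, update the networks, and play full matches.
def play_batch_of_games(models, num_games):
    return [random.choice(models) for _ in range(num_games)]   # fake winners

def train_on(model, games):
    pass  # a supervised update on the recorded games would go here

def round_robin(models, games):
    return sorted(models, key=games.count, reverse=True)       # rank by fake wins

def run_round(models, games_per_round=100_000):
    games = play_batch_of_games(models, games_per_round)   # 1. play the games
    for model in models:
        train_on(model, games)                             # 2. learn from them
    return round_robin(models, games)                      # 3. tournament ranking

print(run_round(["model_a", "model_b", "model_c"], games_per_round=1_000))
```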
So, I ran into a bit of a problem. Basically, the neural networks stopped making any progress whatsoever. After several rounds of that, I even tried increasing the size of the data set (from 100,000 games to 500,000) and increasing the number of training epochs (up to 100 epochs per round). Nothing. So, I … Continue reading Progress Stalled