So, I ran into a problem: the neural networks stopped making any progress whatsoever. After several rounds of stagnation, I even tried increasing the size of the data set (from 100,000 games played to 500,000) and the number of training epochs (up to 100 per round).
I stepped away from it for a few weeks, and I've decided to archive the current models and try something else. Every round, I'll generate a larger data set (I'm thinking 1,000,000 games played) and train completely new neural networks from scratch. This should also avoid situations where specific models get stuck in losing ruts (like what happened to my 4x models).
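The per-round plan sketched above can be laid out roughly like this. Everything here is a hypothetical stand-in (the helper names, the dummy data, the toy "network"), not the project's real code; the point is just the structure: fresh data, fresh network, no carry-over between rounds.

```python
import random

GAMES_PER_ROUND = 1_000_000  # the target data set size mentioned above

def generate_games(n):
    # Stand-in for self-play data generation; returns dummy game records.
    return [{"moves": random.randint(10, 60)} for _ in range(n)]

def new_network():
    # Stand-in for building a fresh, randomly initialized network.
    return {"weights": [random.random() for _ in range(4)], "trained_epochs": 0}

def train(net, data, epochs):
    # Stand-in for the training loop; just records how long it trained.
    net["trained_epochs"] += epochs
    return net

def run_round(games=1_000, epochs=15):
    data = generate_games(games)  # new, larger data set each round
    net = new_network()           # trained from scratch, so no model can
                                  # carry a losing rut into the next round
    return train(net, data, epochs)

net = run_round()
```

Because nothing survives from one round to the next, a model that falls into a bad pattern (like the 4x models did) simply gets thrown away with its round.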
I'll have to reconsider the number of training epochs, though. If I'm training from scratch, 15 epochs probably won't be enough, but training for as many as 100 epochs means a full round would take roughly two months.
I'll try one round like this and decide where to go based on the initial results. It might be time to scrap this set of models and start training a single, more complex model (with a layer or two of convolutions at the base).
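A minimal NumPy sketch of what "a layer or two of convolutions at the base" could look like. The 8x8 board encoding, the 3x3 kernels, and the single linear head are all placeholder choices of mine, not anything decided yet:

```python
import numpy as np

def conv2d(x, k):
    # Valid-mode 2D cross-correlation: slide kernel k across input x.
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
board = rng.random((8, 8))   # e.g. an 8x8 board encoded as a feature plane
k1 = rng.random((3, 3))      # first conv kernel
k2 = rng.random((3, 3))      # second conv kernel

# Two conv + ReLU layers at the base: 8x8 -> 6x6 -> 4x4 feature map.
feat = np.maximum(conv2d(np.maximum(conv2d(board, k1), 0), k2), 0)

# A single dense head on top, producing one evaluation score.
w = rng.random(feat.size)
score = float(feat.ravel() @ w)
```

The appeal of the conv base is that the same small kernels scan every board position, so the model can learn local patterns once instead of relearning them per square.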