The neural network I'm currently training, Larry, had previously increased his effectiveness to 90% against the SRP. A few dozen more rounds, however, decreased his effectiveness to the point that it reverted to an 80% rating. That's unfortunate. On a lark, I started a new model on the side. This one was trained entirely by … Continue reading Reversion to the mean, and a new method
As I wrote previously, I had run into the limit of how much my neural networks could improve by playing against purely random opponents (or even random opponents with shortcuts, such as always taking a winning move when one is available), because the signal-to-noise ratio of those players was simply too low. If a neural network … Continue reading Dramatic improvements
CNTK is a great library for working with neural networks, but its power and flexibility come at the cost of complexity. Sometimes you just want to hand some data to a trained model and get back an answer. I wrote a little wrapper that I use for just such a purpose and thought I'd share it … Continue reading C# Wrapper for CNTK Evaluation
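The wrapper itself is in the linked post; as a rough illustration only, a minimal evaluation helper built on CNTK's C# API might look like the sketch below. The class and method names here are hypothetical, not the post's actual wrapper, and it assumes a single-input, single-output model saved to disk.

```csharp
using System.Collections.Generic;
using CNTK;

// Hypothetical minimal wrapper (not the post's actual code): load a trained
// model once, then feed it flat float arrays and get the output back.
public class ModelEvaluator
{
    private readonly Function _model;
    private readonly DeviceDescriptor _device = DeviceDescriptor.CPUDevice;

    public ModelEvaluator(string modelPath)
    {
        // Load the trained model from disk onto the chosen device.
        _model = Function.Load(modelPath, _device);
    }

    public IList<float> Evaluate(float[] inputData)
    {
        // Assumes the model has exactly one input and one output.
        Variable inputVar = _model.Arguments[0];
        Variable outputVar = _model.Output;

        // Wrap the raw floats as a single-sample batch shaped like the input.
        Value inputValue = Value.CreateBatch(inputVar.Shape, inputData, _device);
        var inputs = new Dictionary<Variable, Value> { { inputVar, inputValue } };

        // A null Value asks CNTK to allocate the output for us.
        var outputs = new Dictionary<Variable, Value> { { outputVar, null } };

        _model.Evaluate(inputs, outputs, _device);

        // Dense data comes back as one list per sample; return the first.
        return outputs[outputVar].GetDenseData<float>(outputVar)[0];
    }
}
```

The point of a helper like this is that callers never touch `Variable`, `Value`, or `DeviceDescriptor`; they just pass an array and read back floats.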
Nothing. Larry, the neural network with a foundation of convolutional layers, has not progressed beyond winning 80% of games against the SRP. I've varied the size of the training set (100,000 to 500,000 games), as well as the ratio of wins to losses (from Larry winning 30% to winning 70% against the SRP). In theory, … Continue reading After two weeks of training…