The model I was training solely by playing against me simply wasn't improving quickly enough. A few thousand games in, I realized how hopeless the idea of hand-generating training data was, and gave up. I tried a new model, named Curly, which consisted of a single convolutional layer followed by a fully connected layer of … Continue reading I haven’t lost… quite.
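The "single convolutional layer followed by a fully connected layer" shape can be sketched in plain numpy. This is only a minimal illustration of that architecture, not the author's actual model: the board encoding, 3x3 board size, filter count, and ReLU activation are all assumptions made for the sketch.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid-mode 2D convolution: x is (H, W), kernels is (K, kh, kw)."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

rng = np.random.default_rng(0)
board = rng.choice([-1.0, 0.0, 1.0], size=(3, 3))  # hypothetical 3x3 board encoding
kernels = rng.standard_normal((4, 2, 2))           # one conv layer: 4 small filters
W_fc = rng.standard_normal((9, 4 * 2 * 2))         # fully connected layer -> 9 move scores

hidden = np.maximum(conv2d(board, kernels), 0.0)   # ReLU (an assumption)
scores = W_fc @ hidden.ravel()                     # one score per board cell
print(scores.shape)  # (9,)
```

The conv layer sees local 2x2 patterns on the board; the fully connected layer combines them into a score for each possible move.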
So the neural network I'm currently training, Larry, had previously increased his effectiveness to 90% against the SRP. Unfortunately, a few dozen more rounds decreased his effectiveness to the point that he reverted to an 80% rating. That's unfortunate. On a lark, I started a new model on the side. This one was trained entirely by … Continue reading Reversion to the mean, and a new method
As I wrote previously, I had run into the limit of how much my neural networks could improve by playing against purely random opponents (or even random opponents with shortcuts, such as always taking a winning move when one is available), because the signal-to-noise ratio of those players was simply too low. If a Neural Network … Continue reading Dramatic improvements
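A "random opponent with shortcuts" of the kind described, one that takes an immediately winning move if available and otherwise moves at random, can be sketched as below. The 3x3 tic-tac-toe board and its representation are assumptions for the sake of the sketch, not details from the posts.

```python
import random

# The eight winning lines on a 3x3 board (an assumed game for this sketch).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def wins(board, player, cell):
    """Would placing `player` at `cell` complete a line?"""
    b = board[:]
    b[cell] = player
    return any(all(b[i] == player for i in line) for line in LINES)

def shortcut_random_move(board, player):
    """Random player with a shortcut: take a winning move if one exists,
    otherwise pick any empty cell uniformly at random."""
    empty = [i for i, c in enumerate(board) if c == '']
    for cell in empty:
        if wins(board, player, cell):
            return cell
    return random.choice(empty)

# X has two in a row and can win by taking cell 2:
board = ['X', 'X', '', 'O', 'O', '', '', '', '']
print(shortcut_random_move(board, 'X'))  # 2
```

Even with the shortcut, most of this player's moves are uniform noise, which is one way to see why it provides so little training signal once the network passes a basic level of play.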
CNTK is a great library for using neural networks, but its power and flexibility come at the cost of complexity. Sometimes you just want to hand some data to a trained model and get back an answer. I wrote a little wrapper that I use for just such a purpose and thought I'd share it … Continue reading C# Wrapper for CNTK Evaluation
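The general shape of such an evaluation wrapper, load the model once, then expose a single "features in, answer out" call, can be sketched as follows. This is not the author's C#/CNTK code; it is a Python illustration with a hypothetical stand-in linear "model" in place of a real trained network.

```python
import numpy as np

class ModelEvaluator:
    """Hypothetical evaluation wrapper: construct once with a trained model,
    then call evaluate(features) to get outputs, with all the setup hidden.
    A tiny linear model (weights, bias) stands in for a real network here."""

    def __init__(self, weights, bias):
        self._w = np.asarray(weights, dtype=float)
        self._b = np.asarray(bias, dtype=float)

    def evaluate(self, features):
        """Hand some data to the model and get back an answer."""
        x = np.asarray(features, dtype=float)
        return self._w @ x + self._b

evaluator = ModelEvaluator(weights=[[1.0, 0.0], [0.0, 2.0]], bias=[0.5, -0.5])
print(evaluator.evaluate([3.0, 4.0]))  # [3.5 7.5]
```

The point of the wrapper is the narrow surface: callers never see graph construction, device selection, or input binding, only `evaluate`.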