Quick update

So, I fixed the determinism problem. When ranking different networks in the tournament, each network always plays its best move. When generating training data, move selection is randomized, with the best move being the most likely choice. I also replaced the two random players with a new, single randomized player. This player will first look for winning …
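The post doesn't show the selection code, but a minimal sketch of the idea in Python might look like the following, assuming each legal move comes with a score from the network; the softmax weighting and the temperature knob are my own illustration, not necessarily what the project uses:

    import math
    import random

    def pick_training_move(moves, scores, temperature=1.0):
        # Training-data generation: choose randomly, weighted so that
        # higher-scored moves are more likely and the best move is most likely.
        exps = [math.exp(s / temperature) for s in scores]
        total = sum(exps)
        return random.choices(moves, weights=[e / total for e in exps], k=1)[0]

    def pick_tournament_move(moves, scores):
        # Tournament ranking: always play the highest-scored move.
        best = max(range(len(moves)), key=lambda i: scores[i])
        return moves[best]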

The problem with predestination

As I watched my networks being trained, I noticed how quickly even the simplest ones learned to match the training data exactly. With a data set of 100,000 items, this ought not to happen. Then it struck me: The networks are deterministic. For a given set of inputs, any given network will generate one exact set …
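To make the point concrete, here is a toy illustration (NumPy, a made-up two-layer network with a 3x3 board encoding, none of it taken from the project): a deterministic forward pass returns the identical output every time it sees the same position, so the same opponents will replay the same games.

    import numpy as np

    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((9, 16))   # toy fixed weights
    w2 = rng.standard_normal((16, 9))

    def evaluate(board):
        # Purely deterministic: identical inputs always give identical outputs.
        return np.tanh(board @ w1) @ w2

    board = np.zeros(9)
    print(np.array_equal(evaluate(board), evaluate(board)))  # always True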

Test run

So, the holidays got busy, and then I got sick. Overall, I hadn't done much on this, but that changes now. Today, I'm trying a test run, with several rounds of a mid-sized neural network (3 layers, 500 nodes each) played against a completely random opponent. This was mainly to lay out the framework in my …
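For reference, a sketch of roughly what such a network could look like in PyTorch; the input and output sizes (9, as for a 3x3 board) and the choice of ReLU are assumptions on my part, only the three hidden layers of 500 nodes come from the post:

    import random
    import torch.nn as nn

    # Three hidden layers of 500 nodes each, as described above.
    model = nn.Sequential(
        nn.Linear(9, 500), nn.ReLU(),
        nn.Linear(500, 500), nn.ReLU(),
        nn.Linear(500, 500), nn.ReLU(),
        nn.Linear(500, 9),
    )

    def random_opponent_move(legal_moves):
        # The baseline opponent: picks uniformly among the legal moves.
        return random.choice(legal_moves)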