So, I fixed the determinism problem. When ranking different networks in the tournament, the best move will always be chosen. When generating training data, moves are randomized, with the best move being the most likely. I also replaced the two random players with a new, single randomized player. This player will first look for winning … Continue reading Quick update
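As a rough sketch of those two selection modes (the function and parameter names are mine, and the softmax-style weighting is my assumption; the post only says the best move is the most likely):

    import math
    import random

    def pick_move(moves, scores, greedy, temperature=1.0):
        # greedy=True  -> tournament play: always take the best-scoring move.
        # greedy=False -> training-data generation: randomized, with the
        #                 best-scoring move being the most likely choice.
        if greedy:
            return max(zip(moves, scores), key=lambda ms: ms[1])[0]
        weights = [math.exp(s / temperature) for s in scores]
        return random.choices(moves, weights=weights, k=1)[0]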
As I watched my networks being trained, I noticed how quickly even the simplest ones learned to match the training data exactly. With a data set of 100,000 items, this ought not to happen. Then it struck me: The networks are deterministic. For a given set of inputs, any given network will generate one exact set … Continue reading The problem with predestination
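A toy illustration of that determinism problem (everything here is hypothetical, not the project's actual code): if the move chosen is always the single deterministic output for a position, every generated game from the same start is identical, so the data set contains far fewer distinct examples than its item count suggests.

    def deterministic_policy(state):
        # Stand-in for a trained network: same input, same output, every time.
        return (state * 3 + 1) % 7

    def generate_game(policy, start=0, length=5):
        state, moves = start, []
        for _ in range(length):
            move = policy(state)
            moves.append(move)
            state = (state + move) % 7
        return moves

    # Ten "games" from the same start collapse to a single distinct record:
    print({tuple(generate_game(deterministic_policy)) for _ in range(10)})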
Round 1 is complete! I generated a small test set of 1,000 items of training data, created some brand new Neural Networks to train, and then ran the first round-robin tourney. Here are the results: (Edit: the results look much better in Notepad or Notepad++ than they do on this webpage) Loading models... Loading model … Continue reading Round 1 Results
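For anyone unfamiliar with the format, a round-robin tourney just plays every model against every other model and tallies the results. A minimal sketch under my own assumptions (the player objects and the play_match function are stand-ins, not the post's code):

    from itertools import combinations

    def round_robin(players, play_match):
        # play_match(a, b) is a hypothetical stand-in for one full game
        # between two networks; it returns the winner, or None for a draw.
        scores = {p: 0 for p in players}
        for a, b in combinations(players, 2):
            winner = play_match(a, b)
            if winner is not None:
                scores[winner] += 1
        return scores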
So, the holidays got busy, and then I got sick. Overall, I haven't done much on this, but that changes now. Today, I'm trying a test run with several rounds of a mid-sized Neural Network (3 layers, 500 nodes each) against a completely random opponent. This was mainly to lay out the framework in my … Continue reading Test run
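For concreteness, here is roughly what a network of that shape could look like; the NumPy implementation, the tanh activation, and the weight initialization are my assumptions, since the post doesn't say what framework is used.

    import numpy as np

    def init_network(n_inputs, n_outputs, hidden=500, layers=3, rng=None):
        # Three hidden layers of 500 nodes each, matching the "mid-sized"
        # network described above; the weight scale is an assumption.
        rng = rng or np.random.default_rng()
        sizes = [n_inputs] + [hidden] * layers + [n_outputs]
        return [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]

    def forward(weights, x):
        for w in weights[:-1]:
            x = np.tanh(x @ w)        # hidden layers
        return x @ weights[-1]        # raw scores for each candidate move

    def random_move(n_moves, rng=None):
        # The completely random opponent: ignores the board, picks any move.
        return (rng or np.random.default_rng()).integers(n_moves)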