Training data matters… and it’s hard to make

So I tried an experiment: for a few rounds, I generated my training data by taking only the best-performing neural network, playing it against the Smart Random player, and adding to the data set only the games the NN lost.
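In rough Python, the loss-only collection looked something like the sketch below. The names here (`play_game`, `Game`, `moves`, and so on) are illustrative placeholders, not my actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Game:
    winner: object                     # which player won this game
    moves: list = field(default_factory=list)  # positions/moves recorded during play

def collect_loss_only_games(play_game, best_nn, smart_random, num_games):
    """Play num_games against Smart Random, keeping only the games the NN lost."""
    data_set = []
    for _ in range(num_games):
        game = play_game(best_nn, smart_random)   # one full game vs. Smart Random
        if game.winner is smart_random:           # keep only the losses
            data_set.extend(game.moves)
    return data_set
```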

It was, quite bluntly, an unmitigated disaster.

Every NN performed worse after training on this data. Over the course of a few rounds, some of them lost more than 25% of their effectiveness.

I’m going back to playing a set number of games and keeping ALL of them. This should teach my networks new skills (from the games they lost) while also reinforcing what they have already learned (from the games they won).
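The fix is essentially a one-line change to that filter: keep every game, win or lose. Again, names are placeholders rather than the real implementation:

```python
def collect_all_games(play_game, best_nn, smart_random, num_games):
    """Play num_games and keep every game, regardless of outcome."""
    data_set = []
    for _ in range(num_games):
        game = play_game(best_nn, smart_random)
        data_set.extend(game.moves)   # wins and losses alike go into the training set
    return data_set
```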


