Well, so much for easy grouping. Unlike before, when my models fell into clusters, there's now a smooth progression from the worst to the best. There is an interesting group of 4 at the bottom, so I'll use a group of 5 at the top to balance it, and then split the remaining … Continue reading Round 30 Update
OK, I haven't been running as many training rounds as I should; in fact, I'd managed only a half dozen between the Great Training Data Fiasco of March 1st and the graphical/multithreaded rewrite I finished at the end of April. At that point, the effectiveness of my model networks still hadn't recovered, and some of … Continue reading Updates on methods, equipment, and… data!
At least, in a program making extensive use of a compute-heavy library. The effort is better spent on the library itself than on the calling program. I wrote in my previous post that I had better optimizations in mind. Well, up until now, I've been running individual games and then collating the results. This week, I've … Continue reading Optimize libraries, not programs
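The excerpt cuts off before the details, but the general idea it opens with can be sketched: instead of the caller invoking the compute-heavy library once per game and collating results itself, the loop moves inside the library so per-call overhead is paid once. A minimal sketch, with all function names hypothetical and a trivial stand-in for the per-game computation:

```python
def play_one_game(seed):
    # Hypothetical stand-in for one compute-heavy game.
    return sum((seed * i) % 7 for i in range(1000))

def run_games_from_caller(seeds):
    # Caller drives the loop: one library call per game, then
    # collates results on its side. Setup/dispatch cost is paid
    # per call, and the library can't batch or reuse anything.
    return [play_one_game(s) for s in seeds]

def run_games_in_library(seeds):
    # The loop lives inside the library: one call runs the whole
    # batch and returns collated results, leaving the library free
    # to amortize setup and optimize across games.
    results = []
    for s in seeds:
        results.append(play_one_game(s))
    return results
```

Both entry points return the same collated results; the difference is where the loop (and thus the opportunity to optimize) lives.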
My laptop and my HTPC are very different machines. My laptop has a fast dual-core processor, while my HTPC has a slower quad-core. My laptop has a mid-range mobile GPU (NVidia 860m), whereas the HTPC has a much better desktop GPU (NVidia 1060). I bring this up because, as I add to the graphical … Continue reading Multi-threading can help, but it’s no unicorn latte
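The excerpt's point — that the same multi-threaded code behaves differently on a dual-core and a quad-core machine — can be sketched with a thread pool sized to whatever the machine offers. This is a hypothetical illustration, not the post's actual code:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def heavy(n):
    # Stand-in CPU-bound task (hypothetical).
    return sum(i * i for i in range(n))

def run_threaded(tasks):
    # Size the pool to the machine: a quad core gets more workers
    # than a dual core, so identical code scales differently.
    workers = os.cpu_count() or 1
    # Caveat (the "no unicorn latte" part): for pure-Python
    # CPU-bound work like heavy(), the GIL serializes the threads,
    # so extra cores alone don't guarantee a speedup.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(heavy, tasks))
```

Threads pay off most for I/O-bound work or for libraries that release the GIL; otherwise core count is an upper bound, not a promise.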