My laptop is not an elite machine. Nevertheless, it has a decent processor (a dual-core Core i5 running at 2.8 GHz) and a very decent (for mobile) GPU (a GeForce GTX 860M).
My media server has an older CPU, a quad-core AMD Phenom II running at 2.6 GHz, and a brand-new GeForce GTX 1060.
So, how do they compare?
First, it turns out that you MUST close out Steam before running a heavy workload. About two minutes in, I was assaulted by the loudest fan noise I'd ever heard (though, it should be noted, no alarms), and I had to reboot the system to get it back to a stable state.
Now, then, my largest neural network is five layers deep and 5,000 nodes wide. On my laptop, it took around 56 seconds to train one epoch on a dataset of 100,000 items.
I just cut my run time in half.
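As a rough sanity check on those numbers, here is a back-of-envelope throughput estimate. It assumes details the post doesn't state: fully connected layers of width 5,000, and a backward pass costing roughly twice the forward pass (so about 3x the forward FLOPs in total). The 28-second figure is simply half of the laptop's 56 seconds.

```python
# Back-of-envelope sustained-throughput estimate for the epoch times above.
# Assumptions (mine, not from the post): fully connected layers of width
# 5000, and backward pass ~ 2x forward, so total ~ 3x forward FLOPs.

LAYERS = 5
WIDTH = 5000
SAMPLES = 100_000

# One dense layer's forward pass is a WIDTH x WIDTH matrix-vector product:
# roughly 2 * WIDTH^2 floating-point operations per sample.
forward_flops_per_sample = LAYERS * 2 * WIDTH * WIDTH

# Forward + backward ~ 3x forward, over the whole dataset.
flops_per_epoch = 3 * forward_flops_per_sample * SAMPLES

for name, seconds in [("laptop (GTX 860M)", 56), ("server (GTX 1060)", 28)]:
    tflops = flops_per_epoch / seconds / 1e12
    print(f"{name}: ~{tflops:.1f} TFLOPS sustained")
```

Under those assumptions, the laptop works out to roughly 1.3 TFLOPS and the server to roughly 2.7 TFLOPS, which is at least the right order of magnitude for the FP32 throughput of those two cards.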
True, I could have gotten around an extra 40% performance by going for the GTX 1070, but that would have cost literally twice as much (and the GTX 1080 even more on top of that). Also, since the 1060 has more memory than my laptop's GPU, I should be able to load larger models in the future.
I’m quite pleased with it. Oh, and I can play The Witcher 3 with all the settings on maximum. Take that, PS4.