Does a GPU speed up the optimization process?

I’m trying to optimize very simple functions (quadratic, absolute value, etc.) using PyTorch, and I was wondering whether a GPU would speed up that process. I’m not using a neural net or anything, just a 1D tensor, taking the functions above as the loss and trying to find the minimum.
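For concreteness, something like this minimal sketch (the quadratic, learning rate, and iteration count are just placeholders for my actual setup):

```python
import torch

# Toy example of what I mean: minimize f(x) = (x - 3)^2 for a single
# scalar parameter with plain gradient descent.
x = torch.randn(1, requires_grad=True)
opt = torch.optim.SGD([x], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    loss = ((x - 3.0) ** 2).sum()  # the "loss" is just the function value
    loss.backward()
    opt.step()

print(x.item())  # converges towards 3.0
```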

At a very basic level, GPUs work best when doing “the same thing to a lot of datapoints” (à la SIMD: Single Instruction, Multiple Data), even if the details are a bit more intricate than for typical CPU SIMD units. This applies to convolutions, matrix multiplications (of reasonably large matrices), and other operations that perform the same computation across many elements.
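For intuition, here is a rough timing sketch (not a careful benchmark; the sizes are arbitrary, and the synchronize calls are needed because CUDA kernels launch asynchronously):

```python
import time
import torch

# Compare one large matmul on CPU vs. GPU. Adjust sizes to your hardware.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.perf_counter()
a @ b
t1 = time.perf_counter()
print(f"CPU: {t1 - t0:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()   # make sure the copies have finished
    t0 = time.perf_counter()
    a_gpu @ b_gpu
    torch.cuda.synchronize()   # wait for the kernel to complete
    t1 = time.perf_counter()
    print(f"GPU: {t1 - t0:.3f}s")
```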

Thus, look at your computation: if you only have a few values and do very little computation with them, you won’t see much of a speedup. If, on the other hand, you have so many datapoints that you are effectively dealing with large tensors, it might be worth a try.
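As a sketch of that batched case (the function and batch size are made up for illustration): running your quadratic over a million independent starting points at once is the kind of shape where a GPU can help, whereas a single scalar will not benefit.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Same quadratic as above, but over a large batch of independent
# starting points -- "the same thing to a lot of datapoints".
x = torch.randn(1_000_000, device=device, requires_grad=True)
opt = torch.optim.SGD([x], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    loss = ((x - 3.0) ** 2).sum()  # one independent quadratic per element
    loss.backward()
    opt.step()
```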

Best regards

Thomas
