On Wikipedia it is reported that matrix multiplication and matrix inversion have similar time complexity (via optimized CW-like algorithms). PyTorch also applies some optimizations to matrix operations. So in practice, do matrix multiplication and inversion take a similar amount of time, or is multiplication much cheaper? And is it different between CPU and GPU?
Did you find the answer to this? I’m interested as well.
If the matrices are square with the same dimensions, then yes, the asymptotic time complexities are similar. In practice, though, inversion is usually implemented as an LU factorization followed by a solve, so it tends to carry a larger constant factor than a single multiplication.
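You can check the practical gap yourself with a quick wall-clock comparison. This is a minimal sketch, assuming `torch` is installed; the size `n = 2000` is an arbitrary choice, and the exact ratio you see will depend on your BLAS/LAPACK backend and thread count.

```python
import time
import torch

n = 2000
a = torch.randn(n, n, dtype=torch.float64)

# Warm-up so one-time allocation/threading costs don't skew the timings
torch.matmul(a, a)
torch.linalg.inv(a)

t0 = time.perf_counter()
b = torch.matmul(a, a)          # O(n^3) multiplication
t_mm = time.perf_counter() - t0

t0 = time.perf_counter()
c = torch.linalg.inv(a)         # LU factorization + solve, also O(n^3)
t_inv = time.perf_counter() - t0

print(f"matmul: {t_mm:.4f}s  inverse: {t_inv:.4f}s")
```

On most CPU backends the inverse comes out a few times slower than a single multiplication of the same size, even though both are cubic.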
As for whether GPU or CPU is faster: in my personal experience, if you are using very small networks and a high-core-count processor that can fit your data, you can go with the CPU; in that case the host-to-device and device-to-host transfer overhead can outweigh the computation itself. But when you have a larger model, such as 2D/3D CNNs, you should choose the GPU for parallel tensor computations.
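The CPU-vs-GPU comparison can be sketched the same way. `bench` below is a hypothetical helper, not a PyTorch API; note that CUDA kernels launch asynchronously, so you must call `torch.cuda.synchronize()` before reading the clock, or the GPU timings will be meaningless.

```python
import time
import torch

def bench(device, n=1024, reps=10):
    """Average matmul and inverse times on one device (hypothetical helper)."""
    a = torch.randn(n, n, device=device)
    torch.matmul(a, a); torch.linalg.inv(a)   # warm-up
    if device == "cuda":
        torch.cuda.synchronize()               # wait for async GPU kernels

    t0 = time.perf_counter()
    for _ in range(reps):
        torch.matmul(a, a)
    if device == "cuda":
        torch.cuda.synchronize()
    t_mm = (time.perf_counter() - t0) / reps

    t0 = time.perf_counter()
    for _ in range(reps):
        torch.linalg.inv(a)
    if device == "cuda":
        torch.cuda.synchronize()
    t_inv = (time.perf_counter() - t0) / reps

    print(f"{device}: matmul {t_mm:.5f}s  inverse {t_inv:.5f}s")
    return t_mm, t_inv

cpu_mm, cpu_inv = bench("cpu")
if torch.cuda.is_available():                  # skip cleanly on CPU-only boxes
    bench("cuda")
```

Note this only times the kernels themselves; if your data starts on the host, the `.to("cuda")` copy is an additional cost that can dominate for small matrices.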