I am running PyTorch 1.2.0 in a Conda environment on a remote Linux machine, but I built PyTorch from source against the CUDA toolkit 9.1. I checked whether the GPU is available with torch.cuda.is_available(), and it is.
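For reference, this is roughly how I verify the setup (the device index 0 is just the default):

```python
import torch

print(torch.__version__)              # 1.2.0
print(torch.version.cuda)             # CUDA toolkit the build was compiled against
print(torch.cuda.is_available())      # True on this machine
print(torch.cuda.get_device_name(0))  # name of the GPU
```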
Using the Conda pre-built PyTorch was not an option for me, as the GPU drivers on the remote machine are too old for CUDA toolkit 9.2 or 10, the versions for which PyTorch binaries are provided.
I have a BxNxM tensor t on the GPU, with B = 100000. When I call torch.svd(t), the GPU sits idle while the CPU runs at 100% and more. At the end of the computation, the resulting decomposition is on the GPU. Moreover, trying to run it on the GPU this way is far slower than on the CPU, basically unusable.
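A minimal sketch of what I am doing (the per-matrix N and M sizes here are placeholders, only B = 100000 matches my real data):

```python
import time
import torch

device = torch.device('cuda')
t = torch.randn(100000, 9, 6, device=device)  # placeholder N x M per matrix

torch.cuda.synchronize()
start = time.time()
u, s, v = torch.svd(t)   # GPU stays idle here, CPU spins at 100%+
torch.cuda.synchronize()
print('batched SVD took {:.1f}s'.format(time.time() - start))

# The results nevertheless come back as CUDA tensors:
print(u.device, s.device, v.device)
```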
The weird thing is that, before calling torch.svd(t), I do other computations on the GPU without any issue. It looks as if the SVD computation is moved to the CPU and the results are then moved back to the GPU.
My final goal is to compute the nuclear norm of each NxM matrix in my BxNxM tensor, so I also tried torch.norm(t, p='nuc', dim=(1, 2)), but the same scenario described above takes place.
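In other words (again with placeholder sizes), both of the following show the same CPU-bound behaviour; they should compute the same per-matrix nuclear norm, i.e. the sum of the singular values:

```python
import torch

t = torch.randn(100000, 9, 6, device='cuda')  # placeholder sizes

# Direct call, what I actually use:
nuc = torch.norm(t, p='nuc', dim=(1, 2))

# Equivalent via the singular values (nuclear norm = sum of singular values):
s = torch.svd(t, compute_uv=False)[1]
nuc_from_svd = s.sum(dim=-1)

print(torch.allclose(nuc, nuc_from_svd, atol=1e-4))
```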
Any guesses as to what is going on? Thanks.