RuntimeError: CUDA out of memory. Tried to allocate 104.00 MiB (GPU 0; 31.72 GiB total capacity; 48.83 MiB already allocated; 30.61 GiB free; 68.00 MiB reserved in total by PyTorch)
This doesn't make sense to me: the message says 30.61 GiB is free, yet a 104 MiB allocation fails. I'm using PyTorch 1.4.0.
Could you check the memory usage via nvidia-smi?
Other processes might be using the device (or some zombie processes might still be occupying GPU memory).
I've checked that with watch, and that's not it.
I think it's related to some sort of CUDA version mismatch. Do you know if it's possible to use PyTorch built with CUDA 9.2 while nvidia-smi shows CUDA 10.0?
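As a side note, nvidia-smi reports the highest CUDA version the installed driver supports, not the runtime your framework actually uses, so the two can legitimately differ. A quick sketch to check what PyTorch itself was built with (standard PyTorch attributes; the printed values below are examples, not guaranteed output):

```python
import torch

# Version of PyTorch itself, e.g. "1.4.0"
print(torch.__version__)

# CUDA runtime PyTorch was compiled against, e.g. "9.2".
# This can be lower than the driver's CUDA version shown by nvidia-smi;
# that combination is fine as long as the driver is new enough.
print(torch.version.cuda)

# Confirm the GPU is actually visible to this PyTorch build
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```

If `torch.cuda.is_available()` returns False while nvidia-smi sees the GPU, that usually points at a driver/runtime incompatibility or an environment problem rather than an actual out-of-memory condition.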
edit: It runs on a 1080 Ti and a V100 but hangs on a P40 (PyTorch 1.3.1, CUDA 9.2).
edit2: Never mind, it seems the grid I'm on is not set up correctly.