When using the Python interpreter, I evaluated the following code:
```python
import torch
torch.cuda.is_available()
with torch.cuda.device(0):
    b = torch.randn(1000, dtype=torch.float)
    b.cuda()
```
It gives me the following output:
```
True
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
RuntimeError: CUDA error: out of memory
I don't really understand why I get this error. I am new to GPUs, and the only explanation I can think of is that the GPU is being used by two other users at the same time, which may be causing the problem.

Any help is appreciated. Thank you.
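To check the shared-GPU hypothesis, one way is to query how much device memory is actually free before allocating. A minimal sketch, assuming PyTorch >= 1.9 (for `torch.cuda.mem_get_info`); it degrades gracefully when no GPU is visible:

```python
import torch

# Report free vs. total memory on device 0 before trying to allocate.
# If other users' processes hold most of the memory, "free" will be small,
# which would explain the out-of-memory error on a tiny allocation.
if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info(0)  # returns (free_bytes, total_bytes)
    print(f"GPU 0: {free / 1e9:.2f} GB free of {total / 1e9:.2f} GB total")
else:
    print("CUDA not available on this machine")
```

Running `nvidia-smi` in a shell gives the same information per process, including which users currently hold memory on the card.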