Not able to send torch.Tensor to GPU

In the Python interpreter, I evaluated the following code:

import torch
torch.cuda.is_available()
with torch.cuda.device(0):
  b = torch.randn(1000, dtype=torch.float)
  b.cuda()  # returns a copy of b in GPU memory; this allocation raises the error below

It gives the following output:

True
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
RuntimeError: CUDA error: out of memory

I don’t really understand why I get this error. I am new to GPUs, and the only explanation I can think of is that the GPU is being used by two other users at the same time, and that this is causing the problem.

Any help is appreciated. Thank you.

Use nvidia-smi to check the memory usage and load on the GPU.
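If you prefer to check from within Python, recent PyTorch versions also expose the free and total device memory through torch.cuda.mem_get_info (a wrapper around cudaMemGetInfo). A minimal sketch, assuming that function is available in your PyTorch version:

import torch

# Free and total memory on device 0, in bytes, as reported by the driver.
free, total = torch.cuda.mem_get_info(0)
print(f"GPU 0: {free / 1024**2:.0f} MiB free of {total / 1024**2:.0f} MiB")

If the free amount is close to zero, an allocation of any size can fail with the out-of-memory error you saw.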


Hello,

thank you for your help. Using watch -n 0.5 nvidia-smi, I was able to see that the GPU's memory is already almost fully used by other processes, which is why copying the tensor to the GPU failed. 🙂

Thanks!
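For anyone else hitting this on a shared machine with several GPUs: one option is to pick the device with the most free memory before allocating. A minimal sketch, again assuming torch.cuda.mem_get_info is available in your PyTorch version:

import torch

# Index of the CUDA device with the most free memory right now.
best = max(range(torch.cuda.device_count()),
           key=lambda i: torch.cuda.mem_get_info(i)[0])
torch.cuda.set_device(best)

b = torch.randn(1000, dtype=torch.float)
b = b.cuda()  # .cuda() returns a copy; reassign to keep the GPU tensor

Note that this only checks the memory at one instant; on a busy shared machine another user can still grab the memory between the check and your allocation.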
