CUDA out of memory error when allocating one number to GPU memory

I am getting an out of memory error for CUDA when running the following code:

import torch

assert torch.cuda.is_available()   # returns a bool, no need to compare to 1

x = torch.randn(1)

x.cuda()    # RuntimeError: CUDA error: out of memory

I'm running on a GeForce GTX 750 with Ubuntu 18.04. How can there not be enough memory for a single float?


Could you check the current memory usage on the device via nvidia-smi and make sure that no other processes are running?
Note that besides the tensor you would also need to allocate the CUDA context on the device, which can take a few hundred MB.

Thank you for the response. Here is the nvidia-smi output. It seems I only have ~300 MB free while idle, which is probably not enough VRAM for the CUDA context you mention.

Yeah, your GPU doesn’t seem to have enough memory, so you could try to run the code on e.g. Colab.
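If you want the script to keep working on machines where the GPU can't even host the CUDA context, a minimal sketch (assuming PyTorch is installed; the helper name `pick_device` is just for illustration) could fall back to the CPU instead of crashing:

```python
import torch

def pick_device():
    """Prefer the GPU, but fall back to CPU if CUDA is unavailable
    or the context cannot be created (e.g. out of memory)."""
    if torch.cuda.is_available():
        try:
            # A tiny allocation forces CUDA context creation, which is
            # where the out-of-memory error surfaces on low-VRAM cards.
            torch.zeros(1, device="cuda")
            return torch.device("cuda")
        except RuntimeError:
            pass  # not enough memory for the context; use CPU instead
    return torch.device("cpu")

device = pick_device()
x = torch.randn(1, device=device)
```

Note that `x.cuda()` returns a new tensor rather than moving `x` in place, so in the original snippet the result would also need to be assigned back, e.g. `x = x.cuda()`.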
