I just bought a PC with an NVIDIA GTX 1660 Super (6 GB of VRAM) to use for CUDA. I've never used CUDA before. When I run my code, I get this error:
```
RuntimeError: CUDA out of memory. Tried to allocate 38.00 MiB (GPU 0; 5.80 GiB total capacity; 1.41 GiB already allocated; 14.06 MiB free; 1.54 GiB reserved in total by PyTorch)
```
I don't understand this: I have 6 GB of VRAM, yet PyTorch has reserved only 1.54 GB of it. What am I doing wrong?
I've tried `torch.cuda.empty_cache()`, but it had no effect.
I'm running Ubuntu Linux on an AMD Ryzen 3400G. Are the other ~4.5 GB of VRAM reserved for graphics output? I killed the X server, but that made no difference. The Ryzen also has an integrated GPU, but there doesn't seem to be a driver for it, so for now I'm still outputting the video signal through the GTX 1660 Super.
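In case it helps, here is a minimal sketch of how I could inspect the memory from PyTorch's side (this assumes a CUDA build of PyTorch; `torch.cuda.mem_get_info` needs a reasonably recent PyTorch version):

```python
import torch

def gpu_memory_report(device: int = 0) -> dict:
    """Summarize GPU memory in MiB: the driver-level free/total view plus
    what the PyTorch caching allocator has allocated and reserved."""
    if not torch.cuda.is_available():
        return {}  # no CUDA device visible to PyTorch
    # Driver-level view: counts memory used by *all* processes (e.g. Xorg),
    # not just this PyTorch process.
    free, total = torch.cuda.mem_get_info(device)
    mib = 1024 ** 2
    return {
        "total_mib": total // mib,
        "free_mib": free // mib,
        "allocated_by_pytorch_mib": torch.cuda.memory_allocated(device) // mib,
        "reserved_by_pytorch_mib": torch.cuda.memory_reserved(device) // mib,
    }

print(gpu_memory_report())
```

If I read the numbers right, the gap between `total_mib - free_mib` and `reserved_by_pytorch_mib` should show how much memory other processes are holding.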
I'd appreciate some help here; I'm new to CUDA.