CUDA memory issue

I just bought a PC with an NVIDIA GTX 1660 Super with 6 GB of VRAM for CUDA.
I've never used CUDA before.

When I run my code, I get the error message:

RuntimeError: CUDA out of memory. Tried to allocate 38.00 MiB (GPU 0; 5.80 GiB total capacity; 1.41 GiB already allocated; 14.06 MiB free; **1.54 GiB reserved in total by PyTorch**)

I don’t get it: I’ve got 6 GB of VRAM, but PyTorch has only reserved 1.54 GiB of it.
What am I doing wrong here?

I’ve tried `torch.cuda.empty_cache()`, but it had no effect.
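For reference, here's a minimal sketch of how I'm checking what PyTorch itself reports for GPU 0, using `torch.cuda.memory_allocated` and `torch.cuda.memory_reserved` (the numbers match the ones in the error message):

```python
import torch

# What PyTorch's caching allocator reports for GPU 0:
# "allocated" = memory held by live tensors,
# "reserved"  = allocated + cached blocks kept around for reuse.
print(f"allocated: {torch.cuda.memory_allocated(0) / 1024**2:.1f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved(0) / 1024**2:.1f} MiB")
```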
I run Ubuntu Linux on an AMD Ryzen 3400G. Are the other ~4.5 GB of GPU VRAM reserved for graphics output? I killed the X server, but that made no difference. The Ryzen also has an integrated GPU, but there doesn’t seem to be a driver for it, so for now I’m still outputting the video signal through the GTX 1660 Super.

I’d appreciate some help here; I’m new to CUDA.

Could you check the memory usage via `nvidia-smi`?
Also, what kind of model are you using, and how large is the input?
Does this out-of-memory error occur in the first iteration, or only after a while?

You can run `watch -n 2 nvidia-smi` in a side tab.
If the model is bigger than the available GPU memory, the memory consumption will rise up to a point and then drop back to zero.
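If it helps, you can also query the device's free vs. total memory from inside Python; a minimal sketch, assuming a recent PyTorch version that exposes `torch.cuda.mem_get_info` (unlike PyTorch's own allocator stats, this also counts memory used by other processes):

```python
import torch

# Free vs. total memory on GPU 0 as reported by the CUDA driver.
# torch.cuda.mem_get_info wraps cudaMemGetInfo, so the "free" figure
# reflects memory held by other processes too (X server, background apps).
free, total = torch.cuda.mem_get_info(0)
print(f"free: {free / 1024**3:.2f} GiB of {total / 1024**3:.2f} GiB total")
```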

Thanks for the `nvidia-smi` hint, guys! That did the job.
There were some other apps running in the background, eating up all the GPU VRAM. In hindsight, the numbers in the error message add up: PyTorch had reserved 1.54 GiB, and those background apps held roughly the remaining 4.25 GiB, which is why only 14 MiB were left free.