Using more GPU memory (virtual memory)

I am training a CNN model with CUDA on an NVIDIA RTX 2080 with 8 GB of memory.
And my system RAM is 32 GB.
I always get this error: “CUDA out of memory. Tried to allocate 5.96 GiB (GPU 0; 8.00 GiB total capacity; 596.29 MiB already allocated; 5.70 GiB free; 600.00 MiB reserved in total by PyTorch)”
Is there a way to use some kind of virtual memory, or something like that, so I can still train it on the GPU?

Perhaps decrease your batch size?
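PyTorch does not transparently spill GPU tensors into system RAM, so the usual route is to shrink what lives on the GPU at any moment. If cutting the batch size hurts training, gradient accumulation keeps the effective batch size while putting fewer samples on the GPU at once. A minimal sketch with a hypothetical tiny model and random data (substitute your CNN and data loader):

```python
import torch
from torch import nn

# Hypothetical stand-in model; replace with your CNN.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

accum_steps = 4  # effective batch size = micro-batch size * accum_steps

optimizer.zero_grad()
for step in range(8):
    x = torch.randn(2, 10)            # micro-batch of 2 samples
    y = torch.randint(0, 2, (2,))
    # Scale the loss so accumulated gradients average over the effective batch.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()                   # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()              # one update per effective batch
        optimizer.zero_grad()
```

Memory per step is driven by the micro-batch, so you can trade GPU footprint against the number of backward passes.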

The batch size is already 5.
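If the batch size is already that small, mixed precision is worth trying: autocast runs most layers in half precision, which roughly halves activation memory. A minimal sketch with a hypothetical stand-in CNN and random data; the GradScaler is only active when CUDA is available (on CPU, autocast falls back to bfloat16 so the snippet still runs):

```python
import torch
from torch import nn

# Hypothetical tiny CNN standing in for the real model.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(), nn.LazyLinear(2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
x = torch.randn(5, 3, 32, 32, device=device)   # batch of 5, as in the thread
y = torch.randint(0, 2, (5,), device=device)

optimizer.zero_grad()
# Run the forward pass and loss in reduced precision.
with torch.autocast(device_type=device,
                    dtype=torch.float16 if device == "cuda" else torch.bfloat16):
    loss = loss_fn(model(x), y)
scaler.scale(loss).backward()   # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)
scaler.update()
```

Gradient checkpointing (`torch.utils.checkpoint`) is another option if activations, not parameters, dominate memory.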