CUDA out of memory although enough memory is available

I am training a transformer-based model on a GPU and get an out-of-memory error. This is the error message:

CUDA out of memory. Tried to allocate 578.00 MiB (GPU 2; 10.76 GiB total capacity; 2.79 GiB already allocated; 6.55 GiB free; 3.24 GiB reserved in total by PyTorch)

The model runs fine on a different GPU.
I would truly appreciate any help!
Thank you all

You should open a terminal and run nvidia-smi to see the actual usage of your GPU, including memory held by other processes. If there is still free memory, call torch.cuda.empty_cache() in your code before loading the model onto the GPU so that cached blocks are released back to the allocator. Finally, reducing the batch_size is a common way to resolve this.
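A minimal sketch of the retry-with-smaller-batch idea, assuming PyTorch; the helper name pick_batch_size and the placeholder training step are hypothetical, not from the original post:

```python
# Hypothetical sketch: free cached GPU memory and halve the batch size
# until a training step fits. Guarded so it also runs on CPU-only machines.
try:
    import torch
    cuda_ok = torch.cuda.is_available()
except ImportError:
    cuda_ok = False


def pick_batch_size(initial=64, minimum=1):
    """Halve the batch size until one training step fits in GPU memory."""
    batch_size = initial
    while batch_size >= minimum:
        try:
            if cuda_ok:
                # Release cached allocator blocks before retrying.
                torch.cuda.empty_cache()
            # ... run one training step with `batch_size` here ...
            return batch_size
        except RuntimeError as err:
            # PyTorch raises RuntimeError with "out of memory" on CUDA OOM.
            if "out of memory" in str(err):
                batch_size //= 2
            else:
                raise
    raise RuntimeError("even the minimum batch size does not fit on the GPU")
```

Note that empty_cache() only returns memory PyTorch has cached but is not using; it cannot free memory held by other processes, which is why checking nvidia-smi first matters.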