Getting CUDA OOM error while training a coreference resolution model through spacy-experimental

Hi everyone,
I have been trying to train my own coreference resolution model on Google Colab, but I keep getting this error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 23.32 GiB (GPU 0; 39.59 GiB total capacity; 23.95 GiB already allocated; 12.29 GiB free; 24.00 GiB reserved in total by PyTorch)
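For context, I'm launching training with the standard spaCy CLI, roughly like this (a minimal sketch; the config and .spacy paths below are placeholders for my actual files):

```
python -m spacy train config.cfg \
    --output ./output \
    --gpu-id 0 \
    --paths.train ./corpus/train.spacy \
    --paths.dev ./corpus/dev.spacy
```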

I tried to solve it by setting the environment variable os.environ["PYTORCH_CUDA_ALLOC_CONF"] = 'max_split_size_mb: 256', as recommended by others, but I am still getting the same error.
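For reference, here is roughly how I'm setting it at the top of my notebook (a minimal sketch; my understanding is that the variable has to be set before torch initializes the CUDA allocator, so I set it before any torch/spacy imports):

```python
import os

# Documented format uses no space after the colon; it must be set before
# torch initializes the CUDA allocator, otherwise it has no effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:256"

import torch
import spacy

spacy.require_gpu()                    # run the pipeline on the Colab GPU
print(torch.cuda.get_device_name(0))   # sanity check that the GPU is visible
```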

Can anyone help me figure out how to fix this?