Upgrading to PyTorch 2.0.1 gives torch.cuda.OutOfMemoryError

I have an NVIDIA GTX 1080. I was using PyTorch 1.8 and my application was working fine. After upgrading to PyTorch 2.0.1 and CUDA 11.7, my application fails with the following error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 7.93 GiB total capacity; 6.98 GiB already allocated; 32.38 MiB free; 7.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Setting the environment variable PYTORCH_CUDA_ALLOC_CONF to garbage_collection_threshold:0.6,max_split_size_mb:256 gives the following error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 7.93 GiB total capacity; 6.79 GiB already allocated; 37.06 MiB free; 6.83 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I have tried various values of max_split_size_mb, but I still get the error. Is it possible to fix this, or should I go back to PyTorch 1.8?
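For reference, this is roughly how I am setting the variable (a minimal sketch; the requirement that it be set before torch initializes CUDA is my assumption about how the caching allocator reads it):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be in the environment before the CUDA
# allocator initializes, so this goes at the very top of the script,
# before `import torch`.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.6,max_split_size_mb:256"
)

# import torch  # imported only after the variable is in place
```

I have also tried launching with the variable exported in the shell instead, with the same result.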