CUDA error: out of memory

Every time I run my code, it runs smoothly the first time, but when I run it a second or third time it gives me this error. I don't think it's a memory problem, otherwise it wouldn't have run the first time either. Maybe it's a cache problem or something like that, but I'm not sure. Anyway, how can I clear my cache after every run?

Normally `torch.cuda.empty_cache()` clears the cache, as stated in the documentation. The documentation also notes that it doesn't increase the amount of GPU memory available to PyTorch; it only releases cached blocks back to the driver so other processes can use them. I've been dealing with the same problem on Colab; it can be related to Python's garbage collector holding on to tensors that are no longer needed. Removing the variables that reference them (e.g. `del model`) before clearing the cache, or doing a factory reset of the runtime, usually resolves it.
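A minimal sketch of that cleanup sequence between runs (the helper name `release_cuda_memory` and the variable names in the usage comment are illustrative, not part of any API):

```python
import gc

import torch

def release_cuda_memory():
    """Reclaim unreferenced tensors, then drop PyTorch's cached CUDA
    allocations. Safe to call even when no GPU is present."""
    gc.collect()                      # collect objects whose references were deleted
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # return cached, unoccupied blocks to the driver

# Typical usage after a run (names are illustrative):
# del model, optimizer
# release_cuda_memory()
```

Note that `empty_cache()` only helps after the Python references are gone; a tensor still reachable from a live variable (or stored in a notebook's output history) cannot be freed, which is why `del` and `gc.collect()` come first.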