RuntimeError: CUDA error: out of memory happens randomly

Hi, I am getting a RuntimeError: CUDA error: out of memory randomly while running my testing code. Sometimes, when I add print(torch.cuda.device_count()), the error goes away, but it comes back later.

Some people suggest decreasing the batch size, but mine is already 1.

Are you using variable input shapes in your script?
If so, do you know the largest expected shape for your use case?
If not, what kind of model are you using, and are you on the latest stable PyTorch release?
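
If the input shapes do vary, you could log the allocated GPU memory per iteration to see whether the random OOM correlates with an unusually large input. Here is a rough sketch of what I mean; the model and the list of shapes are just placeholders for illustration, not your actual setup:

```python
import torch
import torch.nn as nn

# Placeholder model; replace with your own model loaded onto the GPU.
model = nn.Conv2d(3, 16, kernel_size=3, padding=1).cuda()
model.eval()

# Placeholder for the variable input shapes your test loader might produce.
shapes = [(1, 3, 224, 224), (1, 3, 512, 512), (1, 3, 1024, 1024)]

with torch.no_grad():  # no gradients are needed during testing
    for i, shape in enumerate(shapes):
        x = torch.randn(*shape, device="cuda")
        out = model(x)
        # Log current and peak allocated memory to spot shape-dependent spikes.
        print(
            f"iter {i}: input shape={tuple(x.shape)}, "
            f"allocated={torch.cuda.memory_allocated() / 1024**2:.1f} MiB, "
            f"max allocated={torch.cuda.max_memory_allocated() / 1024**2:.1f} MiB"
        )
```

If the peak memory jumps right before the failure, the largest inputs are the likely cause; if it stays flat, the issue is probably elsewhere (e.g. another process on the same GPU).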