Getting "RuntimeError: CUDA error: out of memory" when memory is free

It’s hard to tell what the root cause might be, since it was working before and I don’t know what has changed in your setup.
I don’t believe there is a PyTorch-side fix, as the error points towards a setup issue.
You could try other environments (e.g. create a new conda env, reinstall PyTorch, and check if it works there), or rebuild and run the CUDA samples to verify the driver/toolkit installation, etc. A quick sanity check is sketched below.
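In the fresh environment, a minimal script like this (assuming a single visible GPU; the tensor size is arbitrary) should print the build info and allocate a small tensor without raising the OOM error if the install is healthy:

```python
import torch

# Report the PyTorch build and the CUDA version it was compiled against.
print(torch.__version__)
print(torch.version.cuda)
print(torch.cuda.is_available())  # should be True if the driver/runtime match

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    # Allocate a small (~4 MB) tensor on the GPU as a smoke test.
    x = torch.randn(1024, 1024, device="cuda")
    print(torch.cuda.memory_allocated() / 1024**2, "MiB allocated")
```

If even this tiny allocation fails while `nvidia-smi` shows free memory, that points to a driver or CUDA setup problem rather than anything in your training code.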
