From GPU to CPU: Torch not compiled with CUDA enabled

Hello folks, I have a model that trained beautifully using CUDA, and the predictions look good.
When I try to run it on my local PC, I get the error `AssertionError: Torch not compiled with CUDA enabled`.
I've tried setting device = torch.device('cpu'), but I still get the same error. Any ideas?

Which line of code raises this error? PyTorch does not automatically assume you have a GPU, so I assume you are explicitly trying to execute device code somewhere.
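For reference, the usual way to avoid hard-coding `cuda` is to pick the device at runtime; a minimal sketch (the `Linear` model and shapes are just placeholders):

```python
import torch

# Choose CUDA only when it is actually available; on a CPU-only
# build this falls back to the CPU instead of raising the assertion.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model and input to show the pattern.
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(1, 4, device=device)
out = model(x)  # runs on whichever device was selected
```

With this pattern, the same script runs unchanged on both the training machine and a CPU-only box.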

Thank you for your reply. I managed to solve it: the code was loading from a file that still referenced cuda; I changed it to cpu and everything is working fine now.
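For anyone else who lands here with a GPU-trained checkpoint: passing `map_location` to `torch.load` remaps the stored CUDA tensors onto the CPU. A minimal sketch (the filename and `Linear` model are placeholders for your own checkpoint and architecture):

```python
import torch

# Placeholder model; in practice this would be your trained architecture.
model = torch.nn.Linear(4, 2)
torch.save(model.state_dict(), "model.pt")  # stand-in for the GPU-trained checkpoint

# map_location moves tensors saved on a CUDA device onto the CPU,
# so loading works even on a build without CUDA support.
state = torch.load("model.pt", map_location=torch.device("cpu"))
model.load_state_dict(state)
model.eval()
```

Without `map_location`, loading a checkpoint whose tensors were saved on a GPU raises the same `Torch not compiled with CUDA enabled` assertion on a CPU-only install.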