Whenever I try to perform inference in plain PyTorch style on a model trained with PyTorch Lightning:
import torch

inf_device = torch.device('cpu')
dummy, _ = next(iter(validation_loader))
# take one sample and move it to inf_device (.to() returns a new tensor, so reassign)
dummy = dummy[0].reshape(1, 3, 256, 256).to(inf_device)
# confirm which device dummy is on
print('\n\nDUMMY:', dummy.device)
print(dummy.shape)

Predator_model.to(inf_device)  # .to() on a module moves its parameters in place
Predator_model.eval()
with torch.no_grad():
    out = Predator_model(dummy)
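For comparison, here is a minimal, self-contained sketch of the same CPU-inference pattern, with a tiny `nn.Sequential` standing in for `Predator_model` (the stand-in architecture is my own invention, not the real model):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for Predator_model: accepts (N, 3, 256, 256) input
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)

inf_device = torch.device('cpu')
# Tensor.to() is NOT in place: it returns a new tensor, so the result must be reassigned
dummy = torch.randn(1, 3, 256, 256).to(inf_device)
model.to(inf_device)  # Module.to() IS in place: it moves the parameters themselves
model.eval()
with torch.no_grad():
    out = model(dummy)

print(out.device)  # cpu
print(out.shape)   # torch.Size([1, 2])
```

This runs without a device-mismatch error, which is what makes the failure with the real model confusing.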
I get errors complaining that the input is on CUDA, even though the weights were successfully placed on the CPU. From the print statements above, I can confirm the input tensor's device shows as CPU.
This is a weird bug I've run into many times. Any idea how to solve it?