A bug when I do inference on a saved model

Hello guys, I’m having a small issue with a saved model. I trained the model on Colab (GPU), then downloaded the model (.pth format) and tried to use it locally, and this error occurred:

Any help is much appreciated.

Saving the “entire” model via torch.save(model) and loading it afterwards can easily break, since you would have to restore the same file structure and make sure the code is also unchanged.
Based on the error, it seems your local setup diverges from this assumption, which could be caused by manual changes to (some) files or by different library versions.
The current implementation of AnchorGenerator doesn’t provide a _cache attribute, so I’m unsure where this is coming from (maybe an older release did?).

Check this tutorial, which explains how to save and load the state_dict instead.
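In case it helps, here is a minimal sketch of that workflow, using a toy `nn.Sequential` as a stand-in for your actual model (the architecture and file path are placeholders):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Toy model standing in for the network you trained on Colab
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Recommended: save only the state_dict, not the whole pickled module
path = os.path.join(tempfile.mkdtemp(), "model.pth")
torch.save(model.state_dict(), path)

# On the local machine: rebuild the same architecture, then load the weights.
# map_location="cpu" lets a GPU-trained checkpoint load on a CPU-only box.
model2 = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model2.load_state_dict(torch.load(path, map_location="cpu"))
model2.eval()  # switch to inference mode before running predictions
```

Since only the tensors are serialized, the local code can change freely as long as the module definition used to rebuild the model matches the saved keys and shapes.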