Load GPU trained model to CPU

I trained a model on Google Colab with PyTorch 1.12. When I tried to load it on a CPU-only machine, I got this error:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

I used this:

torch.load('model.pth', map_location=torch.device('cpu'))
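For reference, a minimal round-trip sketch of that call (using an in-memory io.BytesIO buffer in place of the real model.pth, and a small nn.Linear as a stand-in model) shows that map_location does put every loaded tensor on the CPU:

```python
import io

import torch
import torch.nn as nn

# Stand-in for the real checkpoint: save a small model's state_dict to an
# in-memory buffer (on Colab this would be torch.save(..., 'model.pth')).
buffer = io.BytesIO()
torch.save(nn.Linear(4, 2).state_dict(), buffer)
buffer.seek(0)

# On the CPU-only machine: remap every storage to the CPU while loading.
state_dict = torch.load(buffer, map_location=torch.device('cpu'))

# Every loaded tensor now lives on the CPU.
assert all(t.device.type == 'cpu' for t in state_dict.values())
```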

I also tried a custom pickle.Unpickler:

import io
import pickle

import torch

class CPU_Unpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Redirect torch storage bytes to the CPU while unpickling.
        if module == 'torch.storage' and name == '_load_from_bytes':
            return lambda b: torch.load(io.BytesIO(b), map_location='cpu')
        return super().find_class(module, name)

# f is the checkpoint file opened in binary mode
model = CPU_Unpickler(f).load()

Now I get a new error:

    model = CPU_Unpickler(f).load()
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.

Thank you.

The error can also be raised if you try to torch.load an unsupported file type, such as a PNG image:

torch.load("./image.png")
# UnpicklingError: A load persistent id instruction was encountered,
# but no persistent_load function was specified.
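More generally, torch.save records each tensor's storage as a pickle "persistent ID", and only torch.load installs the matching persistent_load hook; a bare pickle.Unpickler (including a subclass like CPU_Unpickler above, which only overrides find_class) hits such a record and fails. A stdlib-only sketch (with a hypothetical RefPickler) reproduces the exact error:

```python
import io
import pickle

class RefPickler(pickle.Pickler):
    # Hypothetical pickler: emit a persistent-ID record for bytes objects,
    # the same mechanism torch.save uses for tensor storages.
    def persistent_id(self, obj):
        return 'ref-0' if isinstance(obj, bytes) else None

buf = io.BytesIO()
RefPickler(buf).dump({'weights': b'\x00\x01'})
buf.seek(0)

err = None
try:
    # No persistent_load hook is defined, so unpickling fails.
    pickle.Unpickler(buf).load()
except pickle.UnpicklingError as e:
    err = e

print(err)  # A load persistent id instruction was encountered, ...
```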

The problem was solved: I did the mapping inside model.load_state_dict. My first attempt failed because the closing quote of the f-string was misplaced:

model.load_state_dict(torch.load(f"{path}), map_location=torch.device(device=device))

It works with:

model.load_state_dict(torch.load(f"{path}", map_location=torch.device(device=device)))
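Putting it together, a minimal end-to-end sketch of this fix (with nn.Linear as a hypothetical stand-in for the trained architecture, and an in-memory buffer in place of the checkpoint file at path):

```python
import io

import torch
import torch.nn as nn

device = torch.device('cpu')

# Hypothetical stand-in for the trained architecture; substitute your own.
model = nn.Linear(4, 2)

# Simulate the Colab-side checkpoint (saving the state_dict, not the model).
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)

# CPU-side: map the storages while loading, then restore the weights.
model.load_state_dict(torch.load(buf, map_location=device))
```

Saving and reloading the state_dict (rather than the pickled model object) also avoids the pickle.Unpickler issues from the question entirely.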