Import a torch file with a model trained on CUDA, on a CPU machine


I have trained my Agent and saved it in a torch file, using Colab and its CUDA resources.

I want to import the torch file locally on a CPU-only machine and use it to test the trained agent on a set of test environments. However, I get the following error:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

You don't set map_location as a global variable; you pass it as a keyword argument (after the file path) to torch.load, i.e. torch.load('', map_location=torch.device('cpu')).
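For reference, a minimal self-contained sketch (the filename agent.pt is just a placeholder):

```python
import torch

# Save a small checkpoint so the example is self-contained;
# "agent.pt" is a placeholder filename.
torch.save({"w": torch.ones(3)}, "agent.pt")

# map_location remaps every storage in the file onto the CPU, so the
# load succeeds even when torch.cuda.is_available() is False.
state = torch.load("agent.pt", map_location=torch.device("cpu"))
print(state["w"].device)  # cpu
```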

Best regards



Thank you Tom. That worked!

I have the same problem and tried this solution.
My PyTorch version is 1.12.

I also tried to use pickle as follows:

import io
import pickle
import torch

class CPU_Unpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Redirect torch storage deserialization onto the CPU.
        if module == 'torch.storage' and name == '_load_from_bytes':
            return lambda b: torch.load(io.BytesIO(b), map_location='cpu')
        return super().find_class(module, name)

model = CPU_Unpickler(f).load()

I get another error:

    model = CPU_Unpickler(f).load()
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.

I trained the model on Google Colab and am using it on a CPU machine. Both PyTorch versions are the same.
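A possible explanation, if the file was written with torch.save: such a checkpoint stores tensor data behind pickle "persistent id" records that only torch.load knows how to resolve, so feeding it to a plain pickle.Unpickler (including the CPU_Unpickler recipe above, which only helps for tensors pickled directly with pickle.dump) raises exactly this UnpicklingError. A sketch of loading a torch.save checkpoint onto the CPU instead, using an in-memory buffer as a stand-in for the Colab file:

```python
import io
import torch

# Stand-in for the Colab checkpoint: torch.save writes a pickle stream
# plus raw storage payloads that only torch.load can stitch together.
buf = io.BytesIO()
torch.save({"w": torch.ones(2)}, buf)

buf.seek(0)
# Use torch.load (not pickle.Unpickler) and remap storages to the CPU.
state = torch.load(buf, map_location="cpu")
print(state["w"].device)  # cpu
```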