I am trying to load a model with trained parameters. I trained the model on 'cuda:1' and now I want to load it on 'cuda:0', so I tried:
model.load_state_dict(torch.load('model_trained.ckpt'), map_location={'cuda:1': 'cuda:0'})
However, I encounter an error:
Traceback (most recent call last):
File "main_with_noise.py", line 176, in <module>
model.load_state_dict(torch.load('model_trained.ckpt'), map_location={'cuda:1': 'cuda:0'})
File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 387, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 574, in _load
result = unpickler.load()
File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 537, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 119, in default_restore_location
result = fn(storage, location)
File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 99, in _cuda_deserialize
return storage_type(obj.size())
File "/home/hetro/v3/lib/python3.5/site-packages/torch/cuda/__init__.py", line 599, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA error: out of memory
It seems to be trying to use 'cuda:1', which at the moment is already full. I don't understand why it is trying to use 'cuda:1' when I have specified that it should map to 'cuda:0' and then load.
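For context, this is a minimal, self-contained sketch of the save-and-remap flow I am trying to achieve (the tiny `nn.Linear` stands in for my actual model, and the checkpoint here is freshly saved rather than trained):

```python
import torch
import torch.nn as nn

# Stand-in for the real trained model (hypothetical; mine is larger).
model = nn.Linear(10, 2)

# Save a state dict to disk, as after training.
torch.save(model.state_dict(), 'model_trained.ckpt')

# Load the checkpoint, asking torch.load to remap any storages
# saved on 'cuda:1' onto 'cuda:0' during deserialization.
state_dict = torch.load('model_trained.ckpt',
                        map_location={'cuda:1': 'cuda:0'})

# Copy the loaded parameters into the model.
model.load_state_dict(state_dict)
```

My understanding from the docs is that `map_location` is an argument of `torch.load` itself, which is why I expected the dict form above to redirect everything to 'cuda:0'.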