Loading model to another GPU

I am trying to load a model with trained parameters. I trained the model on ‘cuda:1’ and now I want to load it on ‘cuda:0’. So I tried:

model.load_state_dict(torch.load('model_trained.ckpt'), map_location={'cuda:1': 'cuda:0'})

However, I encounter an error:

Traceback (most recent call last):
  File "main_with_noise.py", line 176, in <module>
    model.load_state_dict(torch.load('model_trained.ckpt'), map_location={'cuda:1': 'cuda:0'})
  File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 574, in _load
    result = unpickler.load()
  File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 537, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 119, in default_restore_location
    result = fn(storage, location)
  File "/home/hetro/v3/lib/python3.5/site-packages/torch/serialization.py", line 99, in _cuda_deserialize
    return storage_type(obj.size())
  File "/home/hetro/v3/lib/python3.5/site-packages/torch/cuda/__init__.py", line 599, in _lazy_new
    return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA error: out of memory

It's trying to use ‘cuda:1’, which at the moment is already full. I don't understand why it's trying to use ‘cuda:1’ when I have specified mapping it to ‘cuda:0’ before loading.

I think you can simply set the environment variable CUDA_VISIBLE_DEVICES=0 as a prefix to your python command, so that your process only sees GPU 0.
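For example (a sketch; main_with_noise.py is the script from your traceback, and the variable must be set before torch initializes CUDA):

```shell
# Make only physical GPU 0 visible to the process; inside the process
# it is enumerated as 'cuda:0', so nothing can land on the full GPU 1.
CUDA_VISIBLE_DEVICES=0 python main_with_noise.py
```

Note that the remapping applies to device *enumeration*: with this prefix, ‘cuda:0’ inside the process refers to physical GPU 0 and GPU 1 is simply invisible.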

I found the cause of the error. I was passing map_location as an argument to model.load_state_dict(), whereas the correct usage is to pass it as an argument to torch.load(), like this:

model.load_state_dict(torch.load('model_trained.ckpt', map_location={'cuda:1': 'cuda:0'}))

On a side note, here

map_location={'cuda:1': 'cuda:0'}

means to take a state_dict whose tensors were saved on cuda:1 and remap them to cuda:0, right?
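That matches the documented behavior: when map_location is a dict, each tensor's saved storage location is looked up as a key, and matches are replaced by the corresponding value; everything else loads where it was saved. A minimal pure-Python sketch of that lookup (illustrative only, not torch internals):

```python
def remap_location(saved_location, map_location):
    """Mimic how torch.load applies a dict map_location: if the tensor's
    saved device string appears as a key, substitute the mapped device;
    otherwise the saved location is kept unchanged."""
    return map_location.get(saved_location, saved_location)

# Tensors saved on cuda:1 are redirected to cuda:0:
print(remap_location("cuda:1", {"cuda:1": "cuda:0"}))  # cuda:0
# Locations without an entry pass through untouched:
print(remap_location("cpu", {"cuda:1": "cuda:0"}))     # cpu
```

So a checkpoint containing a mix of cpu and cuda:1 tensors would load its cuda:1 tensors onto cuda:0 and leave the cpu tensors on the CPU.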