Is it possible to load a pre-trained model on CPU which was trained on GPU?

I have a pre-trained NN model which was trained on a GPU, and now I need to demonstrate some results on a CPU (because of resource limitations). I tried to load the model states on the CPU but I get an unknown CUDA error. Everything works perfectly if I use the GPU.

Not sure if this information is important: I used data parallelism while training on the GPU.

Let’s say your model’s name is net. To train on the GPU you must have written net.cuda().
After training, transfer the model back to the CPU with net.cpu().
Save the model with torch.save(net.state_dict(), path).
Load it with torch.load(path).
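
A minimal sketch of that workflow, assuming a reasonably recent PyTorch; nn.Linear and the checkpoint path are placeholders standing in for the real model:

```python
import torch
import torch.nn as nn

# Placeholder model; any nn.Module behaves the same way.
net = nn.Linear(10, 2)

# ... training on the GPU would happen here via net.cuda() ...

# Move parameters back to the CPU so the checkpoint can be
# opened on machines without CUDA.
net.cpu()
torch.save(net.state_dict(), 'checkpoint.pt')

# Later, on a CPU-only machine:
state_dict = torch.load('checkpoint.pt', map_location='cpu')
net.load_state_dict(state_dict)
```

Recent PyTorch versions accept the map_location='cpu' string; older builds need the lambda storage, loc: storage form shown later in this thread.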

Problem is, I have already trained and saved the model. Is there any way I can load the states of a model that was already trained on the GPU?

Yes, there are threads already covering this, with solutions.


I have tried this:

state_dict = torch.load(f, map_location=lambda storage, loc: storage)

Still getting error.

Function to load model states on the CPU:

import os
from collections import OrderedDict

import torch

def load_model_states_without_dataparallel(model, filename):
    """Load previously saved model states, stripping the DataParallel prefix."""
    filepath = os.path.join(args.save_path, filename)
    with open(filepath, 'rb') as f:
        state_dict = torch.load(f, map_location=lambda storage, loc: storage)
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():
        name = k[7:]  # remove the `module.` prefix added by DataParallel
        new_state_dict[name] = v
    model.load_state_dict(new_state_dict)

I am getting the following error.

RuntimeError                              Traceback (most recent call last)
RuntimeError: cuda runtime error (30) : unknown error at /py/conda-bld/pytorch_1490983232023/work/torch/lib/THC/THCGeneral.c:66

During handling of the above exception, another exception occurred:

SystemError                               Traceback (most recent call last)
<ipython-input-12-facf4f3e448a> in <module>()
----> 1 helper.load_model_states_without_dataparallel(model, '')
      2 model.eval()
      3 print('Model, embedding index and dictionary loaded.')

/net/if5/wua4nw/wasi/academic/research_with_prof_wang/projects/seq2seq_cover_query_generation/source_code/ in load_model_states_without_dataparallel(model, filename)
     71     filepath = os.path.join(args.save_path, filename)
     72     with open(filepath, 'rb') as f:
---> 73         state_dict = torch.load(f)
     74     new_state_dict = OrderedDict()
     75     for k, v in state_dict.items():

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/ in load(f, map_location, pickle_module)
    227         f = open(f, 'rb')
    228     try:
--> 229         return _load(f, map_location, pickle_module)
    230     finally:
    231         if new_fd:

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/ in _load(f, map_location, pickle_module)
    375     unpickler = pickle_module.Unpickler(f)
    376     unpickler.persistent_load = persistent_load
--> 377     result = unpickler.load()
    379     deserialized_storage_keys = pickle_module.load(f)

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/ in persistent_load(saved_id)
    346             if root_key not in deserialized_objects:
    347                 deserialized_objects[root_key] = restore_location(
--> 348                     data_type(size), location)
    349             storage = deserialized_objects[root_key]
    350             if view_metadata is not None:

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/ in default_restore_location(storage, location)
     83 def default_restore_location(storage, location):
     84     for _, _, fn in _package_registry:
---> 85         result = fn(storage, location)
     86         if result is not None:
     87             return result

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/ in _cuda_deserialize(obj, location)
     65     if location.startswith('cuda'):
     66         device_id = max(int(location[5:]), 0)
---> 67         return obj.cuda(device_id)

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/ in _cuda(self, device, async)
     55         if device is None:
     56             device = -1
---> 57     with torch.cuda.device(device):
     58         if self.is_sparse:
     59             new_type = getattr(torch.cuda.sparse, self.__class__.__name__)

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/cuda/ in __enter__(self)
    127         if self.idx is -1:
    128             return
--> 129         _lazy_init()
    130         self.prev_idx = torch._C._cuda_getDevice()
    131         if self.prev_idx != self.idx:

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/cuda/ in _lazy_init()
     88             "Cannot re-initialize CUDA in forked subprocess. " + msg)
     89     _check_driver()
---> 90     assert torch._C._cuda_init()
     91     assert torch._C._cuda_sparse_init()
     92     _cudart = _load_cudart()

SystemError: <built-in function _cuda_init> returned a result with an error set
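
Note that the traceback's line 73 shows torch.load(f) being called without map_location, so the version with the remap lambda never actually ran; reloading the helper module (or calling torch.load with the remap directly) avoids the CUDA initialization entirely. For reference, the DataParallel prefix stripping can be sketched with plain dicts and no GPU involved; the key names below are dummies:

```python
from collections import OrderedDict

def strip_data_parallel_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to keys."""
    new_state_dict = OrderedDict()
    for key, value in state_dict.items():
        # Only strip the prefix when it is actually present.
        name = key[len('module.'):] if key.startswith('module.') else key
        new_state_dict[name] = value
    return new_state_dict

# Example with dummy keys:
sd = OrderedDict([('module.encoder.weight', 1), ('module.decoder.bias', 2)])
print(list(strip_data_parallel_prefix(sd)))
# ['encoder.weight', 'decoder.bias']
```

Checking startswith('module.') is safer than an unconditional key[7:], which would silently corrupt keys in a checkpoint saved without DataParallel.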