ModuleNotFoundError: No module named 'network'

I am trying to load a model, but I am getting this error… I am working on Windows. I searched the web and this forum, but I could not find anything…

Thanks for the help.

gpu = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
gpu

model = torch.load("./faceforensics_models/faceforensics++_models_subset/full/xception/full_raw.p", map_location=gpu)

----------------------------------------------------------------------------------

ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-5-a95c0d9b8209> in <module>
----> 1 model = torch.load("./faceforensics_models/faceforensics++_models_subset/full/xception/full_raw.p", map_location=gpu)

c:\users\oscar\appdata\local\programs\python\python36\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
    527             with _open_zipfile_reader(f) as opened_zipfile:
    528                 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 529         return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
    530 
    531 

c:\users\oscar\appdata\local\programs\python\python36\lib\site-packages\torch\serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
    700     unpickler = pickle_module.Unpickler(f, **pickle_load_args)
    701     unpickler.persistent_load = persistent_load
--> 702     result = unpickler.load()
    703 
    704     deserialized_storage_keys = pickle_module.load(f, **pickle_load_args)

ModuleNotFoundError: No module named 'network'

If you store a model directly via torch.save(model, file_path), you would need to restore the file and folder structure in order to load this model again as explained here.
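To illustrate why: `torch.save(model, path)` pickles the model object, and pickle records each class by its module path rather than by value, so that module must be importable again at load time. A minimal sketch (the `TinyNet` class and file names are made up, not the actual FF++ code):

```python
import os
import tempfile

import torch
import torch.nn as nn


class TinyNet(nn.Module):          # stand-in for the FF++ Xception model
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)


model = TinyNet()
path = os.path.join(tempfile.mkdtemp(), "full_model.p")

# Pickles TinyNet by reference (module + class name), not by value:
torch.save(model, path)

# torch.load will re-import the class from the module recorded in the
# pickle, so the defining file must still be importable at load time --
# otherwise you get ModuleNotFoundError, as in the traceback above.
print(type(model).__module__)      # the module name the pickle records
```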

Based on the error message it seems that some files with the network definition are missing.

Hey amigo @ptrblck, thanks for the response!

Are you talking about this?

"./faceforensics_models/faceforensics++_models_subset/full/xception/full_raw.p"

Does it need to be the same?

Or are you talking about this?
the_model = TheModelClass() # declare the class and then try to load it?

Thanks.

If you are using this approach: model = torch.load(path), you would need to make sure that all necessary files are in the corresponding folders, as they were while storing the model.

The other approach of creating the model first and loading the state_dict is more flexible, as you might change the actual file and folder structure and would just have to make sure to create a model with matching parameters.
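A minimal sketch of the state_dict approach (the `TinyNet` class and the path are placeholders, not the actual FF++ model):

```python
import os
import tempfile

import torch
import torch.nn as nn


class TinyNet(nn.Module):          # placeholder for your real model class
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)


path = os.path.join(tempfile.mkdtemp(), "tiny_net.pth")

# Save only the parameters, not the pickled class:
model = TinyNet()
torch.save(model.state_dict(), path)

# To load, first construct the model, then restore the parameters.
# This works even after files have moved, as long as the architecture
# (layer names and shapes) matches the saved state_dict.
restored = TinyNet()
restored.load_state_dict(torch.load(path, map_location="cpu"))
restored.eval()
```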

@Oscar_Rangel I encountered the same problem a few days ago. You have to replicate the model's GitHub directory structure to be able to open the file. To avoid this in the future, you can save the model's state_dict instead. Below is how I solved the error:

Hi @ayrts, I have tried your code to load the model, but it returns the same error as below…

ModuleNotFoundError: No module named 'network'

Could you help me to fix it? or could you provide me the .pth files? Thank you.

@Oscar_Rangel I fixed the error. You cannot load the model directly, since the pickled model references a package structure like

network
   |--- __init__.py
   |--- models.py
   |--- xception.py

Therefore, you need to create a folder named network, copy models.py and xception.py into it from FF++, and then you can import the model file. Remember to add __init__.py so you can import these two files without an absolute path.
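The layout above can be scripted; a sketch using a scratch directory (an assumption for illustration; in practice `network/` would live next to your notebook, and `models.py`/`xception.py` come from the FF++ repo):

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())      # stand-in for your working directory
pkg = root / "network"
pkg.mkdir()
(pkg / "__init__.py").touch()        # makes `network` an importable package

# Copy models.py and xception.py from the FF++ repo into pkg/ here.
# With `network` importable from the working directory, torch.load(...)
# can then resolve the `network.models` / `network.xception` references
# stored inside the pickle.
```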

Hey, thank you for this! But did you also get this error? And how did you solve it?