ModuleNotFoundError: No module named 'modeling'

I am trying to load a PyTorch model:

import torch
import io
from azure.storage.blob import BlobServiceClient, BlobClient

blob_service_client_instance = BlobServiceClient(account_url=STORAGEACCOUNTURL, credential=STORAGEACCOUNTKEY)

container_client = blob_service_client_instance.get_container_client(container=CONTAINERNAME)

blob_url = f"{STORAGEACCOUNTURL}/{CONTAINERNAME}/{BLOBNAME}/model.pt"

blob_client = BlobClient.from_blob_url(blob_url=blob_url, credential=STORAGEACCOUNTKEY)
with io.BytesIO() as model_file:
    print("Writing Model")
    model_file.write(blob_client.download_blob().readall())
    print(model_file)
    model_file.seek(0)
    print(model_file)
    print("loading pytorch model")
    adv_model = torch.load(model_file, map_location=torch.device('cpu'))

The error is as follows:

<_io.BytesIO object at 0x0000020132C29D60>
<_io.BytesIO object at 0x0000020132C29D60>
loading pytorch model
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In [12], line 22
     20 print(model_file)
     21 print("loading pytorch model")
---> 22 adv_model = torch.load(model_file, map_location=torch.device('cpu'))

File ~\Anaconda3\envs\modelmesh\lib\site-packages\torch\serialization.py:607, in load(f, map_location, pickle_module, **pickle_load_args)
    605             opened_file.seek(orig_position)
    606             return torch.jit.load(opened_file)
--> 607         return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
    608 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)

File ~\Anaconda3\envs\modelmesh\lib\site-packages\torch\serialization.py:882, in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
    880 unpickler = UnpicklerWrapper(data_file, **pickle_load_args)
    881 unpickler.persistent_load = persistent_load
--> 882 result = unpickler.load()
    884 torch._utils._validate_loaded_sparse_tensors()
    886 return result

File ~\Anaconda3\envs\modelmesh\lib\site-packages\torch\serialization.py:875, in _load.<locals>.UnpicklerWrapper.find_class(self, mod_name, name)
    873 def find_class(self, mod_name, name):
    874     mod_name = load_module_mapping.get(mod_name, mod_name)
--> 875     return super().find_class(mod_name, name)

ModuleNotFoundError: No module named 'modeling'

Why am I getting this ModuleNotFoundError, and what is the solution?

It seems you are trying to load the entire model object directly, instead of using the recommended approach of creating a model instance first and loading only its state_dict.
When a whole model is saved with torch.save(model, PATH), PyTorch pickles the model object, including the import path of its class (here, a module named modeling). To load it back, you would have to make sure the source file structure in your working environment matches the environment the model was saved from; otherwise the unpickler cannot import that module and raises the ModuleNotFoundError you are seeing.
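The recommended pattern can be sketched with an in-memory buffer (the MyModel class and its layer sizes are hypothetical stand-ins for whatever the original modeling module defines):

```python
import io

import torch
import torch.nn as nn

# Hypothetical stand-in for the class defined in the saver's 'modeling' module.
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = MyModel()

# Save only the learned parameters (the state_dict), not the pickled model object.
buffer = io.BytesIO()
torch.save(model.state_dict(), buffer)
buffer.seek(0)

# Load: instantiate the class yourself, then restore the parameters.
restored = MyModel()
restored.load_state_dict(torch.load(buffer, map_location="cpu"))
restored.eval()
```

Saved this way, the checkpoint contains only tensors, so loading it never depends on the saver's module layout.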

I just started with PyTorch and see that this problem plagues many other users. There are a lot of questions like this one, with answers that assume knowledge beginner users don't have.

Documentation at torch.load — PyTorch 1.13 documentation doesn't mention creating a model instance. Neither does the tutorial at Saving and loading models for inference in PyTorch — PyTorch Tutorials 1.12.1+cu102 documentation.

The book "Machine Learning With PyTorch and Scikit-Learn" implies that a .pt file can hold both "the model architecture and the weights".

Is there a more comprehensive discussion of this subject somewhere?

"Creating a model instance" refers to initializing any model, e.g. via:

model = MyModel()

or

net = Net()

as done in the linked tutorial.

For more information about the disadvantages of saving the model directly, you could check this tutorial.
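To see why the traceback points at a module named modeling: torch.save(model, ...) pickles the model object together with the import path of its class. A minimal sketch that simulates this (the module name modeling and the Net class are illustrative; weights_only=False is passed explicitly, which assumes PyTorch >= 1.13):

```python
import io
import sys
import types

import torch
import torch.nn as nn

# Simulate the saver's environment: a module named 'modeling' that
# defines the model class (both names are illustrative).
modeling = types.ModuleType("modeling")

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 2)

Net.__module__ = "modeling"  # pretend Net was defined in modeling.py
modeling.Net = Net
sys.modules["modeling"] = modeling

# Saving the whole model pickles a reference to 'modeling.Net'.
buffer = io.BytesIO()
torch.save(modeling.Net(), buffer)

# In a different environment, 'modeling' is not importable, so the
# unpickler cannot resolve the class and fails.
del sys.modules["modeling"]
buffer.seek(0)
err = None
try:
    torch.load(buffer, map_location="cpu", weights_only=False)
except ModuleNotFoundError as exc:
    err = exc
print(err)  # No module named 'modeling'
```

This is exactly the failure mode in the original question: the checkpoint references modeling, which exists in the training environment but not in the loading environment.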