How to load finetuned torch hub models

Hello all,
I am currently working on a Kaggle competition for which I have to finetune an EfficientNet model.

I have a finetuned EfficientNet model that I loaded from torch.hub and saved using torch.save. But when I use torch.load to load the model and weights in a different notebook, I get a ModuleNotFoundError saying the 'Pytorch' module was not found.

Kindly suggest a good way to save and load a torch.hub model that does not require an active internet connection, since internet access is disabled in the submission notebook on Kaggle.

Store the trained state_dict of the model, create the model object in the other notebook, and load the state_dict afterwards. Based on the error message it seems as if PyTorch is not installed in the other environment, so I’m unsure if your issue is really related to loading the model only.
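
For example, something like this in the training notebook (using torchvision's efficientnet_b0 here purely as an example; swap in whichever architecture or hub entrypoint you actually finetuned):

import torch
from torchvision import models

# Create the model and finetune it as usual.
model = models.efficientnet_b0(weights="DEFAULT")
# ... finetuning ...

# Save only the parameters and buffers, not the pickled model object.
torch.save(model.state_dict(), "efficientnet_b0_finetuned.pth")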

Hey @ptrblck,
I do understand that, but I would like to know how to create the model object for a pretrained model.

For example

model = models.vgg16()

It would be really helpful if you could tell me how to build the model object for models created in this manner.

Thank you for the reply.

I’m not sure I understand the question correctly.
You could initialize a model with random parameters via your code snippet or generally as:

model = MyModel()

where MyModel is the model class. Afterwards, you could load the state_dict via model.load_state_dict(state_dict).
The difference between a pretrained and a randomly initialized model lies in the parameters and buffers, which are stored in the state_dict.
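
For your use case this would look roughly like the following (again assuming torchvision's efficientnet_b0 as an example; use whichever variant you actually trained):

import torch
from torchvision import models

# In the offline submission notebook: build the same architecture with random
# weights (weights=None avoids any download), then load the finetuned state_dict
# that you uploaded, e.g. as a Kaggle dataset.
model = models.efficientnet_b0(weights=None)
state_dict = torch.load("efficientnet_b0_finetuned.pth", map_location="cpu")
model.load_state_dict(state_dict)  # strict by default, so mismatched keys raise an error
model.eval()

If you replaced the classifier head for the competition, apply the same change before calling load_state_dict, otherwise the keys and shapes won't match.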


Oh, I understand it now. Thanks for the clarification!