Unable to load model state_dict using torch.utils.model_zoo.load_url()

Hello!
I have a PyTorch model state dict saved on GitHub. Whenever I try to load the .pth file using torch.utils.model_zoo.load_url(), I get this error:

File "/Users/ayushman/Desktop/retinanet_pet_detector/utils.py", line 17, in get_model
    state_dict = model_zoo.load_url(url, map_location="cpu", progress=True)
  File "/Users/ayushman/opt/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/hub.py", line 504, in load_state_dict_from_url
    raise RuntimeError('Only one file(not dir) is allowed in the zipfile')
RuntimeError: Only one file(not dir) is allowed in the zipfile

My .pth file is saved as a zip file with the following structure:

resnet34-retinanet-weights.zip
  |- resnet34-retinanet-weights.pth

I can see that the zip file gets downloaded without any problem, and according to the docs the downloaded file can be a zip file.
I also tried manually downloading the zip and loading the checkpoint file resnet34-retinanet-weights.pth, and that works.
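
For reference, this is roughly what I mean by inspecting and loading it manually (just a sketch, assuming the zip has already been downloaded into the working directory):

import zipfile
import torch

# hub.py's zip handling requires the archive to contain exactly one
# non-directory member, so listing the members shows whether extra entries
# (directory entries, __MACOSX folders, etc.) are sneaking in.
with zipfile.ZipFile("resnet34-retinanet-weights.zip") as zf:
    for member in zf.infolist():
        print(member.filename, member.is_dir())
    zf.extractall(".")

# Loading the extracted checkpoint directly works without any problem.
state_dict = torch.load("resnet34-retinanet-weights.pth", map_location="cpu")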

This is my code, by the way:

from torch.utils import model_zoo

url = "https://github.com/benihime91/retinanet_pet_detector/releases/download/retinanet_v1.0/resnet34-retinanet-weights.zip"

def get_model():
    model = Retinanet(num_classes=37, backbone_kind="resnet34")
    state_dict = model_zoo.load_url(url, map_location="cpu", progress=True)
    model.load_state_dict(state_dict)
    return model

I even tried:

def get_model():
    model = Retinanet(num_classes=37, backbone_kind="resnet34")
    state_dict = torch.hub.load_state_dict_from_url(
        url, map_location="cpu", progress=True
    )
    model.load_state_dict(state_dict)
    return model

Same error!

I suggest you save your model state dict as a “.pth.tar” file instead of compressing the .pth file into a zip file.
Also, I think you can get away with just renaming the .pth extension to .pth.tar and not zipping it at all.
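
Something along these lines (just a sketch; model stands in for your trained Retinanet instance):

import torch

# Save the state dict straight to a .pth.tar file instead of zipping the
# .pth afterwards; load_state_dict_from_url() will then torch.load() it
# directly rather than trying to unpack an archive.
torch.save(model.state_dict(), "resnet34-retinanet-weights.pth.tar")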

@Henry_Chibueze
So my file should look like this: random_name.pth.tar. Is this what you mean?
I’ll give it a try…

@Henry_Chibueze
Didn’t work, same error:

Downloading: "https://github.com/benihime91/retinanet_pet_detector/releases/download/retinanet_v1.0/resnet34-retinanet-weights.pth.tar" to /Users/ayushman/.cache/torch/checkpoints/resnet34-retinanet-weights.pth.tar
100%|██████████| 117M/117M [01:36<00:00, 1.27MB/s]
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      1 url = 'https://github.com/benihime91/retinanet_pet_detector/releases/download/retinanet_v1.0/resnet34-retinanet-weights.pth.tar'
----> 2 state_dict = torch.hub.load_state_dict_from_url(url, map_location='cpu')

~/opt/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/hub.py in load_state_dict_from_url(url, model_dir, map_location, progress, check_hash)
    502             members = cached_zipfile.infolist()
    503             if len(members) != 1:
--> 504                 raise RuntimeError('Only one file(not dir) is allowed in the zipfile')
    505             cached_zipfile.extractall(model_dir)
    506             extraced_name = members[0].filename

RuntimeError: Only one file(not dir) is allowed in the zipfile

What is your PyTorch version? Is it v1.6?

My PyTorch version is 1.5.1.

If that’s the case, I think you should set the zip file serialization to False in torch.save() and re-save your model.

Let’s hope this works

The zip file serialization flag _use_new_zipfile_serialization is set to False by default in torch.save(). I did not set it to True.
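
One way to check which format the file was actually written in: checkpoints saved with the new serialization are themselves zip archives, so zipfile.is_zipfile() tells the two apart (a minimal sketch, assuming the .pth file is in the working directory):

import zipfile

# True  -> saved with _use_new_zipfile_serialization=True (zip-based format)
# False -> legacy pickle-based format
print(zipfile.is_zipfile("resnet34-retinanet-weights.pth"))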

Could it be that torch.hub.load_state_dict_from_url() is using the new serialization mode to load the model state?
If what I’m thinking is the case, you might want to upgrade to v1.6.

The error is obviously a serialization error. Before you update to v1.6, try setting the serialization to True in torch.save() and re-save your model; let’s see if it’s torch.hub.load_state_dict_from_url() that is causing the issue.
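
Roughly like this (a sketch; model again stands in for your trained Retinanet instance):

import torch

# Re-save the checkpoint with the zip-based serialization enabled, then
# upload the resulting .pth file as-is instead of zipping it.
torch.save(
    model.state_dict(),
    "resnet34-retinanet-weights.pth",
    _use_new_zipfile_serialization=True,
)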

Maybe that’s it. I trained in Google Colab, and I just noticed Colab uses torch version 1.6, where _use_new_zipfile_serialization is True by default.

OK cool, it’s nice to be of help :ok_hand:t2:

@benihime91 How can we fix this issue? I’m trying to use the pretrained weights, so I don’t have control over how they were saved.
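
If re-uploading the weights isn’t an option, one possible workaround (just a sketch, assuming the release zip above and that your local torch.load() can read the checkpoint) is to download and unpack the archive yourself instead of going through load_state_dict_from_url():

import zipfile
import torch
from torch.hub import download_url_to_file

url = "https://github.com/benihime91/retinanet_pet_detector/releases/download/retinanet_v1.0/resnet34-retinanet-weights.zip"

# Download the release asset, unpack it manually, and load the contained
# .pth with torch.load(), bypassing hub's single-file zip check entirely.
download_url_to_file(url, "resnet34-retinanet-weights.zip", progress=True)
with zipfile.ZipFile("resnet34-retinanet-weights.zip") as zf:
    zf.extractall(".")
state_dict = torch.load("resnet34-retinanet-weights.pth", map_location="cpu")

Then load state_dict into the Retinanet model with model.load_state_dict(state_dict), as in the snippets above.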