I am getting the following error while using pickle:


AttributeError Traceback (most recent call last)
in ()
1
2 with open(fname, 'rb') as fid:
----> 3 model = pickle.load(fid)

AttributeError: Can't get attribute '_load_from_bytes' on <module 'torch.storage' from '/home/np9207/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/storage.py'>

The pickle file is a pre-trained model saved as .pkl.

Can you try loading the model using the PyTorch function torch.load()? Here is an example:

model = MyAutoEncoderClass()
model.load_state_dict(torch.load('./model.pkl'))

More examples can also be found at https://pytorch.org/tutorials/beginner/saving_loading_models.html

That won’t work for him; it is a pickle error. The pickle encoding used is different from the one used by PyTorch.

I tried; I am getting errors with PyTorch 0.4.1 and 1.0.0.

---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
in ()
----> 1 torch.load('./Downloads/Human3.6M-17J-ResNet50.pkl')

~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/serialization.py in load(f, map_location, pickle_module)
    356 f = open(f, 'rb')
    357 try:
--> 358 return _load(f, map_location, pickle_module)
    359 finally:
    360 if new_fd:

~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/serialization.py in _load(f, map_location, pickle_module)
    530 f.seek(0)
    531
--> 532 magic_number = pickle_module.load(f)
    533 if magic_number != MAGIC_NUMBER:
    534 raise RuntimeError("Invalid magic number; corrupt file?")

ModuleNotFoundError: No module named 'utils'

With PyTorch 0.4.0, the error is:

AttributeError: Can't get attribute '_load_from_bytes' on <module 'torch.storage' from '/home/user/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/storage.py'>

link to the data if anyone wants to help http://pascal.inrialpes.fr/data2/grogez/LCR-Net/pthmodels/Human3.6M-17J-ResNet50.pkl


I don’t recommend loading pickle files from the internet – it’s a huge security risk. Also, I don’t think anyone would be able to load the file without your model code … which leads me to my point: make sure you have also defined all the necessary classes and functions in the script where you are loading the file. In the future, I recommend using the state_dict approach, which is a bit more robust with respect to dependencies and file structures.
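For reference, the state_dict workflow looks roughly like this (TinyNet and the file name are made-up placeholders, not from the LCR-Net code):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
# Save only the weights, not the pickled class itself:
torch.save(model.state_dict(), "tiny_net.pth")

# Loading later only needs the class definition, not the original script
# or directory layout:
restored = TinyNet()
restored.load_state_dict(torch.load("tiny_net.pth"))
```

Because only tensors are stored, the checkpoint does not break when you move or rename the modules that define your model class.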

Any idea how I can extract the dict data from the .pkl file, with or without Python? I am using the out-of-the-box code from the team behind LCR-Net++.

I don’t think this is easily possible, because pickle is a binary format, so you have to load the file as a whole as far as I know. The easiest way to accomplish that is to use the exact same functions and classes you defined in the script that generated the pickle file and then try to load it. Once you have loaded the model, you can access the dict via model.state_dict().

(Also, like vahid suggested, are you sure that this is a regular pickle file and not one created via pytorch?)

Also note that you need to use the exact same PyTorch version etc.


This was definitely created by PyTorch, judging from the errors. The PyTorch version used to create the file is 0.4.1, but I am not able to figure out the utils module it needs. Thanks, I will try contacting the authors.

Yes, restoring .pkl files on another machine is usually tricky. If the authors provide a model.pkl file, it could technically be any of these three:

a) a model pickled via regular pickle use
b) a model saved via pickle using torch.save
c) a model state_dict saved via pickle using torch.save

So it’s probably best to ask the authors how they recommend loading it. You could also look through their other source code and see if you can find the file that created this pickle, which could give you additional clues.
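One workaround people sometimes try for the "No module named 'utils'" error is registering a stub module in sys.modules before unpickling, so that the lookup of the missing class succeeds. This is not an official fix and only works if the stub class is compatible with what the checkpoint actually stored; the sketch below simulates the situation with a toy class, since we don't have the real LCR-Net utils code:

```python
import pickle
import sys
import types

# Simulate a checkpoint whose objects reference a module "utils" that
# won't exist on the loading machine (hypothetical stand-in class).
mod = types.ModuleType("utils")

class Config:  # placeholder for whatever class the checkpoint stores
    def __init__(self, layers):
        self.layers = layers

Config.__module__ = "utils"
mod.Config = Config
sys.modules["utils"] = mod

data = pickle.dumps(Config(layers=50))

# On another machine, "utils" is not importable and loading fails:
del sys.modules["utils"]
try:
    pickle.loads(data)
except (ModuleNotFoundError, AttributeError) as exc:
    print("load failed:", exc)

# Workaround: register a stub module exposing the expected class name
# before loading (torch.load goes through the same pickle machinery):
stub = types.ModuleType("utils")
stub.Config = Config  # a compatible class with the same name
sys.modules["utils"] = stub
obj = pickle.loads(data)
```

With the real LCR-Net code, the cleaner equivalent is to put the authors' own utils.py on sys.path so pickle can import it directly.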

Hey, did you find a solution for this? I am running the LCR-Net code too, and I am facing this exact same issue!