This problem happens when I load a pretrained model with pytorch_pretrained_bert:

ssh://zpf@10.12.41.181:22/home/zpf/.virtualenvs/pytorch/bin/python -u /home/zpf/Desktop/zpf/ED_NEEL/train.py
Traceback (most recent call last):
File "/home/zpf/Desktop/zpf/ED_NEEL/train.py", line 146, in <module>
train(logger, False)
File "/home/zpf/Desktop/zpf/ED_NEEL/train.py", line 66, in train
model = gcn_bert0(t=0.4, adj_file='data/god_adj.pkl').cuda()
File "/data/zpf/ED_NEEL/model.py", line 178, in gcn_bert0
model = BertModel.from_pretrained('./bert_pretrain')
File "/home/zpf/.virtualenvs/pytorch/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 603, in from_pretrained
state_dict = torch.load(weights_path, map_location='cpu')
File "/home/zpf/.virtualenvs/pytorch/lib/python3.7/site-packages/torch/serialization.py", line 585, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/zpf/.virtualenvs/pytorch/lib/python3.7/site-packages/torch/serialization.py", line 772, in _legacy_load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: storage has wrong size: expected 3981035933985103798 got 768

I downloaded the pretrained model from the web and put it in ./bert_pretrained. How can I solve this? Thanks a lot!
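
A quick way to check whether the copy on the server is intact is to load the weights file on its own. A minimal sketch (the path below is an assumption based on my setup; pytorch_model.bin is the file name that pytorch_pretrained_bert's from_pretrained() expects inside the model directory):

```python
import os
import torch

# Assumed location of the weights file inside the pretrained-model directory.
weights_path = "./bert_pretrained/pytorch_model.bin"

# A truncated or corrupted transfer usually shows up as a wrong file size.
print("file size (bytes):", os.path.getsize(weights_path))

# If the file is corrupted, this raises the same "storage has wrong size"
# RuntimeError as in the traceback above; otherwise it prints the tensor count.
state_dict = torch.load(weights_path, map_location="cpu")
print("number of tensors:", len(state_dict))
```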

This happened because I used FTP to push the pretrained model directly to the GPU server, which corrupted the binary weights file (FTP in ASCII/text mode mangles binary data). Now I compress it first and then push it, and the problem is fixed. :grinning:
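
In case it helps anyone else: one way to confirm that a transfer did not corrupt the weights is to compare a checksum on both machines before loading the model. A minimal sketch (the path is an assumption matching the path used in model.py; `sha256sum pytorch_model.bin` on the command line works just as well):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Run this on both the local machine and the GPU server;
# if the two digests differ, the transfer corrupted the file.
print(sha256_of("./bert_pretrain/pytorch_model.bin"))
```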