nmxnql (June 5, 2018, 12:44am)
I am a PyTorch learner and I want to fine-tune the Xception network. When I run the following:
```python
import pretrainedmodels

model_name = 'xception'
model_ft = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')
```
I get the following error:
```
RuntimeError: Error(s) in loading state_dict for Xception:
    While copying the parameter named "block1.rep.0.pointwise.weight", whose dimensions in the model are torch.Size([128, 64, 1, 1]) and whose dimensions in the checkpoint are torch.Size([128, 64]).
```
Could you please tell me how to deal with it?
Could you tell us where the pretrained model is defined? It doesn’t seem to be included in the torchvision models.
PS: I’ve edited your post to add some code formatting, since it was quite hard to read.
nmxnql (June 5, 2018, 12:16pm)
I use the pretrainedmodels package; this is the link:
pretrained-models.pytorch - Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.
Thanks for the link. It’s a known issue in the repo with a posted workaround. The author, @Cadene, is apparently working on it.
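In the meantime, here is a minimal sketch of the kind of workaround discussed there (not the exact code from the repo): build the model without pretrained weights, download the checkpoint yourself, and unsqueeze the 2-D pointwise weights into the 4-D shape the model expects. It assumes `pretrainedmodels.pretrained_settings` exposes the checkpoint URL and that the pointwise weights are the only mismatched parameters.

```python
import torch.utils.model_zoo as model_zoo
import pretrainedmodels

# Build the model without loading the pretrained weights.
model_ft = pretrainedmodels.__dict__['xception'](num_classes=1000, pretrained=None)

# Download the checkpoint manually (URL assumed to be in the package's settings dict).
settings = pretrainedmodels.pretrained_settings['xception']['imagenet']
state_dict = model_zoo.load_url(settings['url'])

# The checkpoint stores pointwise conv weights as 2-D [out, in] tensors,
# while the model expects 4-D [out, in, 1, 1]; add the trailing dims.
for key, value in state_dict.items():
    if key.endswith('pointwise.weight') and value.dim() == 2:
        state_dict[key] = value.unsqueeze(-1).unsqueeze(-1)

model_ft.load_state_dict(state_dict)
```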
nmxnql (June 5, 2018, 12:28pm)
Thank you very much for your help. I am going to try it.