Weird error when fine-tuning the pretrained ResNet model

Hi all,

I am fine-tuning the resnet50 model as follows:

classifier = resnet50(pretrained=False)
classifier.fc = nn.Linear(2048, 256)
for param in classifier.parameters():  # for NOT freezing the model
    param.requires_grad = True

classifier_criterion = nn.CrossEntropyLoss() # loss function of classifier
classifier_criterion.requires_grad=True
classifier_optimizer = optim.SGD(classifier.parameters(), lr=1e-2, momentum=0.9) # classifier optimizer

input_classifier = torch.FloatTensor(opt.batchSize, opt.output_nc,256,256)
classifier_output = torch.FloatTensor(256)
classifier_criterion=classifier_criterion.cuda()

classifier_output=Variable(classifier_output,requires_grad=True)

img is an image tensor of dimensions 1x3x256x256

input_classifier = transform_classifier(img)

classifier_optimizer.zero_grad()

classifier_output.data = classifier(input_classifier)
#label.requires_grad=True
#classifier_output.requires_grad=True
classifier_loss = classifier_criterion(classifier_output, label)
classifier_loss.requires_grad = True
classifier_loss.backward() # this loss has to be backpropagated to G as well
classifier_optimizer.step() # Does the update

Now I am getting the following error:

Traceback (most recent call last):
  File "train_finetune_cscrv.py", line 285, in <module>
    train(epoch)
  File "train_finetune_cscrv.py", line 203, in train
    classifier_output.data = classifier(input_classifier)
  File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "build/bdist.linux-x86_64/egg/torchvision/models/resnet.py", line 139, in forward
  File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 254, in forward
    self.padding, self.dilation, self.groups)
  File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/nn/functional.py", line 52, in conv2d
    return f(input, weight, bias)
TypeError: argument 0 is not a Variable

#---------------end------------------------------

I have tried a lot but I am not able to fix it. On the thread "NN Tutorial: Argument 0 is not a Variable" they suggest updating torch.

As mentioned in that thread, when I make the input to the classifier a Variable as follows:
input_classifier=Variable(input_classifier)

I get the following error:

  File "train_finetune_cscrv.py", line 285, in <module>
    train(epoch)
  File "train_finetune_cscrv.py", line 203, in train
    classifier_output.data = classifier(input_classifier)
RuntimeError: Variable data has to be a tensor, but got Variable

@Soumith_Chintala We need your expertise here.

Kindly help!

What is transform_classifier doing?
You have to wrap your input Tensor into a Variable, but it seems that your Tensor already has a Variable inside?

Try the following with a random Tensor:

x = Variable(torch.randn(1, 3, 224, 224).float())
classifier(x)

Thanks @ptrblck for your reply.

I fixed it. The classifier needs both the input and the labels as Variables. I was missing that.
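For anyone who hits the same errors later, here is a minimal sketch of that fix on the old (pre-0.4) Variable API. It uses a tiny nn.Linear stand-in instead of resnet50 so it runs quickly; the model, shapes, and label values are placeholders, not the code from this thread:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable  # needed on pre-0.4 PyTorch

# Tiny stand-in classifier: 10 features in, 4 classes out
model = nn.Linear(10, 4)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

x = torch.randn(8, 10)                       # a batch of inputs (plain Tensor)
y = torch.LongTensor([0, 1, 2, 3, 0, 1, 2, 3])  # integer class labels

# Wrap BOTH the input and the label in Variable before the forward pass
x_var = Variable(x)
y_var = Variable(y)

optimizer.zero_grad()
output = model(x_var)            # the output is already a Variable --
loss = criterion(output, y_var)  # assign it directly, never via .data
loss.backward()
optimizer.step()
```

The key points are that input and label are both wrapped before the forward pass, and that the model output is assigned directly: no `.data` assignment and no manual `requires_grad` flags, since the loss tracks gradients automatically.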