Hello folks,
I have a small question to ask. I am currently running image recognition on the CIFAR-10 dataset. I have trained the model and saved its state_dict. The accuracy I am getting is 92.2 percent. The model is trained without bias terms and uses batch normalization, following the VGG architecture.
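For context, the network is built from blocks roughly like the sketch below (simplified; the channel sizes and classifier dimensions are placeholders I am showing for illustration, not the exact values):

import torch
import torch.nn as nn

class VGGNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Each conv block: conv (no bias) -> batch norm -> ReLU, twice, then 2x2 max pooling
        self.layer1 = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1, bias=False), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1, bias=False), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2, 2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(64, 128, 3, padding=1, bias=False), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1, bias=False), nn.BatchNorm2d(128), nn.ReLU(),
            nn.MaxPool2d(2, 2))
        self.layer3 = nn.Sequential(
            nn.Conv2d(128, 256, 3, padding=1, bias=False), nn.BatchNorm2d(256), nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1, bias=False), nn.BatchNorm2d(256), nn.ReLU(),
            nn.MaxPool2d(2, 2))
        # Classifier: dropout -> linear -> ReLU -> dropout -> linear (no bias)
        self.layer4 = nn.Sequential(
            nn.Dropout(), nn.Linear(256 * 4 * 4, 512, bias=False), nn.ReLU(),
            nn.Dropout(), nn.Linear(512, num_classes, bias=False))

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = x.reshape(x.size(0), -1)
        return self.layer4(x)

# After training, the weights are saved via the state_dict (file name is just an example):
# torch.save(net.state_dict(), 'vgg_cifar10.pth')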
However, when I load the dictionary of the trained model separately and run the following code, the accuracy drops to 45 percent. I am not sure why there is a drop of nearly 50 percentage points.
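The dictionary itself is loaded roughly like this (the file name is only an example), so model in the code below is the raw state_dict rather than an nn.Module:

import torch

model = torch.load('vgg_cifar10.pth')  # a dict of parameter tensors, e.g. model['layer1.0.weight']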
The code is as follows:
import torch
import torch.nn.functional as F

correct = 0
total = 0
for images, labels in valid_loader:
    images = images.to(device)
    labels = labels.to(device)
    # Block 1: conv -> batch norm -> ReLU, twice, then max pool
    op = F.conv2d(images, model['layer1.0.weight'], padding=1)
    op = obj1(op)
    op = F.relu(op)
    op = F.conv2d(op, model['layer1.3.weight'], padding=1)
    op = obj1(op)
    op = F.relu(op)
    op = obj(op)
    # Block 2
    op = F.conv2d(op, model['layer2.0.weight'], padding=1)
    op = obj2(op)
    op = F.relu(op)
    op = F.conv2d(op, model['layer2.3.weight'], padding=1)
    op = obj2(op)
    op = F.relu(op)
    op = obj(op)
    # Block 3
    op = F.conv2d(op, model['layer3.0.weight'], padding=1)
    op = obj3(op)
    op = F.relu(op)
    op = F.conv2d(op, model['layer3.3.weight'], padding=1)
    op = obj3(op)
    op = F.relu(op)
    op = obj(op)
    # Classifier: flatten -> dropout -> linear -> ReLU -> dropout -> linear
    op = op.reshape(op.size(0), -1)
    op = F.dropout(op)
    op = F.linear(op, model['layer4.1.weight'])
    op = F.relu(op)
    op = F.dropout(op)
    op = F.linear(op, model['layer4.4.weight'])
    _, predicted = torch.max(op, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum().item()
print('Validation Accuracy of the model on the images: {}%'.format((correct / total) * 100))
obj1, obj2, and obj3 are instances of a class that applies batch normalization, and obj is an instance of another class that performs max pooling (2x2 filter size with stride 2).
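They are roughly equivalent to the following (a sketch only; the channel counts are illustrative, and in my code the actual classes just wrap the corresponding nn modules):

import torch.nn as nn

obj1 = nn.BatchNorm2d(64)    # batch norm for the first block (64 channels is an example value)
obj2 = nn.BatchNorm2d(128)   # batch norm for the second block
obj3 = nn.BatchNorm2d(256)   # batch norm for the third block
obj = nn.MaxPool2d(kernel_size=2, stride=2)  # shared 2x2 max pooling with stride 2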