I am working on a dog breed classification problem and I am using a pretrained model for training.
```python
import torch.nn as nn
from torchvision import models

# Create the model from a pretrained ResNet-50 backbone
model = models.resnet50(pretrained=True)

# Freeze the pretrained weights
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new classifier head
num_features = model.fc.in_features
fc_layers = nn.Sequential(
    nn.Linear(num_features, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.1),
    nn.Linear(4096, num_classes),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.1),
)
model.fc = fc_layers
```
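To double-check that only the new head is trainable after freezing, I ran a quick sanity check. The snippet below uses a small stand-in backbone instead of ResNet-50 so it runs without downloading weights; the layer sizes and the 120-class head are placeholders, but the freezing pattern is the same:

```python
import torch
import torch.nn as nn

# Stand-in backbone (hypothetical sizes) to illustrate the freezing pattern;
# with torchvision's resnet50 the same loop over parameters() applies.
backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2048))
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(2048, 120)  # 120 classes is an assumption for illustration
model = nn.Sequential(backbone, head)

# Only the head's parameters should still require gradients
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the head's weight and bias remain trainable
```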
This is how I have modified the pretrained model. I am using Adam as the optimizer and a batch size of 32.
```python
import torch
import torch.optim as optim

optimizer = optim.Adam(model.parameters(), lr=0.001)

# Hand-written cross-entropy loss, summed over the batch
def criterion(yhat, y):
    # Logit of the true class for each sample
    label = yhat.gather(1, y.view(-1, 1)).squeeze()
    # Softmax probability of the true class
    softmax_output = torch.exp(label) / torch.sum(torch.exp(yhat), axis=1)
    loss = -torch.log(softmax_output)
    return loss.sum()
```
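To rule out a bug in the hand-written loss itself, I compared it against `F.cross_entropy` with sum reduction on random tensors (the 120-class count and batch size here are just for illustration):

```python
import torch
import torch.nn.functional as F

def criterion(yhat, y):
    label = yhat.gather(1, y.view(-1, 1)).squeeze()
    softmax_output = torch.exp(label) / torch.sum(torch.exp(yhat), axis=1)
    return (-torch.log(softmax_output)).sum()

torch.manual_seed(0)
yhat = torch.randn(4, 120)  # 4 samples, 120 classes (assumption)
y = torch.randint(0, 120, (4,))

manual = criterion(yhat, y)
builtin = F.cross_entropy(yhat, y, reduction='sum')
print(torch.allclose(manual, builtin, atol=1e-5))  # the two losses agree
```

So the loss function matches the built-in one on these inputs, although `F.cross_entropy` would be the numerically safer choice since it uses the log-sum-exp trick internally.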
As the dataset is quite large, I am printing the loss after every iteration to keep track of it.
Within the first epoch itself, I start getting the same loss value after 15 to 20 iterations. The loss value is 153.109, so I don't think this can be the convergence point. I am not sure how to get out of this situation.
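For reference, if the number of classes were 120 (an assumption on my part, that is the Stanford Dogs breed count), a network that outputs a uniform distribution over classes would give a summed loss over a batch of 32 of roughly:

```python
import math

# Chance-level cross-entropy, summed over a batch of 32,
# assuming 120 classes (hypothetical count)
chance_loss = 32 * math.log(120)
print(round(chance_loss, 3))
```

which is essentially the value I am stuck at, so the model seems to be predicting at chance level rather than converging.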