I’m using a pretrained VGG19 and fine-tuning its classifier part (the 3 linear layers) while keeping the convolutional layers frozen.
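My setup looks roughly like this (a minimal sketch, not my exact code; the torchvision weights argument, the optimizer choice, and the learning rate are placeholders, and I assume 6 output classes because my targets have 6 entries):

import torch
import torchvision

# Load the pretrained VGG19 and freeze the convolutional feature extractor
model = torchvision.models.vgg19(weights="IMAGENET1K_V1")
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last linear layer so the classifier outputs one value per class (6 here)
model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 6)

# Only the classifier parameters are updated
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=1e-3)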
criterion = torch.nn.MSELoss()
loss = criterion(output, target)
where
output = [13.7210, 1.6992, -0.1286, -0.9545, -0.9148, 2.3547]
and
target = [0., 0., 0., 0., 14., 1.]
(each element is the count of the respective class).
The calculated loss is 169.3941, which is effectively useless, since the overall loss tends to increase as the model sees more and more images. Why am I not getting the predictions closer to the targets?
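For reference, each training step looks roughly like this (again a sketch, reusing model, optimizer, and criterion from above; train_loader is a placeholder for my actual DataLoader of images and per-class count vectors):

model.train()
for images, counts in train_loader:
    optimizer.zero_grad()
    output = model(images)                    # shape: (batch_size, 6)
    loss = criterion(output, counts.float())  # MSE between predicted and true counts
    loss.backward()                           # gradients only reach the classifier layers
    optimizer.step()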