Suppose we have a CNN with five convolutional layers, each
defined as (in_channels, out_channels, kernel_size, stride, padding), and no linear layers. Consider a dataset such as CIFAR-10.
Conv1: (3, 32, 5, 1, 0)
Conv2: (32, 64, 5, 1, 0)
Conv3: (64, 128, 5, 1, 0)
Conv4: (128, 256, 5, 1, 0)
The output layer is itself a convolutional layer:
Conv5: (256, 10, *, 1, 0)
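A minimal PyTorch sketch of this architecture, assuming CIFAR-10's 32x32 inputs. The class name `AllConvNet` is just a placeholder, and the `*` in Conv5 is taken to be 16 here, since with 5x5 kernels, stride 1 and no padding the spatial size shrinks 32 -> 28 -> 24 -> 20 -> 16 by Conv4:

```python
import torch
import torch.nn as nn

class AllConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 5, 1, 0)
        self.conv2 = nn.Conv2d(32, 64, 5, 1, 0)
        self.conv3 = nn.Conv2d(64, 128, 5, 1, 0)
        self.conv4 = nn.Conv2d(128, 256, 5, 1, 0)
        # 16x16 kernel collapses the remaining 16x16 map to 1x1
        self.conv5 = nn.Conv2d(256, 10, 16, 1, 0)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x = torch.relu(self.conv3(x))
        x = torch.relu(self.conv4(x))
        return self.conv5(x).flatten(1)  # (N, 10) class logits

model = AllConvNet()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # one logit vector of length 10 per image
```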
Is it possible for the feature maps of one convolutional layer to be far different from those of the following layer? If so, how can we do it?
I want to calculate a loss between the feature maps and the original image for each convolutional layer, and add it to the usual cross-entropy loss, i.e.
Loss1 = my_Loss_func (feature maps, images)
Loss2 = torch.nn.CrossEntropyLoss()(outputs, labels)
loss = Loss1 + Loss2
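To compute Loss1 per layer you first need the intermediate feature maps. One hedged way to collect them is forward hooks; the two-layer `nn.Sequential` below is only a stand-in for the real model:

```python
import torch
import torch.nn as nn

# Stand-in model with two conv layers (the real one has five)
model = nn.Sequential(
    nn.Conv2d(3, 32, 5), nn.ReLU(),
    nn.Conv2d(32, 64, 5), nn.ReLU(),
)

feature_maps = {}

def save_output(name):
    # Hook that stores the layer's output under its module name
    def hook(module, inputs, output):
        feature_maps[name] = output
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        module.register_forward_hook(save_output(name))

_ = model(torch.randn(2, 3, 32, 32))
for name, fm in feature_maps.items():
    print(name, fm.shape)  # e.g. layer '0' gives (2, 32, 28, 28)
```

After one forward pass, `feature_maps` holds every conv layer's output, so each can be fed to `my_Loss_func` and the per-layer losses summed with Loss2.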
But I think Loss1 will raise a size-mismatch error, because for one image the feature maps have out_channels channels (say 32 if it's Conv1) instead of 3, and a different spatial size.
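One hedged way to make Loss1 well-defined despite the mismatch: collapse the feature channels to one map by averaging, convert the image to one channel the same way, and resize the image to the feature map's spatial size before comparing. The name `my_loss_func` and the choice of MSE here are illustrative assumptions, not a fixed recipe:

```python
import torch
import torch.nn.functional as F

def my_loss_func(feature_maps, images):
    # feature_maps: (N, C, h, w); images: (N, 3, H, W)
    fmap = feature_maps.mean(dim=1, keepdim=True)   # (N, 1, h, w)
    gray = images.mean(dim=1, keepdim=True)         # (N, 1, H, W)
    # Shrink the image to the feature map's spatial size
    gray = F.interpolate(gray, size=fmap.shape[-2:],
                         mode="bilinear", align_corners=False)
    return F.mse_loss(fmap, gray)

# Conv1-sized feature maps (32 channels, 28x28) vs. CIFAR-10 images
loss1 = my_loss_func(torch.randn(4, 32, 28, 28),
                     torch.randn(4, 3, 32, 32))
print(loss1)  # a scalar, so it can be added to the cross-entropy loss
```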
- Does the way I am defining Loss1 make it a discriminative loss? To be honest, I am not really sure. But I have to use a discriminative loss, and I did not find many resources on how it can be implemented in the context of feature maps.