Cross entropy with softmax (4 outputs) with target being multichannel continuous values

Hi,

As the title says, I want to implement cross entropy with softmax on logits, with targets of shape [batchsize, C, H, W] whose values are in the range [0, 1). The values along axis 1 do not sum to 1.

torch.nn.functional.cross_entropy does not allow 4D targets.
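For illustration, the shapes I have in mind look something like this (dummy sizes):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 4, 8, 8)   # network output, (batchsize, C, H, W)
target = torch.rand(2, 4, 8, 8)    # continuous targets in [0, 1), same shape

# This is what I would like to compute, but cross_entropy expects integer
# class-index targets here, not a float (batchsize, C, H, W) tensor.
loss = F.cross_entropy(logits, target)
```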

How do I do this?

You cannot do this with cross entropy.

Cross entropy is used for classification problems where the length of your network's output is the number of classes you have.
So if you pass an image through the network, it outputs C values, where C is the number of classes, so the output has shape (N, C), with N the batch size. What cross entropy then does is compute probability scores over the outputs with softmax and take the negative log likelihood between the scored outputs and the target.

So, in other words, cross entropy only takes outputs of shape (N, C).
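In code, the usual classification setup looks roughly like this (made-up shapes):

```python
import torch
import torch.nn.functional as F

# (N, C) logits for a 5-class problem and one integer class label per sample.
logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))

# cross_entropy applies log-softmax over the class dimension and then
# takes the negative log likelihood of the correct class.
loss = F.cross_entropy(logits, labels)
```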

@Henry_Chibueze, the TensorFlow version of cross entropy, tf.nn.softmax_cross_entropy_with_logits, lets me pass a 4D target with continuous values by specifying an axis argument for the class dimension. I am trying to find the PyTorch equivalent of this.
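For context, the TensorFlow call I mean looks roughly like this (made-up shapes; axis=1 selects the class dimension):

```python
import tensorflow as tf

logits = tf.random.normal([2, 4, 8, 8])   # (N, C, H, W) network outputs
labels = tf.random.uniform([2, 4, 8, 8])  # continuous target values in [0, 1)

# axis tells TF which dimension holds the classes for the softmax.
loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits, axis=1)
```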

Hi Srinath!

Pytorch’s cross_entropy() takes targets that are integer class labels.
You can’t use so-called soft labels that are probabilities.

However, you can easily write your own version that does take soft
labels. See this thread:
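The core of it is only a couple of lines; a minimal sketch (the helper name is mine), assuming (N, C) logits and targets:

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, target):
    """Hypothetical helper: cross entropy against per-class target weights.

    logits: (N, C) raw network outputs.
    target: (N, C) non-negative weights (one-hot, soft labels, etc.).
    """
    log_probs = F.log_softmax(logits, dim=1)
    return -(target * log_probs).sum(dim=1).mean()

# With one-hot targets this matches cross_entropy on integer class labels.
logits = torch.randn(3, 5)
labels = torch.tensor([0, 2, 4])
one_hot = F.one_hot(labels, num_classes=5).float()
print(soft_cross_entropy(logits, one_hot), F.cross_entropy(logits, labels))
```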

Best.

K. Frank

Hi @KFrank, my target values do not sum to 1, that is, they are not soft labels. The values along the class axis are normalized Gaussian pulses with values between 0 and 1.

Well, I haven’t used TensorFlow before, and I haven’t seen what you are looking for in PyTorch.

Maybe you should try flattening the data to shape (N, C) before passing it to the cross entropy loss function, something like the sketch below.
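Just a shape sketch, not a full solution:

```python
import torch

# Made-up shapes: N=2 samples, C=4 classes, 8x8 spatial maps.
logits = torch.randn(2, 4, 8, 8)
target = torch.rand(2, 4, 8, 8)

# Move the class dimension last, then merge batch and spatial dimensions,
# so every pixel becomes its own (C,)-length sample of shape (N*H*W, C).
logits_2d = logits.permute(0, 2, 3, 1).reshape(-1, 4)
target_2d = target.permute(0, 2, 3, 1).reshape(-1, 4)

print(logits_2d.shape, target_2d.shape)  # torch.Size([128, 4]) for both
```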

Hi Srinath!

That’s okay. You can still use the “soft-label cross-entropy” I linked to
above – the math still goes through. It won’t – precisely speaking –
be a true cross-entropy (which compares two true probability
distributions), but it will give you a reasonable loss function.
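In your 4D case, the same idea looks something like this (my own sketch; the class dimension is assumed to be dim 1):

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy_2d(logits, target):
    """Sketch: per-pixel cross entropy against continuous (N, C, H, W) targets.

    The targets do not have to sum to 1 along the class dimension.
    """
    log_probs = F.log_softmax(logits, dim=1)          # softmax over classes
    return -(target * log_probs).sum(dim=1).mean()    # sum over C, mean over the rest

logits = torch.randn(2, 4, 8, 8)
target = torch.rand(2, 4, 8, 8)   # e.g. normalized Gaussian pulses in [0, 1)
loss = soft_cross_entropy_2d(logits, target)
```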

(And, to be sure, if you pass targets that don’t sum to one to
tf.nn.softmax_cross_entropy_with_logits(), nothing changes,
and you will still have the same issue.)

Best.

K. Frank
