Weird documentation for k-dimensional CrossEntropyLoss?

Hello, I’m trying to use a 3D tensor as input to CrossEntropyLoss, and while reading the docs I found something weird. The documentation for nn.CrossEntropyLoss says K >= 2 in the case of k-dimensional loss (https://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss). However, the documentation for F.cross_entropy states K > 1 for the input and K >= 1 for the target (https://pytorch.org/docs/master/nn.html#torch.nn.functional.cross_entropy).

I don’t understand why one page requires K >= 2 when K = 1 seems to work in PyTorch 0.4.0. I think the documentation needs a fix.
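For reference, here is a minimal sketch of the K = 1 case I mean (shapes and values are just illustrative assumptions on my part):

```python
import torch
import torch.nn as nn

# Assumed K = 1 case: input of shape (N, C, d1), target of shape (N, d1)
loss_fn = nn.CrossEntropyLoss()
inp = torch.randn(2, 5, 4)            # N=2 samples, C=5 classes, d1=4
target = torch.randint(0, 5, (2, 4))  # class indices per spatial position
loss = loss_fn(inp, target)           # runs without error in 0.4.0
print(loss.item())
```

This runs without complaint for me even though the nn.CrossEntropyLoss docs seem to say K must be at least 2.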

Thank you
