CrossEntropyLoss LongTensor error


I am doing the following, with both `input` and `target` being `Variable`s, but I end up with the following error, which I am not understanding:

File "", line 93, in
loss = criterion(input, target)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/modules/", line 202, in __call__
result = self.forward(*input, **kwargs)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/modules/", line 316, in forward
self.weight, self.size_average)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/", line 452, in cross_entropy
return nll_loss(log_softmax(input), target, weight, size_average)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/", line 367, in log_softmax
return _functions.thnn.LogSoftmax()(input)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/_functions/thnn/", line 110, in forward
self._backend = type2backend[type(input)]
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/_thnn/", line 15, in __getitem__
return self.backends[name].load()
KeyError: <class 'torch.LongTensor'>

Any idea why?

Is the input Variable containing a torch.LongTensor? I think CrossEntropyLoss is only implemented for FloatTensor and DoubleTensor.
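To illustrate: the `KeyError` comes from the `log_softmax` step inside `cross_entropy`, which only has floating-point backends. A minimal sketch of the failure and the fix (note: recent PyTorch versions surface this as a `RuntimeError` rather than the `KeyError` above, and no longer need `Variable` wrappers):

```python
import torch
import torch.nn.functional as F

# log_softmax on an integer (Long) tensor fails, because softmax is only
# implemented for floating-point dtypes.
long_input = torch.tensor([[1, 2, 3]])  # dtype: torch.int64 (LongTensor)
try:
    F.log_softmax(long_input, dim=1)
except RuntimeError as e:
    print("failed:", e)

# Converting the *input* to float fixes this step:
out = F.log_softmax(long_input.float(), dim=1)
print(out.shape)  # (1, 3)
```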

I tried with a FloatTensor for the input and I got the following:

TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor, bool, NoneType, torch.FloatTensor), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight)

That is why I used a LongTensor :confused:


input should be a FloatTensor and target should be a LongTensor.
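A minimal working sketch of that combination (using the current tensor API, without `Variable` wrappers, which newer PyTorch versions no longer require):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# input: raw scores (logits) as a FloatTensor, one row per sample
input = torch.randn(4, 10)           # batch of 4 samples, 10 classes

# target: class indices as a LongTensor (int64 is the default here)
target = torch.tensor([1, 0, 4, 9])

loss = criterion(input, target)
print(loss.item())  # a non-negative scalar
```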


Ohh, I mixed up the order of the parameters in the error message.
Thank you, and sorry about that.

I am just adding one last question:
I get this assertion failure:

Assertion `THIndexTensor_(size)(target, 0) == batch_size' failed
Do target and input need to have the same size?

input should be (batch_size, n_label) and target should be (batch_size) with values in [0, n_label-1].
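A quick sketch of those shapes, including the mismatch that triggers the assertion (the exact exception type for the mismatch can vary by PyTorch version, so it is caught broadly here):

```python
import torch
import torch.nn as nn

batch_size, n_label = 8, 5
criterion = nn.CrossEntropyLoss()

input = torch.randn(batch_size, n_label)           # shape (batch_size, n_label)
target = torch.randint(0, n_label, (batch_size,))  # shape (batch_size,), values in [0, n_label-1]

loss = criterion(input, target)  # works: batch sizes match

# A target with a different first dimension fails the batch-size check:
bad_target = torch.randint(0, n_label, (batch_size - 1,))
try:
    criterion(input, bad_target)
except Exception as e:
    print("mismatch:", e)
```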