Classification without one-hot encoding

Hi,

Can we code a classification without one-hot encoding? For example:

x, y = dataloader() ##Load input x and label y (integer scalar)
out = model(x)
z = torch.round(out) ##Round the scalar "out" to an integer scalar "z"
loss = criteria(z, y)

If this is possible, how do we treat backprop through the rounding?

  • You cannot backprop through round.
  • You could take the (squared, or similar) difference between the raw output and y, i.e. solve a regression problem; see the sketch after this list.
  • Regression on class numbers doesn’t work particularly well for classification in general. (We would not be making the distinction if it did.)
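
A minimal sketch of that regression setup (the toy model and all dimensions here are made up for illustration). Note that the MSE is computed on the raw float output, so round() never appears in the backward pass; rounding only happens at inference:

import torch
import torch.nn as nn

# Hypothetical toy network with a single (float) scalar output per sample,
# trained as a regression against the integer class label.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 10)                 # dummy batch of 8 samples
y = torch.randint(0, 5, (8,)).float()  # integer class labels, cast to float

out = model(x).squeeze(1)              # raw float output, no rounding here
loss = criterion(out, y)               # regress the output onto the label
loss.backward()                        # differentiable: round() is not used
optimizer.step()

# Rounding happens only at inference, outside of backprop:
with torch.no_grad():
    z = model(x).squeeze(1).round().long()  # integer class predictions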

Best regards

Thomas

Hi NaN!

If you’re asking specifically about whether we can avoid
one-hot encoding the target labels, the answer is yes.

Let’s say we have a classification problem with 5 classes.
The most common approach (probably) is to build a network
whose output (for a single sample) is a length-5 vector (of
real numbers, not integers). But the label (for a single sample)
is a single integer class label (so not one-hot encoded).

Then we run the output and label through, for example,
nn.CrossEntropyLoss as the loss criterion.

Again, in this scheme, the output of your network is a vector of
length number-of-classes, but your labels are single numbers.
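
Here is a minimal, self-contained sketch of this scheme (the toy model and dimensions are made up for illustration):

import torch
import torch.nn as nn

# The network outputs a length-5 vector of raw scores per sample, while
# the targets are plain integer class labels -- no one-hot encoding anywhere.
num_classes = 5
model = nn.Linear(10, num_classes)       # stand-in for a real network
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 10)                   # dummy batch of 8 samples
y = torch.randint(0, num_classes, (8,))  # integer labels, shape (8,), dtype long

out = model(x)                           # shape (8, 5): one score per class
loss = criterion(out, y)                 # CrossEntropyLoss takes integer labels
loss.backward()

# At inference, the predicted class is the index of the largest score:
pred = out.argmax(dim=1)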

Best.

K. Frank

@KFrank -san,

Thank you, yes, that is what I want to do.
Just make the output layer a single output (a float scalar) and train with that value against the label (also as a float scalar). After that, apply rounding to the output to get an exact integer at inference time.
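
For example, the inference step could look like this rough sketch (assuming 5 classes, with made-up example outputs; here "cap" is taken to mean clamping the rounded value into the valid class range):

import torch

num_classes = 5
out = torch.tensor([0.2, 3.7, 4.9, -0.3])        # example raw float outputs
z = out.round().clamp(0, num_classes - 1).long()  # exact integer classes
print(z)                                          # tensor([0, 4, 4, 0])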