How do I one-hot encode my outputs in a simple neural network?

I have a dataset with 21 input features and one class label ranging from 0 to 3.
I'm quite new to PyTorch, and I don't see how I can have multiple fully-connected layers with the one-hot-encoded target at the end. Here is what I have so far, after a day of experimentation:

import torch.nn as nn

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(21, 42),
            nn.ReLU(),
            nn.Linear(42, 21),
            nn.ReLU(),
            nn.Linear(21, 10),
            nn.ReLU(),
            nn.Linear(10, 1),
            nn.ReLU()
        )

    def forward(self, x):
        logits = self.linear_relu_stack(x)
        return logits

If nothing else, I was planning on normalizing the target column as well, but of course that is a very crude idea. I would like to know how I can use one-hot encoded labels as my target.
I’m using the Quickstart tutorial as a reference.

Hi Soumik!

It sounds like you wish to build a four-class classifier.

The most common approach would be to have the final layer of
your model be a Linear layer with four output features (and no
“activation” layer after it, so no final ReLU).

(As it stands, your final Linear layer has only one output feature
so your model is not making predictions for four classes.)
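Concretely, here is a minimal sketch of that fix, keeping your hidden-layer sizes (which are otherwise arbitrary choices):

```python
import torch
import torch.nn as nn

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(21, 42),
            nn.ReLU(),
            nn.Linear(42, 21),
            nn.ReLU(),
            nn.Linear(21, 10),
            nn.ReLU(),
            nn.Linear(10, 4),  # four output features, one per class
            # note: no ReLU here -- the outputs are raw logits
        )

    def forward(self, x):
        return self.linear_relu_stack(x)
```

A batch of shape [nBatch, 21] will then produce logits of shape [nBatch, 4].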

You will likely use CrossEntropyLoss as your loss criterion.

The output of your model will have shape [nBatch, 4] and will be
the input to CrossEntropyLoss. The target passed to
CrossEntropyLoss will have shape [nBatch] (no class dimension)
and is not one-hot encoded. Rather, target consists of integer
class labels whose values are in {0, 1, 2, 3}.

So neither the output of your model nor your target will be one-hot
encoded.
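Putting the pieces together, the loss computation looks like this (with random stand-in tensors in place of your real data):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# stand-in for your model's output: raw logits, shape [nBatch, 4]
logits = torch.randn(8, 4)

# stand-in for your labels: integer class values in {0, 1, 2, 3},
# shape [nBatch] -- no class dimension, not one-hot encoded
target = torch.randint(0, 4, (8,))

loss = criterion(logits, target)  # a scalar
```

CrossEntropyLoss applies log-softmax to the logits internally, which is why no softmax (or ReLU) layer belongs at the end of the model.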

Best.

K. Frank