Multi-Class Classifier

I’m quite confused about how to construct my neural network.
My input has dimension (140000 x 200) and my output has dimension (140000).

My input is basically a collection of 200-dimensional float vectors, denoted by x_train_regressors.
My output consists of the classes represented by the numbers 1, 2, 3, 4, 5, and is denoted by y_train_exact_target.

I am trying to build and train a network to classify these samples properly, but I am not sure how to handle/format the output values. My code is below:

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.layer1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.layer2 = nn.Linear(hidden_size, num_classes)
        
    def forward(self, x):
        out = self.layer1(x)
        out = self.relu(out)
        out = self.layer2(out)
        return out
exact_net = NeuralNet(200 , 1000, 5)
loss_function = nn.BCELoss()
exact_net_opt = torch.optim.SGD(exact_net.parameters(), lr = 0.02)

train_set = Variable(torch.from_numpy(np.array(x_train_regressors))).float()
test_set = Variable(torch.LongTensor(y_train_exact_target)).long()


epochs = 50
for epochs in range(epochs):
    exact_net_opt.zero_grad()
    out = exact_net(instance)
    loss = loss_function(out, label)
    loss.backward()
    exact_net_opt.step()
        
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-187-5e98211ac56e> in <module>
      4     exact_net_opt.zero_grad()
      5     out = exact_net(instance)
----> 6     loss = loss_function(out, label)
      7     loss.backward()
      8     exact_net_opt.step()

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    484 
    485     def forward(self, input, target):
--> 486         return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
    487 
    488 

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
   1595     if input.nelement() != target.nelement():
   1596         raise ValueError("Target and input must have the same number of elements. target nelement ({}) "
-> 1597                          "!= input nelement ({})".format(target.nelement(), input.nelement()))
   1598 
   1599     if weight is not None:

ValueError: Target and input must have the same number of elements. target nelement (1) != input nelement (5)

If you want to classify each sample into one of your 5 classes, try to use nn.CrossEntropyLoss.
Also, shift the target values to [0, nb_classes-1], so [0, 4] in your case, as otherwise you’ll get an out-of-bounds error.
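
For reference, here is a minimal sketch of those two changes applied to the code above. It reuses the names from the question (NeuralNet, x_train_regressors, y_train_exact_target) and does full-batch updates just for illustration:

exact_net = NeuralNet(200, 1000, 5)
loss_function = nn.CrossEntropyLoss()        # expects raw logits and class-index targets
exact_net_opt = torch.optim.SGD(exact_net.parameters(), lr=0.02)

inputs = torch.from_numpy(np.array(x_train_regressors)).float()   # shape [140000, 200]
targets = torch.LongTensor(y_train_exact_target) - 1              # shift labels [1, 5] -> [0, 4]

epochs = 50
for epoch in range(epochs):
    exact_net_opt.zero_grad()
    out = exact_net(inputs)                  # logits, shape [140000, 5]
    loss = loss_function(out, targets)       # targets, shape [140000]
    loss.backward()
    exact_net_opt.step()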


In this case, wouldn’t one-hot encoding be necessary?

No, for multi-class classification (one target class for each sample), the targets should hold the class indices.
Other frameworks often use one-hot encoded target vectors, but that is not necessary in PyTorch.
Have a look at the docs for more information.
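
For example, here is a tiny illustrative snippet (made-up values, 4 samples and 5 classes, assuming import torch and import torch.nn as nn) showing the expected target format:

target = torch.tensor([0, 3, 4, 1])            # class indices, shape [4]
# The equivalent one-hot matrix would have shape [4, 5]:
# [[1, 0, 0, 0, 0],
#  [0, 0, 0, 1, 0],
#  [0, 0, 0, 0, 1],
#  [0, 1, 0, 0, 0]]
# but nn.CrossEntropyLoss works directly with the index targets:

logits = torch.randn(4, 5)                     # raw model outputs
loss = nn.CrossEntropyLoss()(logits, target)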


Hey! Shouldn’t we add a softmax activation at the end when we want to do multi-class classification?


If you are using nn.CrossEntropyLoss, you should pass the logits without any non-linearity to the criterion, as internally F.log_softmax and nn.NLLLoss will be used.
Alternatively, you could apply F.log_softmax manually and pass the result to nn.NLLLoss.

So you usually don’t use a softmax activation, unless of course you are implementing some custom criterion.
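
To illustrate the equivalence, here is a quick made-up check with random logits (assuming import torch, import torch.nn as nn and import torch.nn.functional as F):

logits = torch.randn(8, 5)                     # [batch_size, num_classes], no softmax applied
target = torch.randint(0, 5, (8,))             # class indices

# nn.CrossEntropyLoss on raw logits ...
loss_a = nn.CrossEntropyLoss()(logits, target)

# ... matches F.log_softmax followed by nn.NLLLoss
loss_b = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)

print(torch.allclose(loss_a, loss_b))          # True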
