Multi-target not supported at ..\aten\src\THNN/generic/ClassNLLCriterion.c:20

Hello People,

I am a newbie, working with the Adult dataset.

I would appreciate any help on how to fix this, please.

iter = 0
for epoch in range(num_epochs):
    for i, (inputs, labels) in enumerate(train_loader):

        # Load inputs as Variable
        inputs = Variable(X_trainTensor)
        labels = Variable(y_trainTensor).type(torch.LongTensor)

        # Clear gradients w.r.t. parameters
        optimizer.zero_grad()

        # Forward pass to get output/logits
        outputs = model(inputs.float())

        # Calculate loss: softmax --> cross entropy loss
        loss = criterion(outputs, labels)

        # Getting gradients w.r.t. parameters
        loss.backward()

        # Updating parameters
        optimizer.step()

Error:

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
     72
     73         # Calculate loss: softmax --> cross entropy loss
---> 74         loss = criterion(outputs, labels)
     75
     76         # Getting gradients w.r.t. parameters
...
RuntimeError: multi-target not supported at …\aten\src\THNN/generic/ClassNLLCriterion.c:20

What criterion are you using?
Can you check whether the labels tensor is of size (N,) and the outputs tensor is of size (N, C)? Here N = batch size and C = number of classes.
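
For reference, here is a minimal sketch of the shapes nn.CrossEntropyLoss expects (the tensors below are random placeholders, not your data):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

N, C = 100, 2                       # batch size, number of classes
outputs = torch.randn(N, C)         # raw logits, size (N, C)
labels = torch.randint(0, C, (N,))  # class indices, size (N,), values in [0, C-1]

loss = criterion(outputs, labels)   # no "multi-target" error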

Thank you very much, Raghul.

I have the following:



print(inputs.shape)
print(labels.shape)
print(outputs.shape)

torch.Size([100, 4])
torch.Size([100, 1])
torch.Size([100, 1])

Any suggestion on how to go about this, please?

input_dim = 4
hidden_dim = 10
output_dim = 1

model = FeedforwardNeuralNetModel(input_dim, hidden_dim, output_dim)

criterion = nn.CrossEntropyLoss()

learning_rate = 0.01

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for epoch in range(num_epochs):
    for i, (inputs, labels) in enumerate(train_loader):
        # Load inputs as Variable
        inputs = Variable(inputs.view(-1, 4).float())
        labels = Variable(labels.float())

        # Clear gradients w.r.t. parameters
        optimizer.zero_grad()

        # Forward pass to get output/logits
        outputs = model(inputs.float())

        # Calculate loss: softmax --> cross entropy loss
        loss = criterion(outputs.view(-1, 1), labels.type(torch.LongTensor))

        # Getting gradients w.r.t. parameters
        loss.backward()

        # Updating parameters
        optimizer.step()

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
     97         # Calculate loss: softmax --> cross entropy loss
     98
---> 99         loss = criterion(outputs.view(-1, 1), labels.type(torch.LongTensor))
    100
    101         # Getting gradients w.r.t. parameters

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    940     def forward(self, input, target):
    941         return F.cross_entropy(input, target, weight=self.weight,
--> 942                                ignore_index=self.ignore_index, reduction=self.reduction)
    943
    944

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2054     if size_average is not None or reduce is not None:
   2055         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2056     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2057
   2058

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1869                          .format(input.size(0), target.size(0)))
   1870     if dim == 2:
-> 1871         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1872     elif dim == 4:
   1873         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: multi-target not supported at …\aten\src\THNN/generic/ClassNLLCriterion.c:20

Hi,

The labels should be of size (N,), not (N, 1). You can use labels.squeeze_(-1) to remove the extra dimension of size 1.
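
For example, a quick sketch of what the in-place squeeze does to the labels shape (the zero tensor is just a placeholder):

import torch

labels = torch.zeros(100, 1, dtype=torch.long)  # size (N, 1): triggers "multi-target"
print(labels.shape)                             # torch.Size([100, 1])

labels.squeeze_(-1)                             # in-place: drops the trailing dim of size 1
print(labels.shape)                             # torch.Size([100])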

Hi AlbanD,

Thank you very much for the suggestion.

But now I get this:

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at …\aten\src\THNN/generic/ClassNLLCriterion.c:92

Any idea please?

@Samuel_Adu I think this error is thrown because the size of outputs is (N, 1), i.e. n_classes (C) = 1 here, whereas the labels tensor contains values for two classes, e.g. [1, 0, 1, 0, 0] (labels for classes 0 and 1).
The variable output_dim should be 2, and the outputs tensor should then have size (N, 2). See the sketch below.
Just to make sure, is this a binary classification task?
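
Here is a sketch of what that looks like end to end (using nn.Sequential as a stand-in for your FeedforwardNeuralNetModel, and random placeholder data):

import torch
import torch.nn as nn

input_dim, hidden_dim, output_dim = 4, 10, 2  # output_dim = 2 for two classes

# Stand-in model, just to show the shapes
model = nn.Sequential(
    nn.Linear(input_dim, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, output_dim),
)
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(100, input_dim)
labels = torch.randint(0, 2, (100,))  # class indices 0/1, size (N,)

outputs = model(inputs)               # size (N, 2)
loss = criterion(outputs, labels)     # no shape or class-index errors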

This is a different error. This one says that the values contained in your target tensor are larger than the number of classes (or negative).
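
In other words, with an output of size (N, 1) the loss sees C = 1, so the only valid target value is 0, and any label equal to 1 trips the assertion. A minimal reproduction sketch (the offending call is left commented out so the snippet runs):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

outputs = torch.randn(5, 1)             # C = 1: only class index 0 is valid
labels = torch.tensor([1, 0, 1, 0, 0])  # contains 1, which is >= n_classes
# criterion(outputs, labels)  # raises: cur_target >= 0 && cur_target < n_classes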

Hi Mailcorahul,

Thank you very much for your kind response. Yes, the labels tensor is like [1, 0, 1, 0, 0], but the outputs tensor size is (N, 2). And yes, this is a binary classification task. Please see below…

print(inputs.shape)
print(labels.shape)
print(outputs.shape)
print(labels)
print(outputs)

torch.Size([100, 4])
torch.Size([100])
torch.Size([100, 1])
tensor([1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0,
0, 0, 1, 0])
tensor([[ -0.6877],
[ -79.4583],
[ -0.6877],
[ -0.6877],
[ -0.6877],
[-231.3748],
…

Hi AlbanD,

Thank you very much for your kind response.

Yes, it is. It came up after I used labels.squeeze_(-1)… I am still trying to figure out the solution.

I’d appreciate any further suggestion.

Hi,

So the problem is that your output is of size (100, 1), meaning that C = 1 for your loss: you have a single class. I think this should be 2 in your case, no?
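
As a side note on the design choice: if you wanted to keep a single output unit for binary classification instead, the usual pairing is nn.BCEWithLogitsLoss with float targets of the same (N, 1) shape as the output. A sketch with placeholder data:

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()  # expects one logit per sample

outputs = torch.randn(100, 1)                   # size (N, 1)
labels = torch.randint(0, 2, (100, 1)).float()  # float targets, same size as outputs

loss = criterion(outputs, labels)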

Wow… thank you very much, Raghul.

This solves it.


Wow… thank you very much, AlbanD, for your suggestions.

This is resolved.