Newbie question: Getting "multi-target not supported" on a basic MLP

I have some pretty basic code, based on sample code, for an MLP. I'm using a dataset of my own that I've created: 100 inputs (a 1x100 float tensor per instance), 1 output (1x1), and four classes (0, 1, 2, 3).

I'm just trying to get a basic MLP to work, so I'm only dealing with 20 rows in my training set, with the batch size at 1 to start.

And I get the above error.

Any ideas?

Other details
PyTorch version 0.1.12_2
Python 2.7.13

Should I switch to Python 3?

Code:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable

path = "%s/%s" % (base_dir, filename)
print('opening data set %s.' % path)
dataset = torch.load(path)

long_target = dataset.target_tensor.long()
dataset.target_tensor = long_target


batch_size = 1
kwargs = {'num_workers': 8, 'pin_memory': True}
train_loader = torch.utils.data.DataLoader(
                 dataset=dataset,
                 batch_size=batch_size,
                 shuffle=True, **kwargs)

criterion = nn.CrossEntropyLoss()



class MLPNet(nn.Module):
    def __init__(self):
        super(MLPNet, self).__init__()
        self.fc1 = nn.Linear(100, 500)
        self.fc2 = nn.Linear(500, 256)
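        # NB: the next layer is the bug discussed below; with four classes it needs 4 outputs, not 1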
        self.fc3 = nn.Linear(256, 1)

    def forward(self, x):
        x = x.view(-1, 100)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        return x

model = MLPNet().cuda()

print(model)


optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
for epoch in xrange(100):
    # training
    for batch_idx, (x, target) in enumerate(train_loader):
        optimizer.zero_grad()
        x, target = Variable(x.cuda()), Variable(target.cuda())
        #pdb.set_trace()
        output = model(x)
        loss = criterion(output, target)  #error occurs here
        loss.backward()
        optimizer.step()

Could you post a minimal example that I could run? i.e., which criterion are you using?

It might be that you're using an outdated version of PyTorch (you can get 0.2.0 here: http://pytorch.org/ )

I just edited it to include the loss function. I prefer LogSoftmax, but that caused an argument error in the forward method.

But I'm loading my own dataset; this was code for MNIST.

It seems I should probably just upgrade everything to start.

"multi-target not supported" generally means that the target you're passing to the criterion (in this case, CrossEntropyLoss) has 2 or more dimensions. CrossEntropyLoss requires a target that is 1-dimensional.

If target.size() is something like (1, N), you can make it one-dimensional with target = target.view(-1).
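
To make that concrete, a minimal sketch (the shapes are hypothetical, and the Variable wrapping matches the 0.x API used in this thread; it's a harmless no-op on current PyTorch, where newer versions report a shape mismatch instead of "multi-target not supported"):

import torch
import torch.nn as nn
from torch.autograd import Variable

criterion = nn.CrossEntropyLoss()
output = Variable(torch.randn(1, 4))           # (batch, n_classes) scores
target_2d = Variable(torch.LongTensor([[2]]))  # shape (1, 1): passing this to the criterion errors out
target_1d = target_2d.view(-1)                 # shape (1,): one class index per batch element
loss = criterion(output, target_1d)            # works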

The target is 1x1; I applied the view. I had misread the docs. But then I got a CUDA runtime error (device-side assert), so I removed the CUDA usage temporarily and got:

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed.

These are the targets of my actual dataset:

0
2
2
2
3
0
2
2
0
2
0
0
0
0
0
2
2
0
2
2
[torch.LongTensor of size 20]

I find this curiously difficult, given how little trouble I had with Lua Torch.

I’m going to try to upgrade.

Just upgraded… now on 0.2.0_4, and I get the same error. When I use LogSoftmax I get "forward() takes exactly 2 arguments (3 given)".

http://pytorch.org/docs/master/nn.html#torch.nn.LogSoftmax
LogSoftmax takes 1 argument; it sounds like you're calling it with 2.

If you're getting "RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed": I don't know what's in your dataset, but have you checked the number of classes you have?
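
Two things worth spelling out. LogSoftmax is an activation module, not a criterion: its forward() takes only the input, so calling it as criterion(output, target) produces exactly that "forward() takes exactly 2 arguments (3 given)" error; the usual pairing is LogSoftmax followed by NLLLoss, which together compute the same thing as CrossEntropyLoss on raw scores. And the cur_target assert fires whenever a target index falls outside [0, n_classes), where n_classes is the size of the model's output layer. A sketch, with hypothetical values assuming the 4-class setup in this thread:

import torch
import torch.nn as nn
from torch.autograd import Variable

log_softmax = nn.LogSoftmax()   # recent PyTorch versions want nn.LogSoftmax(dim=1)
nll = nn.NLLLoss()              # expects log-probabilities and a 1-D target of class indices

output = Variable(torch.randn(1, 4))      # raw scores for 4 classes
target = Variable(torch.LongTensor([2]))  # class index, shape (1,)

# log_softmax(output, target) would fail: forward() takes the input only
loss = nll(log_softmax(output), target)   # same value as nn.CrossEntropyLoss()(output, target)

# The assert: every target must satisfy 0 <= t < n_classes, where
# n_classes == output.size(1). With an output layer of size 1, the
# targets 2 and 3 in the dataset above are out of range.
assert target.data.min() >= 0 and target.data.max() < output.size(1)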

OK… I got it… this confused me. I had got it into my head that, because the targets/labels are a one-dimensional tensor, the output dimension had to be size 1, which made no sense to me at the time. (I did not have to do this in Lua.) In fact, CrossEntropyLoss wants one output score per class and the target just supplies the class index, so I set the output layer to 4 and it seems to be working. Still trying to sort out the LogSoftmax input, but I'm able to meaningfully train with CrossEntropyLoss for now.
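
For anyone who finds this later, here is a sketch of the working model, reconstructed from the code above (not necessarily my exact final version). The last layer emits one raw score per class and those scores feed straight into CrossEntropyLoss, which applies LogSoftmax and NLLLoss internally; note the sketch also drops the relu on the final layer, since CrossEntropyLoss expects unnormalized scores that can be negative:

import torch.nn as nn
import torch.nn.functional as F

class MLPNet(nn.Module):
    def __init__(self):
        super(MLPNet, self).__init__()
        self.fc1 = nn.Linear(100, 500)
        self.fc2 = nn.Linear(500, 256)
        self.fc3 = nn.Linear(256, 4)    # one output per class (0-3), not 1

    def forward(self, x):
        x = x.view(-1, 100)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)              # raw scores; CrossEntropyLoss handles the softmax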

I appreciate your help.

Thank you so much, Richard, it just worked. Thank you!