ValueError: Expected target size (32, 7), got torch.Size([32])


(Salma Kasraoui) #1

Hi everyone,
I know this topic has been discussed before; however, the proposed solutions didn't work for me.
I am trying to classify precomputed features into 7 categories using logistic regression.
I get the following error when training the classifier:

ValueError: Expected target size (32, 7), got torch.Size([32])

My target shape is torch.Size([768, 1]) and squeezing it didn't solve the problem.
The input shape is torch.Size([768, 1, 221]).

By squeezing it, I got this error:

RuntimeError: Expected object of scalar type Long but got scalar type Int for argument #2 'target'
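For context, both errors can be reproduced with dummy tensors of the shapes described above (batch size and class count taken from the post, values made up):

```python
import torch
import torch.nn as nn

criterion = nn.NLLLoss()

# extra dimension: output [32, 1, 7] vs target [32]
out_3d = torch.log_softmax(torch.randn(32, 1, 7), dim=-1)
tgt = torch.zeros(32, dtype=torch.long)
try:
    criterion(out_3d, tgt)
except Exception as e:
    print(e)  # target size mismatch, as in the first error above

# squeezed output, but target is Int instead of Long
out_2d = torch.log_softmax(torch.randn(32, 7), dim=1)
tgt_int = torch.zeros(32, dtype=torch.int32)
try:
    criterion(out_2d, tgt_int)
except Exception as e:
    print(e)  # scalar type mismatch, as in the second error above
```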

To train the logistic regression model, I use this piece of code, which works fine with another dataset:

import torch
from torch.nn import Sequential, Linear, LogSoftmax, NLLLoss
from torch.optim import Adam

# define classifier
num_input = trainingData.shape[-1]
num_classes = trainingLabels.cpu().unique().numel()
model = Sequential(Linear(num_input, num_classes), LogSoftmax(dim=1))
criterion = NLLLoss()

batch_size = 32
num_epochs = 50
# learning rate
lr = 1e-4
optimizer = Adam(model.parameters(), lr=lr)

nsamples = trainingData.shape[0]
nbatches = nsamples // batch_size

for e in range(num_epochs):
    perm = torch.randperm(nsamples)

    
    for i in range(nbatches):
        idx = perm[i * batch_size : (i+1) * batch_size]
        model.zero_grad()
        resp = model.forward(trainingData[idx])
        trainingLabels = trainingLabels.squeeze()
        loss = criterion(resp, trainingLabels[idx])
        loss.backward()
        optimizer.step()
    
    resp = model.forward(trainingData)
    avg_loss = criterion(resp, trainingLabels)

Obviously, my problem is with the data shapes, but I can't fix it, maybe because I am new to PyTorch.

Any help will be appreciated.


#2

Hi Salma,

Based on the error message, it seems you have an additional dimension in your model output.
If you just want to classify the data into 7 classes (not pixel-wise classification etc.), your output should have the shape [batch_size, nb_classes], while your target should be a torch.LongTensor containing the class indices with the shape [batch_size].
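In other words (a minimal sketch using the 7 classes from the post; the batch size is made up):

```python
import torch
import torch.nn as nn

batch_size, nb_classes = 32, 7
criterion = nn.NLLLoss()

# model output: log-probabilities with shape [batch_size, nb_classes]
output = torch.log_softmax(torch.randn(batch_size, nb_classes), dim=1)
# target: class indices as a LongTensor with shape [batch_size]
target = torch.randint(0, nb_classes, (batch_size,))  # dtype torch.int64

loss = criterion(output, target)
print(loss.shape)  # scalar loss: torch.Size([])
```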

Try to squeeze the model output and convert your target as:

target = target.long()
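Applied to the shapes from your post (dummy tensors standing in for your data), that would look like:

```python
import torch

# stand-ins for the tensors in the post
data = torch.randn(768, 1, 221)
target = torch.randint(0, 7, (768, 1), dtype=torch.int32)

data = data.squeeze(1)             # [768, 1, 221] -> [768, 221]
target = target.squeeze(1).long()  # [768, 1] Int  -> [768] Long
print(data.shape, target.dtype)    # torch.Size([768, 221]) torch.int64
```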

If that doesn’t help, could you post the input, output and target shapes, so that we could debug this issue further?

Also a small issue in your code:
You should call the model directly (output = model(data)), since this will make sure to register all hooks etc. Currently you are using model.forward(data) which might yield some issues in the future.

While your code should work fine using your manual batching approach, you could use a Dataset and DataLoader instead, which will make shuffling, preprocessing the data etc. a bit easier.
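A rough sketch of that approach, with dummy data in place of your features (sizes assumed from the post):

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

# dummy stand-ins for trainingData / trainingLabels
features = torch.randn(768, 221)
labels = torch.randint(0, 7, (768,))

model = nn.Sequential(nn.Linear(221, 7), nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

dataset = TensorDataset(features, labels)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for data, target in loader:      # DataLoader handles shuffling and batching
    optimizer.zero_grad()
    output = model(data)         # call the model directly, not .forward
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
```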


(Salma Kasraoui) #3

Yes, there was an additional dimension in both the input and the target.
I squeezed the input and converted the target with target.long().

And it worked. Thank you!