RuntimeError: The size of tensor a (1966080) must match the size of tensor b (655360) at non-singleton dimension 0

I am getting this error when I run:

criterion.forward(test_lb.cuda(), test_op)

print(len(test_op))
print(len(test_lb))

Both print 10.

Could you print the shapes of both tensors?

print(test_lb.shape)
print(test_op.shape)

Also, which criterion are you using?

The shapes are:
test_op: (10, 1, 128, 128)
test_lb: (10, 3, 128, 128)

Which criterion are you using?

import torch
import torch.nn as nn

class DiceCoeffLoss(nn.Module):
    def __init__(self, smooth=1):
        super(DiceCoeffLoss, self).__init__()
        self.smooth = smooth

    def forward(self, input, target):
        # flatten both tensors and compute the smoothed Dice score
        iflat = input.view(-1)
        tflat = target.view(-1)
        intersection = (iflat * tflat).sum()
        return 1 - ((2. * intersection + self.smooth) / (iflat.sum() + tflat.sum() + self.smooth))

criterion = DiceCoeffLoss()

It seems you are trying to calculate the Dice loss for a multi-class prediction.
If that's the case, you shouldn't just flatten the prediction and the target, as the flattened tensors will contain different numbers of values (here a factor of 3 from the channel dimension), which yields the size mismatch error.
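For example, with the shapes you posted:

a = torch.rand(10, 3, 128, 128).view(-1)  # 491520 values
b = torch.rand(10, 1, 128, 128).view(-1)  # 163840 values
a * b  # RuntimeError: The size of tensor a (491520) must match the size of tensor b (163840) at non-singleton dimension 0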

You could try to pass your target as a one-hot encoded tensor and use @IssamLaradji’s approach.

Sorry, I do not understand how to use this approach for my code.

Is my assumption correct that you are trying to predict three different classes?
If so, you could convert your target tensor containing the class indices into a one-hot encoded target using:

nb_classes = 3
# target containing class indices in [0, nb_classes)
target = torch.empty(1, 24, 24, dtype=torch.long).random_(nb_classes)
# scatter the indices into a one-hot target of shape [1, nb_classes, 24, 24]
target = torch.zeros(1, nb_classes, 24, 24).scatter_(1, target.unsqueeze(1), 1)
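As a minimal sketch (my own illustration, not necessarily the exact linked implementation), a channel-wise Dice loss over such a one-hot target could look like this, with multiclass_dice_loss being a hypothetical helper:

import torch

def multiclass_dice_loss(pred, target, smooth=1.):
    # pred, target: [batch, nb_classes, H, W]; target is one-hot encoded
    dims = (0, 2, 3)  # reduce over the batch and spatial dimensions
    intersection = (pred * target).sum(dims)
    union = pred.sum(dims) + target.sum(dims)
    dice = (2. * intersection + smooth) / (union + smooth)
    return 1. - dice.mean()  # average the per-class Dice scores

pred = torch.softmax(torch.randn(10, 3, 128, 128), dim=1)
idx = torch.empty(10, 128, 128, dtype=torch.long).random_(3)
target = torch.zeros(10, 3, 128, 128).scatter_(1, idx.unsqueeze(1), 1)
loss = multiclass_dice_loss(pred, target)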
criterion = DiceCoeffLoss()
optimizer = torch.optim.Adam(model.parameters(),lr=1e-3,weight_decay=1e-5)
target = model(images)
nb_classes = 2
target = target.to('cuda')
model = model.to('cuda')
target = torch.empty(1, 256, 256, dtype=torch.long).random_(nb_classes)
target = torch.zeros(1, nb_classes, 256, 256).scatter_(1, target.unsqueeze(1), 1)
print (target.shape)
print(labels.shape)
criterion.forward(labels.cuda(), target)

Now this error appears:
RuntimeError: expected type torch.cuda.FloatTensor but got torch.FloatTensor
Also, is there something wrong with the sizes below?
torch.Size([1, 2, 256, 256])
torch.Size([2, 1, 256, 256])

The target still seems to be on the CPU, while the model output (labels?) is on the GPU.
Could you push the target to the GPU as well and try the linked implementation?
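Something along these lines should work (a minimal sketch, assuming labels is the model output as guessed above):

target = target.to('cuda')        # move the one-hot target to the same device as the output
loss = criterion(labels, target)  # calling the module directly also runs hooks, unlike .forward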

Hello, I'm facing the following error when calculating the accuracy while training with MSELoss. Below are my code and shapes.
RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1

    input_var = Variable(data.float())
    target_var = Variable(target.float())

    optimizer.zero_grad()

    output = model(input_var)

    loss = criterion(output, target_var)
    _, preds = torch.max(output.data, 1)
    dice =  dice_coef(output,target)

    loss.backward()


    optimizer.step()

    train_loss += loss.item()*data.size(0)
    train_metric =  dice*data.size(0)

    total_train += target_var.nelement()
    correct_train += preds.eq(target_var.data).sum().item()
    train_acc = 100 * correct_train / total_train
    print("train Acc ",train_acc)

The shapes are:
target: torch.Size([2, 3, 512, 512])
output: torch.Size([2, 3, 512, 512])
preds: torch.Size([2, 512, 512])

It seems your targets are one-hot encoded.
If that’s the case, use torch.argmax(targets, 1) to get the class indices.
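For example, a minimal sketch with the shapes you posted:

# one-hot target [2, 3, 512, 512] -> class indices [2, 512, 512]
target_indices = torch.argmax(target, dim=1)
# preds ([2, 512, 512]) and target_indices now have matching shapes
correct_train += preds.eq(target_indices).sum().item()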

Hello, I'm getting an error when I run:
_, pred = torch.argmax(y.data, dim=1)

ValueError: too many values to unpack (expected 2)
I hope you can help me. My code:

model_ft.eval()


count = 0
with torch.no_grad():
    for image, label in val_loader:
        image = image.cuda()
        label = label.cuda()
        
        y = model_ft(image)
        
        
        _, pred = torch.argmax(y.data, dim=1)
        count += torch.sum(pred == label.data)

    
val_acc = count.item() / len(val_data)
if best_acc < val_acc:
  best_acc = val_acc
print("val accuracy: {:.4f}".format(val_acc))

print("best accuracy: {:.4f}".format(best_acc))

torch.argmax returns only the indices of the max values.
If you want both the values and the indices, you could use torch.max().
However, since you are discarding the first return value, I assume you only care about the indices, so

pred = torch.argmax(y, dim=1)

should work.
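For comparison, a small sketch:

y = torch.randn(4, 3)                  # e.g. logits for 4 samples and 3 classes
values, indices = torch.max(y, dim=1)  # torch.max returns a (values, indices) tuple
pred = torch.argmax(y, dim=1)          # torch.argmax returns only the indices
print(torch.equal(indices, pred))      # True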

PS: Don’t use the .data attribute, as it might yield unwanted side effects.
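For instance, in-place updates through .data are invisible to autograd (a minimal sketch):

x = torch.ones(2, requires_grad=True)
y = (x * x).sum()  # backward needs the saved value of x
x.data.add_(1)     # silently modifies x; autograd does not notice
y.backward()
print(x.grad)      # tensor([4., 4.]) instead of the correct tensor([2., 2.])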