[ERROR] ValueError: not enough values to unpack (expected 2, got 0)

Hi there!

I’m facing a “simple” error that has not been easy to solve.

I’m training a CNN (using a ResNet model) and I’m getting this error message:

<ipython-input-2-ddf1173466b9> in main()
    194             # Opening a 'loss' and 'acc' list, to save the data
    195             dictionary = {'acc-valid':[], 'acc-test':[], 'loss':[], 'dice score-valid':[], 'dice score-test':[], 'time taken':[]}
--> 196             acc_item_valid, loss_item, dice_score_valid = check_accuracy(valid_loader, model, loss_fn, device=device)
    197             acc_item_test, _, dice_score_test = check_accuracy(test_loader, model, loss_fn, device=device)
    198             dictionary['acc-valid'].append(acc_item_valid)

/content/gdrive/Shareddrives/Blood Cells AI/Algoritmos/ResNet/utils.py in check_accuracy(loader, model, loss_fn, device)
    381             # y = torch.permute(y, (0,3,1,2))
    382             pred = model(x)
--> 383             y = tf.center_crop(y, pred.shape[2:])
    384             pred = (pred > 0.5).float()
    385             loss = loss_fn(pred, y)

/usr/local/lib/python3.8/dist-packages/torchvision/transforms/functional.py in center_crop(img, output_size)
    573 
    574     _, image_height, image_width = get_dimensions(img)
--> 575     crop_height, crop_width = output_size
    576 
    577     if crop_width > image_width or crop_height > image_height:


ValueError: not enough values to unpack (expected 2, got 0)

my check_accuracy function is this:

def check_accuracy(loader, model, loss_fn, device='cuda' if torch.cuda.is_available() else 'cpu'):
    num_correct = 0
    num_pixels = 0
    dice_score = 0
    model.eval()
    
    loop = tqdm(loader, desc='Check acc')
    
    with torch.no_grad():
        for dictionary in loop:
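            # the loader yields a dict per batch; unpacking it gives the two key names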
            image, label = dictionary
            x, y = dictionary[image], dictionary[label]
            x, y = x.to(device=device), y.to(device=device)
            # LongTensor is just for numeric label classes (0, 1, 2,...)
            # y = y.type(torch.LongTensor)
            # Unsqueeze will add a dimension, and is only important for grayscale
            # y = y.float().unsqueeze(1).to(device=device) # Grayscale
            y = y.float()
            # Permuting is necessary when we use the transformations available in the
            # library; here we defined the transformations as PyTorch functionals.
            # y = torch.permute(y, (0,3,1,2))            
            pred = model(x)
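            # NOTE: center_crop needs a (height, width) pair, so pred must be 4-D
            # ([batch, channels, height, width]); a 2-D output makes pred.shape[2:] empty.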
            y = tf.center_crop(y, pred.shape[2:])
            pred = (pred > 0.5).float()
            loss = loss_fn(pred, y)
            num_correct += (pred == y).sum()
            num_pixels += torch.numel(pred)
            # Calculating the 'dice score', summing when pixels are equals (preds*y=1)
            smooth = 1e-4
            dice_score += (2*100*(pred*y).sum()+smooth) / ((pred+y).sum()+smooth)
            loop.set_postfix(acc=str(round(100*num_correct.item()/int(num_pixels),4)))
            # deleting variables
            loss_item = loss.item()
            del loss, pred, x, y, image, label, dictionary
    # deleting variables
    num_correct_item = num_correct.item()
    num_pixels = int(num_pixels)
    dice_score_item = dice_score.item()
    len_loader = len(loader)
    del num_correct, dice_score, loader, loop
    
    print(f'\nGot an accuracy of {round(100*num_correct_item/int(num_pixels),4)}')
    
    print(f'Dice score: {round(dice_score_item/len_loader,4)}')
    model.train()
    return 100*num_correct_item/num_pixels, loss_item, dice_score_item/len_loader

And this error only happens when I use the ResNet model; U-Net and U-ResNet run perfectly.

Can anyone kindly help me on this?

Thanks

Could you print out the shape of pred with print(pred.shape)?

Sure, I got: torch.Size([16, 3])

It is because pred has only 2 dimensions, so pred.shape[2:] is an empty slice and center_crop has nothing to unpack.
If 16 and 3 really are the crop size you want, pass pred.shape instead of pred.shape[2:].

Better still, make pred 4-dimensional, i.e. [batch, channels, height, width].
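
A minimal sketch (with made-up shapes, not your actual data) of why pred.shape[2:] is empty for a classification-style output, and how the crop behaves when the output is 4-D:

import torch
import torchvision.transforms.functional as tf

# Illustrative shapes only
pred_seg = torch.rand(16, 3, 160, 160)   # segmentation head: [batch, channels, H, W]
pred_cls = torch.rand(16, 3)             # classification head: [batch, num_classes]

print(pred_seg.shape[2:])   # torch.Size([160, 160]) -> a valid (height, width) pair
print(pred_cls.shape[2:])   # torch.Size([])          -> empty, nothing to unpack

y = torch.rand(16, 3, 200, 200)
y = tf.center_crop(y, list(pred_seg.shape[2:]))  # crops y to 160x160, as intended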

Hi there

it’s still not working… I’ve tried y = tf.center_crop(y, pred) and y = tf.center_crop(y, pred.shape), and the error continues, although the message is different now: “ValueError: too many values to unpack (expected 2)”

Try x, y = tf.center_crop(y, pred.shape)
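
For reference, torchvision’s center_crop expects output_size to be an int or a (height, width) pair and it returns a single cropped tensor, so unpacking its return into x, y probably won’t help here. A small sketch with hypothetical shapes:

import torch
import torchvision.transforms.functional as tf

y = torch.rand(16, 3, 200, 200)     # illustrative target batch
pred = torch.rand(16, 3)            # what the ResNet currently returns

y1 = tf.center_crop(y, 160)         # ok: 160x160 crop
y2 = tf.center_crop(y, [160, 120])  # ok: height 160, width 120
print(y1.shape, y2.shape)           # torch.Size([16, 3, 160, 160]) torch.Size([16, 3, 160, 120])
# tf.center_crop(y, pred)           # fails: iterating a [16, 3] tensor yields 16 values, not 2
# tf.center_crop(y, pred.shape[2:]) # fails: empty size, nothing to unpack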