"ValueError: not enough values to unpack" occurring at regular intervals (every 41 images) when iterating through batches

Hi,

I’m new to PyTorch and this is my first segmentation attempt, so apologies in advance if there’s something basic I’m missing here.

I’m trying to segment the FloodNet dataset and am running into a strange error when I iterate over the images in a DataLoader. Specifically, I get “ValueError: not enough values to unpack (expected 2, got 1)” every 41 images. This doesn’t change with the batch size (e.g. with a batch size of 8 I run into trouble on the 6th batch, which includes the 41st image, etc.), and it also doesn’t change when I shuffle the images. I’ve iterated through with a batch size of 1, checked the image where I get the first error, and can display that image/mask pair fine with matplotlib, so it doesn’t look as though it’s the images themselves. I really have no idea why it might fail at regular intervals like this.

The code for the Dataset and DataLoader is below (IMG_SIZE = 256), and I’m running PyTorch 2.1.0 in a Colab notebook:

import albumentations as A
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

train_transforms = A.Compose([
    A.Resize(IMG_SIZE, IMG_SIZE),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
], is_check_shapes=False)

class SegmentationDataset(Dataset):

    def __init__(self, df, augmentations):
        self.df = df
        self.augmentations = augmentations

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]

        image_path = row['images']
        mask_path = row['masks']

        image = cv2.imread(image_path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = image.astype('uint8')

        mask = cv2.imread(mask_path, cv2.IMREAD_UNCHANGED)
        mask = np.expand_dims(mask, axis=-1)

        if self.augmentations:
            data = self.augmentations(image=image, mask=mask)
            image = data['image']
            mask = data['mask']

        image = np.transpose(image, (2,0,1)).astype(np.float32)
        mask = np.transpose(mask, (2,0,1))

        image = torch.Tensor(image) / 255.0
        mask = torch.Tensor(mask).long()

        return image, mask

train = SegmentationDataset(train_df, train_transforms)
train_loader = DataLoader(train, batch_size=BATCH_SIZE, shuffle=False)

Thanks a lot, and as I said, it’s my first post, so feel free to let me know if this could be clearer or if I’ve missed anything that would help.

Just to clarify: you can properly index data, target = dataset[40] without any issues, but the code fails if Dataset.__getitem__ is called with index=40 from the DataLoader?
Could you post the entire stacktrace, please?

Thanks for getting back to me @ptrblck .

I should have started from the dataset rather than the DataLoader, but didn’t. Now that I’ve checked, I see I do get the error there for the 41st image, i.e. if I run the loop below, it errors out when i = 40:

for i, (im, ta) in enumerate(train):
  print(f"Index {i}; im shape: {im.shape}; target shape: {ta.shape}")

The full error is:

ValueError                                Traceback (most recent call last)
<ipython-input-28-62da28657fb5> in <cell line: 1>()
----> 1 for i, (im, ta) in enumerate(train):
      2   print(f"Index {i}; im shape: {im.shape}; target shape: {ta.shape}")

7 frames
<ipython-input-9-98a368f9a60d> in __getitem__(self, idx)
     23 
     24         if self.augmentations:
---> 25             data = self.augmentations(image=image, mask=mask)
     26             image = data['image']
     27             mask = data['mask']

/usr/local/lib/python3.10/dist-packages/albumentations/core/composition.py in __call__(self, force_apply, *args, **data)
    208 
    209         for idx, t in enumerate(transforms):
--> 210             data = t(**data)
    211 
    212             if check_each_transform:

/usr/local/lib/python3.10/dist-packages/albumentations/core/transforms_interface.py in __call__(self, force_apply, *args, **kwargs)
    116                     )
    117                 kwargs[self.save_key][id(self)] = deepcopy(params)
--> 118             return self.apply_with_params(params, **kwargs)
    119 
    120         return kwargs

/usr/local/lib/python3.10/dist-packages/albumentations/core/transforms_interface.py in apply_with_params(self, params, **kwargs)
    129                 target_function = self._get_target_function(key)
    130                 target_dependencies = {k: kwargs[k] for k in self.target_dependence.get(key, [])}
--> 131                 res[key] = target_function(arg, **dict(params, **target_dependencies))
    132             else:
    133                 res[key] = None

/usr/local/lib/python3.10/dist-packages/albumentations/core/transforms_interface.py in apply_to_mask(self, img, **params)
    261 
    262     def apply_to_mask(self, img: np.ndarray, **params) -> np.ndarray:
--> 263         return self.apply(img, **{k: cv2.INTER_NEAREST if k == "interpolation" else v for k, v in params.items()})
    264 
    265     def apply_to_masks(self, masks: Sequence[np.ndarray], **params) -> List[np.ndarray]:

/usr/local/lib/python3.10/dist-packages/albumentations/augmentations/geometric/resize.py in apply(self, img, interpolation, **params)
    182 
    183     def apply(self, img, interpolation=cv2.INTER_LINEAR, **params):
--> 184         return F.resize(img, height=self.height, width=self.width, interpolation=interpolation)
    185 
    186     def apply_to_bbox(self, bbox, **params):

/usr/local/lib/python3.10/dist-packages/albumentations/augmentations/utils.py in wrapped_function(img, *args, **kwargs)
    120     def wrapped_function(img: np.ndarray, *args: P.args, **kwargs: P.kwargs) -> np.ndarray:
    121         shape = img.shape
--> 122         result = func(img, *args, **kwargs)
    123         if len(shape) == 3 and shape[-1] == 1 and len(result.shape) == 2:
    124             result = np.expand_dims(result, axis=-1)

/usr/local/lib/python3.10/dist-packages/albumentations/augmentations/geometric/functional.py in resize(img, height, width, interpolation)
    386 @preserve_channel_dim
    387 def resize(img, height, width, interpolation=cv2.INTER_LINEAR):
--> 388     img_height, img_width = img.shape[:2]
    389     if height == img_height and width == img_width:
    390         return img

ValueError: not enough values to unpack (expected 2, got 1)

Also, while I was trying to debug, I tried the following and there was no error at index 40. Do you know why that might be? I expected to see the same problem.

i = 0
while i < 45:
  images, labels = next(iter(train_loader))
  print(f"Index {i}", images.shape, labels.shape)
  i += 1
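
Edit: I think I can answer this part myself. next(iter(train_loader)) creates a brand-new iterator on every call, so the loop above just fetches the first batch 45 times and never reaches the batch containing index 40. Iterating over the loader itself creates the iterator once and advances it properly, e.g.:

# Create the iterator once and let enumerate advance through the batches
for i, (images, labels) in enumerate(train_loader):
  print(f"Batch {i}", images.shape, labels.shape)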

Hi @ptrblck , really sorry, but I think I must have made a mistake when indexing and checking the image at that location. I just double-checked and can’t display the mask at index = 40, so this is likely to be a problem with the data itself rather than my code.

Yes, I think the same, and I understand you are now able to reproduce the issue via data, mask = dataset[40]?
If so, add debug print statements to your code and check the shape of image and mask right before the call to:

data = self.augmentations(image=image, mask=mask)

for a working and a failing index, as it seems the mask might be missing a dimension.
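
As a minimal sketch of what I mean (debug_sample is just a hypothetical helper, and the indices are examples; use one index that works and the one that fails):

def debug_sample(dataset, idx):
  row = dataset.df.iloc[idx]
  image = cv2.imread(row['images'])
  mask = cv2.imread(row['masks'], cv2.IMREAD_UNCHANGED)
  # cv2.imread silently returns None for a missing or unreadable file;
  # np.expand_dims(None, axis=-1) then produces a 1-element array, whose
  # shape cannot be unpacked into (height, width) inside A.Resize
  print(f"idx={idx}: image shape = {None if image is None else image.shape}, "
        f"mask shape = {None if mask is None else mask.shape}")

debug_sample(train, 0)   # a working index
debug_sample(train, 40)  # the failing index

If the mask comes back as None, the file itself is the problem, which would fit what you are seeing.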

Excellent, will do. Thanks for your help!