AttributeError: 'tuple' object has no attribute 'size'

Hi, I'm working on Omniglot images and I'm struggling with this error. I believe the problem is in my dataset generation. Please help me figure it out. Here is the dataset code:

import os
from PIL import Image
import torch
from torch.utils.data import Dataset


class OmniglotDataset(Dataset):
    def __init__(self, data_txt, transform):
        # data_txt lists one relative image path per line, e.g. alphabet/character/sample.png
        with open(data_txt, 'r') as location_file:
            locations = location_file.read().split()
        self.filenames = locations
        self.labels = []
        self.transform = transform
        self.path = os.getcwd()
        for address in locations:
            # the label is the first path component of each file path
            label = address.split('/')[0]
            self.labels.append(label)

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        image = Image.open(os.path.join(self.path,'images_background', self.filenames[idx]))
        image = self.transform(image)
        return image, self.labels[idx]

Here is the full traceback:
Traceback (most recent call last):
  File "-/Documents/Code/AI/AI_Learning/Omniglot2/omniglot.py", line 79, in <module>
    loss = crtieria(outputs, labels)
  File "-\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "-\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\loss.py", line 916, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "-\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\functional.py", line 2009, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "-\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\functional.py", line 1834, in nll_loss
    if input.size(0) != target.size(0):
AttributeError: 'tuple' object has no attribute 'size'

Could you please post the stack trace so that we can have a better look at your issue?
I cannot find anything obviously wrong in the current code snippet.

I edited the original post; you should be able to see it now. Do you need the network code as well?

Could you check the types of outputs and labels?
I guess labels is being passed as a tuple instead of a tensor; the DataLoader's default collate_fn can only stack numeric types into tensors, so anything else (e.g. strings) comes through as a tuple.
If so, you could return labels in your __getitem__ as:

torch.tensor(self.labels[idx])

I still have a question: when I convert my labels to a tensor like that, it gives me an error about the labels being strings (I know that they are). Am I supposed to convert them to integers, i.e. must labels be integers?
Do you have any suggestions for that?

Yes, nn.CrossEntropyLoss expects the target to be a LongTensor containing the class indices in the range [0, nb_classes-1].
If your current labels are stored as strings, you might want to use a dict to map these strings to the corresponding indices.
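For example, a minimal sketch of that mapping (names are illustrative; it assumes self.labels holds the string labels as in the code above):

# In __init__, after collecting the string labels, build the mapping once:
classes = sorted(set(self.labels))                         # unique label strings
self.class_to_idx = {name: i for i, name in enumerate(classes)}

# In __getitem__, return the integer index as a LongTensor:
label = torch.tensor(self.class_to_idx[self.labels[idx]], dtype=torch.long)
return image, label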


I have a similar issue to this @ptrblck @idontknow.
I have a custom image dataset modeled closely after this official PyTorch tutorial.

The dataset passes a PIL image to a transform function:

class Resize(object):
    """Resize the image in a sample to a given size.
    Args:
        output_size (int): Desired size in pixels. Yields output_size by output_size image
    """

    def __init__(self, output_size):
        assert isinstance(output_size, int)
        self.output_size = output_size

    def __call__(self, sample):
        image = sample  # the sample is expected to be a PIL image

        # PIL's Image.resize takes a (width, height) tuple
        resized_image = image.resize((self.output_size, self.output_size))

        return resized_image
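(For what it's worth, the transform works when called directly on a PIL image; the filename below is just a placeholder:)

from PIL import Image

resize = Resize(128)                  # target size: 128x128 pixels
img = Image.open('some_image.jpg')    # placeholder path
out = resize(img)                     # returns a 128x128 PIL image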

But when iterating the DataLoader, at the point where PIL's Image.resize() is called, I get this error:

Traceback (most recent call last):
  File "test.py", line 52, in <module>
    for samples in train_dataloader:
  File "/Users/nick/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/Users/nick/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/Users/nick/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/Users/nick/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/Users/nick/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataset.py", line 207, in __getitem__
    return self.datasets[dataset_idx][sample_idx]
  File "/Users/nick/Projects/Personal_Projects/CNN/datasets/LDataset.py", line 52, in __getitem__
    sample = self.transform(sample)
  File "/Users/nick/opt/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 70, in __call__
    img = t(img)
  File "/Users/nick/Projects/Personal_Projects/CNN/datasets/LTransforms.py", line 24, in __call__
    resized_image = image.resize((self.output_size, self.output_size))
AttributeError: 'tuple' object has no attribute 'resize'

OK, this was the problem. Following the PyTorch tutorial for making a custom dataset, my dataset passes each sample as a dictionary: {'image': image, 'label': label}.
The custom transforms then unpack this and access the image as sample['image'].

Interestingly, before being put in the dictionary, the image prints as <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=341x341 at 0x12D907210>, as expected. However, when retrieved from sample['image'] it becomes (<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=341x341 at 0x12D907210>,).

So somewhere along the way it got wrapped in a one-element tuple (a stray trailing comma is a common way for that to happen). Accessing the 0th element of the tuple and passing that along to the transform fixes the issue.
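A minimal sketch of that workaround in the dataset's __getitem__ (purely illustrative; self.samples stands in for however the dataset actually stores the dicts):

def __getitem__(self, idx):
    sample = self.samples[idx]            # hypothetical storage of {'image': ..., 'label': ...}
    image = sample['image']
    if isinstance(image, tuple):          # unwrap the stray one-element tuple
        image = image[0]
    image = self.transform(image)
    return {'image': image, 'label': sample['label']}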