IndexError: too many indices for tensor of dimension 3. Unable to apply a custom transform in transforms.Compose

This is my custom transform function that I want to apply over the whole dataset.

class normalization(object):
    def __call__(self, sample):
        image, label = sample['image'], sample['labels']
        fmin = torch.min(image)
        fm = image - fmin
        image = 255 * fm / torch.max(fm)
        return {'Normalized image': image, 'Labels': label}

I am adding the custom transform in transforms.Compose using code below.

batch_size = 100
size = 299, 299
data_transforms = transforms.Compose([transforms.Resize(size),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406],
                                                           [0.229, 0.224, 0.225]),
                                      normalization()])

Data loading is done using:

data_set = torchvision.datasets.ImageFolder(root= data_dir, transform=data_transforms)

train_set_size = int(len(data_set) * 0.6)
test_set_size = len(data_set) - train_set_size

train_set, test_set = torch.utils.data.random_split(
    data_set, [train_set_size, test_set_size])
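As an aside, a seeded generator makes this random split reproducible across runs — a minimal sketch using a toy stand-in dataset (the sizes here are illustrative, not from the thread):

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Toy stand-in for the image dataset (sizes are illustrative)
data = TensorDataset(torch.arange(10))

# Seeding a generator makes the 60/40 split reproducible across runs
g = torch.Generator().manual_seed(42)
train_set, test_set = random_split(data, [6, 4], generator=g)
print(len(train_set), len(test_set))  # 6 4
```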

data_loader = DataLoader(data_set, batch_size=batch_size,shuffle=False)

train_data_loader = DataLoader(train_set, batch_size=batch_size,shuffle=True)

test_data_loader = DataLoader(test_set, batch_size=batch_size,shuffle=False)

The below code block is causing the error:

    for i in range(len(data_set)):
        image, _ = data_set[i]
        print('types:', type(image))
        print(i, image.size())
        ax = plt.subplot(1, 4, i + 1)
        ax.set_title('Sample #{}'.format(i))

        if i == 3:
            break

I don’t understand what’s causing the issue in my custom transform function. Also, ‘image’ should be a tensor, but currently its type is ‘dict’.

Any help is appreciated!

The transforms used here pass a single tensor along, while your custom normalization transformation expects a dict and thus tries to index the tensor with the "image" and "labels" keys, which breaks:

x = torch.randn(3, 224, 224)
x['image']
# IndexError: too many indices for tensor of dimension 3
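To make the mismatch concrete, here is a minimal, hypothetical reproduction (the class name is mine): `Compose` hands each transform a bare image tensor, so a dict-style `__call__` ends up string-indexing a tensor, which raises an `IndexError`:

```python
import torch

class DictNormalization:
    """Hypothetical repro of the dict-based transform from the question."""
    def __call__(self, sample):
        # `sample` is a plain tensor here, so string indexing fails
        image, label = sample['image'], sample['labels']
        return image

x = torch.randn(3, 224, 224)  # what Compose actually hands each transform
try:
    DictNormalization()(x)
except IndexError as e:
    print('failed as expected:', type(e).__name__)
```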

Ok. So, should I apply the custom transform separately or change the custom transform function somehow? I need to apply custom normalization transformation to the images of my dataset.

Yes, change the custom normalization method to work on a single input tensor only and it should work.
I.e. in particular something like this should work:

class normalization(object):
    def __call__(self, x):
          fmin = torch.min(x)
          fm = image - fmin
          image = 255 * fm / torch.max(fm)
          return image

Thanks! Now the image is a tensor. But the custom normalization transform should result in a tensor with values between 0 and 1. Currently, the resulting tensor is between 0 and 255.
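If the goal is a [0, 1] range, dropping the factor of 255 from the same transform does it — a sketch (the class name is mine, not from the thread):

```python
import torch

class MinMaxNormalization:
    """Min-max scales a tensor to [0, 1] (hypothetical variant)."""
    def __call__(self, x):
        fm = x - torch.min(x)       # shift so the minimum becomes 0
        return fm / torch.max(fm)   # scale so the maximum becomes 1

x = torch.randn(3, 8, 8)
out = MinMaxNormalization()(x)
print(out.min().item(), out.max().item())  # 0.0 1.0
```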


This is unlike the results I got when I applied the normalization function below to a single image.

def normalization(img, train_set):
    for n in enumerate(train_set):
        fmin = torch.min(img)
        fm = img - fmin
        img_sca = 255 * fm / torch.max(fm)
        return img_sca

imgs_sca = normalization(img, train_set)
imgs_sca = imgs_sca[0, :, :]
plt.imshow(imgs_sca.squeeze(), cmap="gray")
plt.title("Normalized image")

The output was a tensor with values between 0 and 1.


I want this same result. What could be causing the difference in the output of these transform functions?

Hi @ptrblck,

I used the custom normalization function that you suggested. It gave the following error:

UnboundLocalError                         Traceback (most recent call last)
<ipython-input-12-72e5a241f536> in <module>
      1 for i in range(len(data_set)):
----> 3     image, label = data_set[i]
      4     print('types:', type(image), type(label))
      5     print(i, image.size())

2 frames
<ipython-input-9-954be77ef823> in __call__(self, x)
     11     def __call__(self, x):
     12           fmin = torch.min(x)
---> 13           fm = image - fmin
     14           image = 255 * fm / torch.max(fm)
     15           return image

UnboundLocalError: local variable 'image' referenced before assignment

So, I changed the function as:

class normalization(object):
    def __call__(self, x):
        fmin = torch.min(x)
        fm = x - fmin                       # fm = f - min(f)
        image = 255 * fm / torch.max(fm)    # fs = K * fm / max(fm); for an 8-bit image, K = 255
        return image

The normalization function is not working when used with transforms.Compose. Is there any other way I can apply the custom transform function to the whole dataset?

It works for me:

class normalization(object):
    def __call__(self, x):
        fmin = torch.min(x)
        fm = x - fmin
        image = 255 * fm / torch.max(fm)
        return image

transform = transforms.Compose([
    transforms.ToTensor(),
    normalization()
])

img = transforms.ToPILImage()(torch.randn(3, 224, 224))
out = transform(img)
print(out.min(), out.max())
# tensor(0.) tensor(255.)

What kind of error are you seeing?

Ok, I got it. My custom transform behaves differently depending on the images given to it. It is working when used with transforms.Compose. Thanks for your help so far.