T.Compose | TypeError: __call__() takes 2 positional arguments but 3 were given

I have been getting this odd error saying that I have passed too many arguments into my __call__() method. I am thoroughly perplexed because I am sure I have only passed 2 arguments.

Here is my Compose class:

class Compose(object):
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target):
        for t in self.transforms:
            image, target = t(image, target)
        return image, target

Here’s the function that calls the method:

def get_transform(train):
    transforms = []
    transforms.append(T.ToTensor())
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transforms.append(T.RandomHorizontalFlip(0.5))
        transforms.append(T.Resize(INPUT_SIZE))
    return T.Compose(transforms)

Here’s the error message:

TypeError: __call__() takes 2 positional arguments but 3 were given

Any help would be appreciated! <3

There are a few issues in the example code:

  • Even though you are defining your own Compose class, it seems you are still using the torchvision.transforms.Compose one, so you might want to remove the T namespace.
  • Your custom Compose object takes two inputs. However, the underlying torchvision.transforms don’t, so you would have to call the transformation on the image and target separately.
  • Since you are using (one) random transformation, the image and target will not be transformed using the same random number, which might be wrong (e.g. for a segmentation use case). If you want to apply the same random transform on both inputs, have a look at this post (a sketch of this is shown after the dummy example below).
  • RandomHorizontalFlip and Resize work on PIL.Images, so ToTensor should be applied last.

Here is a small dummy example:

class Compose(object):
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target):
        for t in self.transforms:
            image = t(image)
            target = t(target)
        return image, target

import torch
import torchvision.transforms as transforms

transform = []
transform.append(transforms.RandomHorizontalFlip(0.5))
transform.append(transforms.Resize(10))
transform.append(transforms.ToTensor())
transform = Compose(transform)

to_image = transforms.ToPILImage()
x = to_image(torch.randn(3, 24, 24))
y = to_image(torch.randn(3, 24, 24))

transform(x, y)
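Regarding the shared-randomness point above, here is a minimal sketch of a paired transform that makes one random decision and applies the same flip to both inputs via the functional API (the PairedRandomHorizontalFlip name is just an illustration, not part of torchvision):

import random
import torchvision.transforms.functional as TF

class PairedRandomHorizontalFlip(object):
    # Hypothetical paired transform: draws one random number and applies
    # the same flip to image and target so they stay aligned.
    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, image, target):
        if random.random() < self.p:
            image = TF.hflip(image)
            target = TF.hflip(target)
        return image, target

A transform like this can then be placed inside the custom Compose above, since both expect the (image, target) pair.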

Thank you! This did get me a little further. However, when I try to pass my target variable to the Resize transform I run into a problem, because my target variable is not a PIL image but a dictionary with bounding boxes, labels, iscrowd, etc.

It looks like I will have to make a custom resize function for my target.
Do you know of a way to resize BBox coordinates so they correspond to the resized image?

I just realized that my dataset gives me the boxes as proportions of the image. All I have to do is apply the resize to the image then multiply the height and width of the transformed image by the given proportions of the bboxes.
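For reference, a minimal sketch of that idea, assuming the boxes are given as (x_min, y_min, x_max, y_max) fractions of the image size (the resize_with_boxes name and its arguments are just illustrative):

import torch
import torchvision.transforms.functional as TF

def resize_with_boxes(image, boxes_frac, size):
    # size is (height, width); boxes_frac is an [N, 4] tensor of
    # (x_min, y_min, x_max, y_max) given as fractions of the image size.
    image = TF.resize(image, size)
    new_h, new_w = size
    scale = torch.tensor([new_w, new_h, new_w, new_h], dtype=boxes_frac.dtype)
    boxes_px = boxes_frac * scale  # absolute pixel coordinates in the resized image
    return image, boxes_px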

@ptrblck, I am having a similar problem but this time using the transforms.Compose class.

What I’ve been trying to do is create a composition of transformations to apply directly to the torchvision.datasets.Cityscapes dataset object (here I’m using target_type='semantic'). For simplicity’s sake, let’s say I want to convert the image to a NumPy array with a custom transform; my code for that transform is:

import PIL.Image
import numpy as np

class transform_ToNumpy(object):
    # Converts a PIL image and its segmentation target to NumPy arrays.
    def __call__(self, image, semseg):
        if not isinstance(image, PIL.Image.Image) or not isinstance(semseg, PIL.Image.Image):
            raise RuntimeError("segtransform.ToNumpy() only handles PIL.Image "
                               "[e.g. data read by cv2.imread()].\n")
        image = np.array(image)
        semseg = np.array(semseg)
        return image, semseg

And I am using my transforms.Compose object in the dataset class as:

dataset = Cityscapes(dataset_path, split='train', mode='fine', target_type='semantic', transforms=transforms.Compose([transform_ToNumpy()]))

Notice that I am using the parameter transforms to specify the transform that will be applied to both source and target images.

But I am still getting the error:

---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-5-feacc9e6f37e> in <module>()
     26         ]))
     27 
---> 28 img, target = dataset[0] 

/usr/local/lib/python3.6/dist-packages/torchvision/datasets/cityscapes.py in __getitem__(self, index)
    182         if self.transforms is not None:
--> 183             image, target = self.transforms(image, target)
    184 
    185         return image, target

TypeError: __call__() takes 2 positional arguments but 3 were given

I don’t understand this, since my transform’s __call__ method expects two inputs: image and semseg.

Could you give me some insight into why it says

__call__() takes 2 positional arguments but 3 were given

when I do not pass any inputs other than the source and semantic segmentation images?

Thank you.

I think the error is raised by Compose, which accepts only a single argument as seen here.

Reproduction of your error:

import torch
import torchvision.transforms as transforms
import torchvision.transforms.functional as TF

trans = transforms.Compose([transform_ToNumpy()])
a = TF.to_pil_image(torch.randn(3, 24, 24))
b = TF.to_pil_image(torch.randn(3, 24, 24))
x, y = trans(a, b)

A workaround would be to write a custom Compose class, which accepts and forwards two arguments:

class MyCompose(object):
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, img, tar):
        for t in self.transforms:
            img, tar = t(img, tar)
        return img, tar


trans = MyCompose([transform_ToNumpy()])
a = TF.to_pil_image(torch.randn(3, 24, 24))
b = TF.to_pil_image(torch.randn(3, 24, 24))
x, y = trans(a, b)

Thanks @ptrblck!

I really don’t know how I was doing it before, but it was causing an error with my own Compose class (which was essentially the same as the one you posted).

But everything works now, this is the code:

train_transform = transform_Compose([
        transform_ToNumpy(),
        transform_id2Train(),
        transform_Resize((512,1024)),
        transform_RandomHorizontalFlip(),
        transform_ToTensor(),
        transform_Normalize(mean=mean, std=std),
        transform_increaseDim(),
        ])

dataset = Cityscapes(dataset_path, split='train', mode='fine', target_type='semantic', transforms=train_transform)

img, target = dataset[0] #PIL Images

Here the train_transform class takes two arguments.

great help~ thx ptrblck~

Following these tutorials:

  1. Building your own object detector — PyTorch vs TensorFlow and how to even get started?
  2. TorchVision Object Detection Finetuning Tutorial, which has a Colab version here

Cloning and copying the files according to the 1st tutorial:

%%bash
git clone https://github.com/pytorch/vision.git
cd vision
git checkout v0.3.0
cp references/detection/utils.py ../
cp references/detection/transforms.py ../
cp references/detection/coco_eval.py ../
cp references/detection/engine.py ../
cp references/detection/coco_utils.py ../

import numpy as np
import torch
import torch.utils.data
from PIL import Image
import pandas as pd
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from engine import train_one_epoch, evaluate
import utils
import transforms as T

def get_transform(train):
   transforms = []
   # converts the image, a PIL image, into a PyTorch Tensor
   transforms.append(T.ToTensor())
   if train:
      # during training, randomly flip the training images
      # and ground-truth for data augmentation
      transforms.append(T.RandomHorizontalFlip(0.5))
   return T.Compose(transforms)

The 2nd tutorial says:

%%shell

# Download TorchVision repo to use some files from
# references/detection
git clone https://github.com/pytorch/vision.git
cd vision
git checkout v0.8.2

cp references/detection/utils.py ../
cp references/detection/transforms.py ../
cp references/detection/coco_eval.py ../
cp references/detection/engine.py ../
cp references/detection/coco_utils.py ../

from engine import train_one_epoch, evaluate
import utils
import transforms as T


def get_transform(train):
    transforms = []
    # converts the image, a PIL image, into a PyTorch Tensor
    transforms.append(T.ToTensor())
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transforms.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transforms)

I made the changes as you suggested

import os
os.chdir("/content/drive/MyDrive/PytorchObjectDetector/")
import numpy as np
import torch
import torch.utils.data
from PIL import Image
import pandas as pd

from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

from engine import train_one_epoch, evaluate
import utils
import torchvision.transforms as transforms
import torchvision

def get_transform(train):
   transforms = []
   # converts the image, a PIL image, into a PyTorch Tensor
   transforms.append(transforms.ToTensor())
   if train:
      # during training, randomly flip the training images
      # and ground-truth for data augmentation
      transforms.append(transforms.RandomHorizontalFlip(0.5))
   return transforms.Compose(transforms)

On running this code

# use our dataset and defined transformations
dataset = FrDataset(root= "/content/drive/MyDrive/PytorchObjectDetector/fr_dataset/",
          data_file= "/content/drive/MyDrive/PytorchObjectDetector/fr_dataset/data/fr_labels.csv",
          transforms = get_transform(train=True))
dataset_test = FrDataset(root= "/content/drive/MyDrive/PytorchObjectDetector/fr_dataset",
               data_file= "/content/drive/MyDrive/PytorchObjectDetector/fr_dataset/data/fr_labels.csv",
               transforms = get_transform(train=False))

I get this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-f0fdd0373b7b> in <module>()
      2 dataset = FemaleConnectorDataset(root= "/content/drive/MyDrive/PytorchObjectDetector/fr_dataset/",
      3           data_file= "/content/drive/MyDrive/PytorchObjectDetector/fr_dataset/data/fr_labels.csv",
----> 4           transforms = get_transform(train=True))
      5 dataset_test = FrDataset(root= "/content/drive/MyDrive/PytorchObjectDetector/fr_dataset",
      6                data_file= "/content/drive/MyDrive/PytorchObjectDetector/fr_dataset/data/fr_labels.csv",

<ipython-input-8-b85ee1833ff6> in get_transform(train)
      2    transforms = []
      3    # converts the image, a PIL image, into a PyTorch Tensor
----> 4    transforms.append(transforms.ToTensor())
      5    if train:
      6       # during training, randomly flip the training images

AttributeError: 'list' object has no attribute 'ToTensor'

What would you suggest I do? I am confused.

You are importing transforms via:

import torchvision.transforms as transforms

and are then overwriting this module with a list via:

transforms = []

which yields the error:

transforms.append(transforms.ToTensor())
> AttributeError: 'list' object has no attribute 'ToTensor'

Don’t use the same names for variables and for modules you’ve imported.
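For example, a minimal sketch of the renaming fix (keeping in mind that the plain torchvision Compose used here still accepts only a single input, as discussed above):

import torchvision.transforms as transforms

def get_transform(train):
    # use a different name for the local list so it does not shadow the imported module
    transform_list = []
    transform_list.append(transforms.ToTensor())
    if train:
        transform_list.append(transforms.RandomHorizontalFlip(0.5))
    return transforms.Compose(transform_list)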

I followed your suggestion and changed transforms to an alternative name.

Now I get a different error when I run this block

num_epochs = 10
for epoch in range(num_epochs):
    # train for one epoch, printing every 10 iterations
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    # update the learning rate
    lr_scheduler.step()
    # evaluate on the test dataset
    evaluate(model, data_loader_test, device=device)

os.mkdir("/content/drive/MyDrive/PytorchObjectDetector/fr_dataset/")
torch.save(model.state_dict(), "/content/drive/MyDrive/PytorchObjectDetector/fr_dataset/model")

The error I get:

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:490: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  cpuset_checked))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-16-8e894f6729ee> in <module>()
      2 for epoch in range(num_epochs):
      3    # train for one epoch, printing every 10 iterations
----> 4    train_one_epoch(model, optimizer, data_loader, device, epoch,print_freq=10)
      5 # update the learning rate
      6    lr_scheduler.step()

5 frames
/usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self)
    455             # instantiate since we don't know how to
    456             raise RuntimeError(msg) from None
--> 457         raise exception
    458 
    459 

TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataset.py", line 471, in __getitem__
    return self.dataset[self.indices[idx]]
  File "<ipython-input-6-8432aadbd103>", line 28, in __getitem__
    img, target = self.transforms(img, target)
TypeError: __call__() takes 2 positional arguments but 3 were given

This is the block of code the error points to:

class FrDataset(torch.utils.data.Dataset):
    def __init__(self, root, data_file, transforms=None):
        self.root = root
        self.transforms = transforms
        self.imgs = sorted(os.listdir(os.path.join(root, "images")))
        self.path_to_data_file = data_file
    def __getitem__(self, idx):
        # load images and bounding boxes
        img_path = os.path.join(self.root, "images", self.imgs[idx])
        img = Image.open(img_path).convert("RGB")
        box_list = parse_one_annot(self.path_to_data_file, self.imgs[idx])
        boxes = torch.as_tensor(box_list, dtype=torch.float32)
        num_objs = len(box_list)
        # there is only one class
        labels = torch.ones((num_objs,), dtype=torch.int64)
        image_id = torch.tensor([idx])
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:,0])
        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd
        if self.transforms is not None:
            img, target = self.transforms(img, target)
        return img, target
    def __len__(self):
        return len(self.imgs)

The transform stored in self.transforms takes one input argument, while you are trying to pass two to it in self.transforms(img, target).
I’m not familiar with your use case, but maybe you want to use the transforms from the torchvision detection references, which take two arguments, as seen here.
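For reference, a simplified sketch of the two-argument pattern those detection reference transforms use (not the exact file; the real references/detection/transforms.py also handles masks and keypoints):

import random
import torchvision.transforms.functional as F

class Compose(object):
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target):
        for t in self.transforms:
            image, target = t(image, target)
        return image, target

class ToTensor(object):
    def __call__(self, image, target):
        # only the image becomes a tensor; the target dict passes through
        image = F.to_tensor(image)
        return image, target

class RandomHorizontalFlip(object):
    def __init__(self, prob):
        self.prob = prob

    def __call__(self, image, target):
        # assumes the image is already a CHW tensor (i.e. ToTensor ran first)
        if random.random() < self.prob:
            width = image.shape[-1]
            image = image.flip(-1)
            bbox = target["boxes"]
            # mirror the x coordinates of the boxes
            bbox[:, [0, 2]] = width - bbox[:, [2, 0]]
            target["boxes"] = bbox
        return image, target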

Fixed the issue by following Putting Everything Together and copying the files from the suggested vision version v0.3.0 from GitHub to the working directory.
