Next(iter(dataloader)) error with positional arguments

As a solution to this question I posed, I changed an import statement for transforms from

import transforms as T

to

from torchvision import transforms as T

I did this in order to fix the following function:

def get_transform(train):
    transforms = []
    # Resize expects integer sizes, so cast the computed height to int
    transforms.append(T.Resize((int(400*5312/2988), 400)))
    # converts the image, a PIL image, into a PyTorch Tensor
    transforms.append(T.ToTensor())
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transforms.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transforms)

The linked Q&A supplies more detail.

Now, with the torchvision import, I get a new error:

TypeError: __call__() takes 2 positional arguments but 3 were given

I dug around and found similar questions. This new error comes from transforms.Compose only being able to take two positional arguments. A suggested solution was to write a custom Compose class, so I added one to my code. The transform section now looks like this:

from torchvision import transforms as T

class MyCompose(object):
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, img, tar):
        for t in self.transforms:
            img, tar = t(img, tar)
        return img, tar

def get_transform(train):
    transforms = []
    # Resize expects integer sizes, so cast the computed height to int
    transforms.append(T.Resize((int(400*5312/2988), 400)))
    # converts the image, a PIL image, into a PyTorch Tensor
    transforms.append(T.ToTensor())
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transforms.append(T.RandomHorizontalFlip(0.5))
    return MyCompose(transforms)

Now, I get this error:


TypeError                                 Traceback (most recent call last)

in <module>()
      5     collate_fn=utils.collate_fn)
      6 # For Training
----> 7 images, targets = next(iter(data_loader))
      8 images = list(image for image in images)
      9 targets = [{k: v for k, v in t.items()} for t in targets]

3 frames

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    515     if self._sampler_iter is None:
    516         self._reset()
--> 517     data = self._next_data()
    518     self._num_yielded += 1
    519     if self._dataset_kind == _DatasetKind.Iterable and \

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
   1197         else:
   1198             del self._task_info[idx]
--> 1199            return self._process_data(data)
   1200
   1201     def _try_put_index(self):

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _process_data(self, data)
   1223     self._try_put_index()
   1224     if isinstance(data, ExceptionWrapper):
--> 1225        data.reraise()
   1226     return data
   1227

/usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self)
    427         # have message field
    428         raise self.exc_type(message=msg)
--> 429     raise self.exc_type(msg)
    430
    431

TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "", line 75, in __getitem__
    img, target = self.transforms(img, target)
  File "", line 11, in __call__
    img, tar = t(img, tar)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
TypeError: forward() takes 2 positional arguments but 3 were given

I don't know where forward() is called exactly, but it appears to be part of the next(iter(data_loader)) execution. More importantly, I don't know what the positional arguments are. I expect them to be whatever parts of the dataset end up in the images and targets objects, but I don't see what in MyCompose would have added another positional argument. How should I fix this?
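If it helps, I think this minimal snippet reproduces the same error, assuming the standard torchvision transforms (the images here are just dummies, not my data):

from PIL import Image
from torchvision import transforms as T

img = Image.new("RGB", (100, 100))   # dummy image
tar = Image.new("L", (100, 100))     # dummy ground truth

flip = T.RandomHorizontalFlip(0.5)
flip(img)       # fine: torchvision transforms expect a single input
flip(img, tar)  # TypeError: forward() takes 2 positional arguments but 3 were given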

The new error is raised by the torchvision.transforms used internally, which only accept a single input.
The linked post shows a custom Compose implementation that accepts two inputs; note, however, that it internally used a custom transform_ToNumpy transformation, which also accepts two inputs.
In your case you could thus apply the torchvision.transforms to both inputs separately.
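A minimal sketch of that idea could look like this (PairedTransform is just an illustrative name, not part of torchvision):

from torchvision import transforms as T

class PairedTransform(object):
    # Wraps a single-input torchvision transform so it fits into a
    # two-input Compose: the wrapped transform is applied to the image
    # and to the target separately.
    def __init__(self, transform):
        self.transform = transform

    def __call__(self, img, tar):
        return self.transform(img), self.transform(tar)

class MyCompose(object):
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, img, tar):
        for t in self.transforms:
            img, tar = t(img, tar)
        return img, tar

def get_transform(train):
    transforms = []
    transforms.append(PairedTransform(T.Resize((int(400*5312/2988), 400))))
    transforms.append(PairedTransform(T.ToTensor()))
    if train:
        transforms.append(PairedTransform(T.RandomHorizontalFlip(0.5)))
    return MyCompose(transforms)

Note that wrapping a random transform this way would sample the flip for the image and the target independently, so for random augmentations you would want to use the functional API (torchvision.transforms.functional) and make the random decision once for both inputs.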

Thanks for the reply, ptrblck.

I did try something like

import torchvision.transforms.functional as TF

def segmentation_transform(image, tar, train=False):
    # apply the same transforms to the image and the ground truth
    image_cc = TF.center_crop(image, 1600)
    tar_cc = TF.center_crop(tar, 1600)
    image_T = TF.to_tensor(image_cc)
    tar_T = TF.to_tensor(tar_cc)
    if train:
        # note: this flips every training sample rather than flipping randomly
        image_T = TF.hflip(image_T)
        tar_T = TF.hflip(tar_T)
    return image_T, tar_T

I'm not sure if that is what you meant. I got a similar positional-argument error with it when I run

dataset = four_chs(root='/content/drive/MyDrive/data', transforms=get_transform(train=True))
data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=1, shuffle=True, num_workers=2,
    collate_fn=utils.collate_fn)

# For Training
images,targets = next(iter(data_loader))

Unfortunately, it is still not clear to me how to do that. Either I need to modify the transform function so that I can use T.Resize() or T.CenterCrop() with a size argument (or both) inside it without getting the positional-argument error, or I need to write a new Compose class that can run the transforms on the Dataset I create and read in with the DataLoader.
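For reference, something along these lines is what I'm picturing: a paired transform built from the functional API so the image and ground truth stay in sync (just a sketch, and paired_transform is my own name):

import random
import torchvision.transforms.functional as TF

def get_transform(train):
    # returns a callable taking (img, target), matching the
    # self.transforms(img, target) call in my Dataset's __getitem__
    def paired_transform(img, tar):
        # deterministic transforms can be applied to both inputs directly
        img = TF.center_crop(img, 1600)
        tar = TF.center_crop(tar, 1600)
        img = TF.to_tensor(img)
        tar = TF.to_tensor(tar)
        if train and random.random() < 0.5:
            # make the random decision once so image and ground truth stay aligned
            img = TF.hflip(img)
            tar = TF.hflip(tar)
        return img, tar

    return paired_transform

Is something like this the right direction, or is there a cleaner way to do it with a Compose class?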