# F.interpolate weird behaviour

Hi !

So I might be missing something basic, but I’m getting a weird behavior with `F.interpolate`:

I created a 3D tensor using:

```
t = torch.randn(4, 4, 4)
```

Or at least I thought it was 3D, but `F.interpolate` doesn’t seem to agree. The following code:

```
F.interpolate(t, scale_factor=(1, 2, 1))
```

gives the error

```
ValueError: scale_factor shape must match input shape. Input is 1D, scale_factor size is 3
```

What am I missing here?


The error message might be a bit misleading, but it refers to an input of shape `[batch_size, channels, *additional_dims]`.
With a shape of `[4, 4, 4]`, you are therefore providing a “1D” input: a batch of 4 samples with 4 channels and a single spatial dimension of length 4, so `scale_factor` needs one value, not three.
Here would be an example using an image tensor:

```
batch_size, c, h, w = 1, 3, 4, 4
x = torch.randn(batch_size, c, h, w)
x = F.interpolate(x, scale_factor=(2, 1))
print(x.shape)
# > torch.Size([1, 3, 8, 4])
```

As you can see, the batch size and the channels aren’t changed; only the spatial dimensions are interpolated.
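Applied to the original `(4, 4, 4)` tensor, a quick sketch of both options — accepting the single-spatial-dim interpretation, or unsqueezing so all three dims of 4 count as spatial:

```python
import torch
import torch.nn.functional as F

t = torch.randn(4, 4, 4)  # interpreted as [batch=4, channels=4, length=4]

# Option 1: accept the "1D" interpretation and pass a single scale factor
out1 = F.interpolate(t, scale_factor=2)  # default mode='nearest'
print(out1.shape)  # torch.Size([4, 4, 8])

# Option 2: add batch and channel dims so all three 4s are spatial dims
t5d = t.unsqueeze(0).unsqueeze(0)  # [1, 1, 4, 4, 4]
out2 = F.interpolate(t5d, scale_factor=(1, 2, 1))  # a 3-tuple is now valid
print(out2.shape)  # torch.Size([1, 1, 4, 8, 4])
```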


@ptrblck Hi, ptr.

What about a tensor with `c=1`, e.g.:

```
torch.randn(B, 1, 512, 512)
```

When I try to interpolate it, I get this error:

```
ValueError: size shape must match input shape. Input is 1D, size is 2
```

It works for me:

```
batch_size, c, h, w = 3, 1, 512, 512
x = torch.randn(batch_size, c, h, w)
x = F.interpolate(x, scale_factor=(2, 1))
```

Based on your error message, I guess you are actually passing a 1D tensor?


I get the error `ValueError: size shape must match input shape. Input is 1D, size is 2` when I apply a 2D `Resize` transform to a 2D feature tensor. However, I am not sure why it tells me my feature is 1D. What do you think?

`features` is: `torch.Size([503, 512])`

```
  transformed_features = self.transform(features)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
    img = t(img)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 297, in forward
    return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 403, in resize
    return F_t.resize(img, size=size, interpolation=interpolation.value, max_size=max_size, antialias=antialias)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py", line 552, in resize
    img = interpolate(img, size=[new_h, new_w], mode=interpolation, align_corners=align_corners)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/functional.py", line 3630, in interpolate
    raise ValueError(
ValueError: size shape must match input shape. Input is 1D, size is 2
```

and transforms are:

```
train_transforms = transforms.Compose(
    [
        transforms.Resize((256, 512)),
        transforms.RandomResizedCrop(256),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ]
)

val_transforms = transforms.Compose(
    [
        transforms.Resize((256, 512)),
        transforms.CenterCrop(256),
        transforms.ToTensor(),
    ]
)
```

I apply the transform like this:
` dataset_train = GraphDataset(os.path.join(data_path, ""), ids_train, transform=train_transforms)`

and I have:

```
class GraphDataset(data.Dataset):
    """input and label image dataset"""

    def __init__(self, root, ids, target_patch_size=-1, transform=None):
        *code*

    def __getitem__(self, index):
        *code*
        transformed_features = self.transform(features)
        # sample['image'] = features
        sample['image'] = transformed_features
        return sample
```

I am not precisely sure what I am doing wrong.

Assuming you are passing a tensor to the transformation, it should have at least 3 dims as `[channels, height, width]` and can have additional leading dimensions (e.g. the batch dimension).
If that’s the case, `unsqueeze` the missing channel dimension and remove the `ToTensor` transformation, since your input is already a tensor.

```
transform = transforms.Resize((256, 512))
x = torch.randn(3, 400, 600)
out = transform(x)
print(out.shape)
# > torch.Size([3, 256, 512])

x = torch.randn(503, 512)
out = transform(x)
# > ValueError: size shape must match input shape. Input is 1D, size is 2
```
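If the `[503, 512]` features really should be resized like a single-channel image, one way to do it (a sketch using `F.interpolate` directly, so it only needs `torch`) is to add the missing batch and channel dims first:

```python
import torch
import torch.nn.functional as F

features = torch.randn(503, 512)        # 2D feature matrix, no channel dim
x = features.unsqueeze(0).unsqueeze(0)  # [1, 1, 503, 512]
out = F.interpolate(x, size=(256, 512), mode='bilinear', align_corners=False)
out = out[0, 0]                         # back to [256, 512]
print(out.shape)  # torch.Size([256, 512])
```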

Thanks a lot for your response. I removed `ToTensor` from the transformations and get the same error:

```
train_transforms = transforms.Compose(
    [
        transforms.Resize((256, 512)),
        transforms.RandomResizedCrop(256),
        transforms.RandomHorizontalFlip(),
        # transforms.ToTensor(),
    ]
)

val_transforms = transforms.Compose(
    [
        transforms.Resize((256, 512)),
        transforms.CenterCrop(256),
        # transforms.ToTensor(),
    ]
)
```
```
  transformed_features = self.transform(features)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
    img = t(img)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 297, in forward
    return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 403, in resize
    return F_t.resize(img, size=size, interpolation=interpolation.value, max_size=max_size, antialias=antialias)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py", line 552, in resize
    img = interpolate(img, size=[new_h, new_w], mode=interpolation, align_corners=align_corners)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/functional.py", line 3630, in interpolate
    raise ValueError(
ValueError: size shape must match input shape. Input is 1D, size is 2

torch.Size([2957, 512])
```

This is despite the fact that `features` is a 2D input:
`features size: torch.Size([1813, 512])`
here:

```
print("features size: ", features.shape)
transformed_features = self.transform(features)
```

As my code snippet shows, a tensor with at least 3 dims is expected, while you are passing a 2D one. You can copy-paste my code, check the error, and compare it to the working solution.