ConvTranspose2d on one (1) pixel

The goal: transform a 1D vector of length 512 into a 3x299x299 image.

The attempt is to turn each value of the vector into a 1x1 channel and then deconvolve/upsample from there. Sadly, running ConvTranspose2d over a tensor of size (512, 1, 1) throws:

Kernel size can not be greater than actual input size

Below is an illustration of the desired transformation.

I can’t be the first one to try this. How is this usually done?

It works for me when creating a 2x2 output from a single pixel:

import torch
import torch.nn as nn

x = torch.randn(1, 1, 1, 1)
conv_trans = nn.ConvTranspose2d(1, 1, kernel_size=2)
out = conv_trans(x)
print(out.shape)
> torch.Size([1, 1, 2, 2])

so I guess your setup might be different?
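For the broader question of growing a 512-dim vector into an image, the usual pattern is a stack of ConvTranspose2d layers, DCGAN-style: reshape the vector to (N, 512, 1, 1) and double the spatial size per layer. The channel counts and layer sizes below are illustrative assumptions, not from this thread; since powers of two don’t land exactly on 299, the sketch finishes with an interpolate to the target size.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: grow a 1x1 "pixel" with 512 channels into an image.
# Spatial size per layer: out = (in - 1) * stride - 2 * padding + kernel_size
net = nn.Sequential(
    nn.ConvTranspose2d(512, 256, kernel_size=4, stride=1, padding=0),  # 1 -> 4
    nn.ReLU(),
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),  # 4 -> 8
    nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # 8 -> 16
    nn.ReLU(),
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),     # 16 -> 32
)

z = torch.randn(1, 512)                    # the length-512 vector
img = net(z.view(1, 512, 1, 1))            # -> (1, 3, 32, 32)
# 299 is not a power of two, so resize to the exact target at the end
img = nn.functional.interpolate(img, size=(299, 299),
                                mode="bilinear", align_corners=False)
print(img.shape)  # torch.Size([1, 3, 299, 299])
```

Adding more stride-2 layers (32 -> 64 -> 128 -> 256) before the final resize would get closer to 299 natively; the interpolate just closes the last gap.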


You are right. Since I had built the convolutions in a loop, I had set stride=2, which was why it wasn’t working.

Thank you!