Padding error in resgenerator

Here is the stack trace of my error, followed by the section of the code that caused it - could someone help me figure out what it means, please?

Error

result = self.forward(*input, **kwargs)
  File "/home/mia/CV/PyTorch-GAN/implementations/cyclegan/models.py", line 87, in forward
    return self.model(x)
  File "/home/mia/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mia/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/home/mia/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mia/anaconda3/lib/python3.6/site-packages/torch/nn/modules/padding.py", line 163, in forward
    return F.pad(input, self.padding, 'reflect')
  File "/home/mia/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 2163, in pad
    assert len(pad) == 2, '3D tensors expect 2 values for padding'
AssertionError: 3D tensors expect 2 values for padding

Code:

import torch.nn as nn

# ResidualBlock is defined elsewhere in models.py
class GeneratorResNet(nn.Module):
    def __init__(self, input_shape, num_residual_blocks):
        super(GeneratorResNet, self).__init__()
        channels = input_shape[0]

        # Initial convolution block
        out_features = 64
        model = [
            nn.ReflectionPad2d(channels),
            nn.Conv2d(channels, out_features, 7),
            nn.InstanceNorm2d(out_features),
            nn.ReLU(inplace=True),
        ]
        in_features = out_features

        # Downsampling
        for _ in range(2):
            out_features *= 2
            model += [
                nn.Conv2d(in_features, out_features, 3, stride=2, padding=1),
                nn.InstanceNorm2d(out_features),
                nn.ReLU(inplace=True),
            ]
            in_features = out_features

        # Residual blocks
        for _ in range(num_residual_blocks):
            model += [ResidualBlock(out_features)]

        # Upsampling
        for _ in range(2):
            out_features //= 2
            model += [
                nn.Upsample(scale_factor=2),
                nn.Conv2d(in_features, out_features, 3, stride=1, padding=1),
                nn.InstanceNorm2d(out_features),
                nn.ReLU(inplace=True),
            ]
            in_features = out_features

        # Output layer
        model += [nn.ReflectionPad2d(channels), nn.Conv2d(out_features, channels, 7), nn.Tanh()]

        self.model = nn.Sequential(*model)

    def forward(self, x):
        return self.model(x)

It seems you are trying to pass a 3-dimensional input to your model, while image tensors are expected to have the shape [batch_size, channels, height, width].
Could you check it?
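For example, something like this minimal sketch (the shapes and variable names are just assumptions, not your actual code) reproduces the issue and shows the fix:

import torch

# num_residual_blocks=0 so the posted snippet is self-contained (no ResidualBlock needed)
generator = GeneratorResNet(input_shape=(3, 256, 256), num_residual_blocks=0)

img = torch.randn(3, 256, 256)   # 3D tensor [channels, height, width] -> AssertionError in your PyTorch version
batch = img.unsqueeze(0)         # add the batch dimension: [1, 3, 256, 256]
out = generator(batch)           # works with [batch_size, channels, height, width]
print(out.shape)                 # torch.Size([1, 3, 256, 256])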

Yes that was the error! Thank you so much.

But now I think I have a normalization error in my output to Visdom, because I'm getting very white/oddly bright colored images during training, even after a very long time. Is this normal? The last layer of my architecture is tanh, so I renormalize by adding 1, dividing by 2, and multiplying by 255 before plotting in Visdom. I even checked the min/max values of the generated images and they're technically in range, but both the max and the min hover around 253 or 254.
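Roughly, the renormalization I do looks like this (simplified, variable names made up):

fake_img = generator(real_img)          # Tanh output, values in [-1, 1]
vis_img = (fake_img + 1) / 2 * 255      # rescale to [0, 255] before plotting in Visdom
print(vis_img.min().item(), vis_img.max().item())   # both end up around 253-254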

It seems a bit strange that both the min and max values are approximately 254.
That would make your images almost completely white.

However, could you transform the outputs to torch.uint8 before passing them to Visdom?
I’m not sure if Visdom tries to renormalize them if you pass float tensors.
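Something along these lines (just a sketch, with vis_img being your rescaled [0, 255] tensor):

import torch

vis_uint8 = vis_img.clamp(0, 255).to(torch.uint8)   # convert to uint8 so Visdom doesn't rescale float values
# viz.image(vis_uint8)                              # then pass the uint8 tensor to Visdom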

When I do that, some of the images go black and some don't…

Could I post my code here so you can see what's wrong?

Sure!
If possible, make sure the code is executable and remove any "unnecessary" parts that are not needed to reproduce the issue.