Problem with implementing the new channels-last feature

Hi, I am trying to apply https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html to my model, but the channels are still not converted. Can you help me? Is it even possible to convert this model?
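
For context, this is the basic pattern from the tutorial as I understand it (a simplified sketch; the layer and shapes here are just placeholders):

import torch
import torch.nn as nn

# convert the model parameters to the channels-last memory format
model = nn.Conv2d(2, 8, kernel_size=(3, 1)).to(memory_format=torch.channels_last)

# the input keeps its logical [N, C, H, W] shape; only the strides change
x = torch.randn(1, 2, 2048, 1).to(memory_format=torch.channels_last)
print(x.is_contiguous(memory_format=torch.channels_last))  # True

out = model(x)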

Here is my GAN generator model:

import torch
from torch import nn, cat
from torch.nn import (Sequential, Conv2d, LeakyReLU, BatchNorm2d,
                      Upsample, ZeroPad2d, ReLU, Dropout, Sigmoid)


class UNetDown(nn.Module):
    def __init__(self, input_size: int, output_filters: int, normalize=True):
        super(UNetDown, self).__init__()

        self.model = Sequential(
            Conv2d(input_size, output_filters, kernel_size=(4, 1), padding=(1, 0), stride=(2, 1), bias=False),
            LeakyReLU(0.2)
        )

        if normalize:
            self.model.add_module("BatchNorm2d", BatchNorm2d(output_filters, momentum=0.8))

    def forward(self, x):
        return self.model(x)


class UNetUp(nn.Module):
    def __init__(self, input_size: int, output_filters: int, dropout=0.0):
        super(UNetUp, self).__init__()

        self.model = Sequential(
            Upsample(scale_factor=(2, 1)),
            ZeroPad2d((0, 0, 1, 0)),
            Conv2d(input_size, output_filters, kernel_size=(4, 1), stride=1, padding=(1, 0), bias=False),
            ReLU(inplace=True),
            BatchNorm2d(output_filters, momentum=0.8),
        )

        if dropout:
            self.model.add_module("Dropout", Dropout(dropout))

    def forward(self, layer, skip_input):
        layer = self.model(layer)
        # concatenate the upsampled features with the skip connection along the channel dim
        layer = cat((layer, skip_input), 1)

        return layer


"""
Implementation based on UNet generator
"""


class Generator(nn.Module):
    def __init__(self, file_shape: tuple, output_filters=8, output_channels=2):
        super(Generator, self).__init__()

        # DownSampling
        self.down1 = UNetDown(file_shape[2], output_filters, normalize=False)
        self.down2 = UNetDown(output_filters, output_filters * 2)
        self.down3 = UNetDown(output_filters * 2, output_filters * 4)
        self.down4 = UNetDown(output_filters * 4, output_filters * 8)
        self.down5 = UNetDown(output_filters * 8, output_filters * 8)
        self.down6 = UNetDown(output_filters * 8, output_filters * 8)
        self.down7 = UNetDown(output_filters * 8, output_filters * 8)

        # UpSampling
        self.up1 = UNetUp(output_filters * 8, output_filters * 8)
        self.up2 = UNetUp(output_filters * 16, output_filters * 8)
        self.up3 = UNetUp(output_filters * 16, output_filters * 8)
        self.up4 = UNetUp(output_filters * 16, output_filters * 4)
        self.up5 = UNetUp(output_filters * 8, output_filters * 2)
        self.up6 = UNetUp(output_filters * 4, output_filters)

        self.last = nn.Sequential(
            Upsample(scale_factor=(2, 4)),
            ZeroPad2d((0, 0, 1, 0)),
            Conv2d(output_filters * 2, output_channels, kernel_size=4, stride=1, padding=(1, 0)),
            Sigmoid(),
        )

    def forward(self, x):
        d1 = self.down1(x)
        d2 = self.down2(d1)
        d3 = self.down3(d2)
        d4 = self.down4(d3)
        d5 = self.down5(d4)
        d6 = self.down6(d5)
        d7 = self.down7(d6)

        u1 = self.up1(d7, d6)
        u2 = self.up2(u1, d5)
        u3 = self.up3(u2, d4)
        u4 = self.up4(u3, d3)
        u5 = self.up5(u4, d2)
        u6 = self.up6(u5, d1)

        return self.last(u6)

What kind of error message are you seeing?
Could you post the code you are using to reproduce this issue, as I’m not sure which input shape to use to get your model working. 🙂

Sure, thanks for the quick reply! My input shape is (2048, 1, 2) -> H, W, C format.

And this is how I create an instance of Generator in the __init__ method of the other class:

        self.file_shape = (2048, 1, 2)
        self.generator = Generator(self.file_shape).to(memory_format=torch.channels_last)

And this is the usage in my training loop, where real_B is a tensor of shape (100, 2048, 1, 2) -> BHWC format:

fake_A = self.generator(real_B.to(memory_format=torch.channels_last))

And the error message is:

RuntimeError: Given groups=1, weight of size [8, 2, 4, 1], expected input[100, 2048, 1, 2] to have 2 channels, but got 2048 channels instead

Thanks for the update!
There seems to be a small misunderstanding.
You should still create the tensors in the default [N, C, H, W] format and just call to(memory_format=torch.channels_last) on them, so that your code changes stay minimal.
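
If your real_B really is stored as [N, H, W, C], one way to get it into the expected logical layout first is a permute (a sketch based on your (100, 2048, 1, 2) shape):

# real_B: [100, 2048, 1, 2] = [N, H, W, C]
x = real_B.permute(0, 3, 1, 2)  # logical shape becomes [N, C, H, W] = [100, 2, 2048, 1]
x = x.contiguous(memory_format=torch.channels_last)  # physical layout becomes NHWC
fake_A = self.generator(x)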

This code works for me:

model = Generator((2048, 1, 2)).cuda().to(memory_format=torch.channels_last)
x = torch.randn(100, 2, 2048, 1).cuda().to(memory_format=torch.channels_last)
out = model(x)
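
To double check that the conversion actually took effect, you could inspect the memory format of the input and output:

print(x.is_contiguous(memory_format=torch.channels_last))    # True
print(out.is_contiguous(memory_format=torch.channels_last))  # True, if all layers preserve the format
print(x.stride())  # the channel dimension now has stride 1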

Does .cuda() need to be called? Let's say I want to train it only on the CPU; is GPU training required?

Yes, you would need to use the GPU and also cuDNN >= 7603 to use this experimental memory format.
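
You can check both locally like this:

import torch

print(torch.cuda.is_available())       # needs to be True
print(torch.backends.cudnn.version())  # should be >= 7603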

And is there some option for CPU training? I don't have CUDA on my Mac.

I don’t think you would see any benefits from the channels-last format on the CPU.
CC @VitalyFedyunin if I’m mistaken.

I am just curious because I have a model in Keras which uses this channels-last format and works fine, but with channels-first the results in PyTorch are poor. To be more specific, it is this issue: Problem with GAN(Pip2Pix) discriminator and generator loss.

Usually we see different convergence behavior between frameworks for a variety of reasons, such as a mismatch in the model architecture, a different initialization of the parameters, a different usage of schedulers and optimizers, different pre- or post-processing, etc.
The memory format hasn’t been on this list so far, so do you observe that the model converges in one memory format and fails in the other?
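
If you want to rule the memory format out directly, you could compare the outputs of the same model in both formats (a sketch, assuming a CUDA setup and the Generator from above):

model = Generator((2048, 1, 2)).cuda().eval()  # eval() freezes the batchnorm statistics
x = torch.randn(100, 2, 2048, 1).cuda()

with torch.no_grad():
    out_default = model(x)
    model = model.to(memory_format=torch.channels_last)
    out_cl = model(x.to(memory_format=torch.channels_last))

# the outputs should match up to floating point noise
print(torch.allclose(out_default, out_cl, atol=1e-5))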

As you can see in the link I attached, the models are the same and the only difference is the memory format.

The architectures don’t seem to yield the same shape(s), so let’s keep the discussion in the other topic for further debugging.