Layer Reshape Issue

I have dynamically created my network, but I am facing an issue when I try to reshape a linear layer's output back to a convolutional shape.

I am working with the MNIST dataset.

This is my network:
(0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): ReLU()
(5): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
(6): Flatten()
(7): Linear(in_features=3136, out_features=32, bias=True)
(8): ReLU()
(9): Linear(in_features=32, out_features=3136, bias=True)
(10): ReLU()
(11): ReshapeLayer()
(12): MaxUnpool2d(kernel_size=(2, 2), stride=(2, 2), padding=(0, 0))
(13): ConvTranspose2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(14): ReLU()
(15): MaxUnpool2d(kernel_size=(2, 2), stride=(2, 2), padding=(0, 0))
(16): ConvTranspose2d(32, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(17): ReLU()

I tried to print the output shapes of each layer. These are the shapes I get, followed by the error:
torch.Size([50, 32, 28, 28])
torch.Size([50, 32, 28, 28])
torch.Size([50, 32, 14, 14])
torch.Size([50, 64, 14, 14])
torch.Size([50, 64, 14, 14])
torch.Size([50, 64, 7, 7])
torch.Size([50, 3136])
torch.Size([50, 32])
torch.Size([50, 32])
torch.Size([50, 3136])
torch.Size([50, 3136])
torch.Size([50, 64, 7, 7])
x = layer(x)
File "SomePath/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'indices'

And this is what my custom layers look like:
from torch import nn

class ReshapeLayer(nn.Module):

    def __init__(self, batch_size, n_channels, height, width):
        super(ReshapeLayer, self).__init__()
        self.shape = [batch_size, n_channels, height, width]

    def forward(self, x):
        a = x.view(self.shape)
        return a

class Flatten(nn.Module):

    def __init__(self):
        super(Flatten, self).__init__()

    def forward(self, x):
        x = x.view(x.size()[0], -1)
        return x
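
For context, given the shapes printed above, the ReshapeLayer is presumably constructed to undo the Flatten, turning the 3136 = 64 * 7 * 7 linear features back into a (batch, 64, 7, 7) tensor. A hedged usage sketch (the actual constructor arguments are not shown in the post), reusing the ReshapeLayer class above:

import torch

# Assumed construction matching the printed shapes: 64 * 7 * 7 = 3136 features.
reshape = ReshapeLayer(50, 64, 7, 7)   # batch_size, n_channels, height, width
flat = torch.rand(50, 3136)
print(reshape(flat).shape)             # torch.Size([50, 64, 7, 7])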

MaxUnpool2d requires an indices argument; you can get it from your MaxPool2d layers by specifying return_indices=True.

How exactly do you retrieve these indices for the MaxUnpool2d layer?
Can you explain with an example?

Here’s a minimal example of pool and unpool:

import torch
from torch import nn

x = torch.rand(1, 3, 540, 960)
pool = nn.MaxPool2d(2, 2, return_indices=True)
unpool = nn.MaxUnpool2d(2, 2)

out, indices = pool(x)
out = unpool(out, indices)
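
Note that if the layers live in an nn.Sequential (as in the network above), the indices cannot be forwarded automatically, because Sequential passes only a single tensor between modules. A minimal sketch of a custom module whose forward keeps the pooling indices and hands them to the matching MaxUnpool2d might look like this (the class name SimpleUnpoolNet and the layer sizes are only illustrative, not the original code):

import torch
from torch import nn

class SimpleUnpoolNet(nn.Module):
    def __init__(self):
        super(SimpleUnpoolNet, self).__init__()
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, 2, return_indices=True)  # ask the pool for its indices
        self.unpool = nn.MaxUnpool2d(2, 2)
        self.deconv = nn.ConvTranspose2d(32, 1, kernel_size=3, padding=1)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x, indices = self.pool(x)    # keep the indices from pooling
        x = self.unpool(x, indices)  # pass them back to the unpooling layer
        x = torch.relu(self.deconv(x))
        return x

x = torch.rand(50, 1, 28, 28)
print(SimpleUnpoolNet()(x).shape)  # torch.Size([50, 1, 28, 28])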

How do you print the output shapes of each layer of your net? Could you share your code?
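
One common way to get a per-layer shape printout like the one above (assuming the layers are stored in something iterable such as an nn.Sequential; this is not necessarily the original poster's code) is to loop over the layers and print after each step:

import torch
from torch import nn

# Illustrative model only; iterate over the layers and print each output shape.
layers = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),
)

x = torch.rand(50, 1, 28, 28)
for layer in layers:
    x = layer(x)
    print(x.shape)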