I’m trying to construct a convolutional autoencoder, wrapped in a class for convenience. However, when I perform the MaxUnpooling in the decoder I get a “missing indices” error, because (as noted in many posts) you have to pass MaxUnpool2d the indices returned by the corresponding MaxPool2d in the encoder.
However, I’m unsure how to wrap this into the class…
class Autoencoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Conv2d(1, 64, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2),
            torch.nn.Conv2d(128, 256, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2))
        self.decoder = torch.nn.Sequential(
            torch.nn.MaxUnpool2d(2),
            torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(256, 128, 3, padding=1),
            torch.nn.MaxUnpool2d(2),
            torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(128, 64, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(64, 1, 3, padding=1),
            torch.nn.ReLU())

    def forward(self, x):
        features = self.encoder(x)
        output = self.decoder(features)
        return output
It seems inefficient to split the decoder at every MaxUnpool in the forward pass just to call a separate unpooling function, so is there a nice way to wrap the MaxUnpooling inside the class?
Edit: I found this post from last year, so maybe this is still not possible: MaxUnpool2d with indices from MaxPool2d, all in nn.Sequential
Many thanks in advance for any advice!
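For what it’s worth, here is a minimal sketch of the “split” workaround I was considering, in case it helps frame the question. It constructs each MaxPool2d with `return_indices=True` and breaks the encoder/decoder into stages (the stage names `enc1`, `pool1`, etc. are just my own labels) so the forward pass can hand each set of indices to the matching MaxUnpool2d:

```python
import torch

class Autoencoder(torch.nn.Module):
    """Sketch: encoder/decoder split into stages so the indices from each
    MaxPool2d (return_indices=True) can be passed to its MaxUnpool2d."""
    def __init__(self):
        super().__init__()
        self.enc1 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 64, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, 3, padding=1),
            torch.nn.ReLU())
        self.pool1 = torch.nn.MaxPool2d(2, return_indices=True)
        self.enc2 = torch.nn.Sequential(
            torch.nn.Conv2d(128, 256, 3, padding=1),
            torch.nn.ReLU())
        self.pool2 = torch.nn.MaxPool2d(2, return_indices=True)
        self.unpool2 = torch.nn.MaxUnpool2d(2)
        self.dec2 = torch.nn.Sequential(
            torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(256, 128, 3, padding=1))
        self.unpool1 = torch.nn.MaxUnpool2d(2)
        self.dec1 = torch.nn.Sequential(
            torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(128, 64, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(64, 1, 3, padding=1),
            torch.nn.ReLU())

    def forward(self, x):
        x = self.enc1(x)
        x, idx1 = self.pool1(x)   # keep indices from the first pooling
        x = self.enc2(x)
        x, idx2 = self.pool2(x)   # keep indices from the second pooling
        x = self.unpool2(x, idx2) # unpool in reverse order
        x = self.dec2(x)
        x = self.unpool1(x, idx1)
        return self.dec1(x)
```

This runs (e.g. a `(N, 1, 28, 28)` input comes back out at `(N, 1, 28, 28)`), but it’s exactly the manual splitting I’d like to avoid, since the indices can’t flow through a plain nn.Sequential.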