PyTorch version of SpatialFullConvolution

I am trying to recreate a torch7 model architecture in PyTorch. The model uses several layers of SpatialFullConvolution, and I was wondering whether there is anything analogous to that in PyTorch. I have not been able to find anything similar by name.

ConvTranspose2d

http://pytorch.org/docs/nn.html#convtranspose2d
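
For what it's worth, the arguments map over fairly directly. A minimal sketch (the channel, kernel, and stride values here are just for illustration):

import torch
import torch.nn as nn

# torch7:  nn.SpatialFullConvolution(nInputPlane, nOutputPlane, kW, kH, dW, dH, padW, padH, adjW, adjH)
# PyTorch: nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride, padding, output_padding)
deconv = nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2)

x = torch.randn(1, 16, 32, 32)  # NCHW input
print(deconv(x).size())         # (1, 8, 64, 64), i.e. stride-2 upsampling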

Okay. I made the changes but now I am getting a runtime error:

RuntimeError: input and target have different number of elements:
input[128 x 1 x 128 x 128] has 2097152 elements, while target[128 x 2 x 128 x 128] has 4194304 elements

This is the code for my model architecture:

import torch.nn as nn
import torch.nn.functional as F

class ColorizerNet(nn.Module):

    def __init__(self):
        super(ColorizerNet, self).__init__()
        # Encoder: two stride-2 convolutions, each halving the spatial size
        self.layer1 = nn.Conv2d(1, 8, 2, 2)
        self.layer2 = nn.Conv2d(8, 16, 2, 2)
        # Decoder: two stride-2 transposed convolutions, each doubling it back
        self.layer3 = nn.ConvTranspose2d(16, 8, 2, 2)
        self.layer4 = nn.ConvTranspose2d(8, 1, 2, 2)

    def forward(self, x):
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        x = F.relu(self.layer3(x))
        x = F.relu(self.layer4(x))
        return x
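
A quick sanity check of the output shape with a dummy batch (assuming 128x128 grayscale inputs, which matches the sizes in the error message):

import torch

model = ColorizerNet()
x = torch.randn(128, 1, 128, 128)  # batch of single-channel 128x128 images
out = model(x)
print(out.size())                  # (128, 1, 128, 128), matches "input" in the error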

Am I making any obvious errors here? If required, I can start a separate thread for this follow-up error.

It’s an error raised by the loss function: your output’s second (channel) dimension has size 1, while the target’s has size 2, which is why the element counts differ (128 x 1 x 128 x 128 = 2097152 vs. 128 x 2 x 128 x 128 = 4194304).
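
If your targets really do have 2 channels (for example, the two chrominance channels in a Lab colorization setup, which is an assumption on my part), the usual fix is to have the last layer produce 2 output channels instead of 1:

self.layer4 = nn.ConvTranspose2d(8, 2, 2, 2)  # out_channels=2 to match the target

That makes the output [128 x 2 x 128 x 128], the same number of elements as the target.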
