Conv2d & FullyConnect Output Size

Hello,

I am wondering why the output of my fc2 would be 8 * 5.
The input X is of size 128 * 3 * 32 * 32.

I have tried the following but it is not working…

x = x.view((128, 512))
print("0: ", x.shape)
x = x.view(-1, 512)
print("1: ", x.shape)

Any suggestion would be appreciated. Thank you!

def __init__(self):
    super().__init__()

    # TODO: define each layer
    self.conv1 = nn.Conv2d(3, 16, (5,5), stride = (2,2))

    self.conv2 = nn.Conv2d(16, 64, (5,5), stride = (2,2))

    self.conv3 = nn.Conv2d(64, 32, (5,5), stride = (2,2))

    self.fc1 = nn.Linear(512, 64)

    self.fc2 = nn.Linear(64, 32)

    self.fc3 = nn.Linear(32, 5)
    #

    self.init_weights()

def forward(self, x):
    N, C, H, W = x.shape

    # TODO: forward pass
    print("x.shape")
    print(x.shape)
    x = F.relu(self.conv1(x))
    x = F.relu(self.conv2(x))
    x = F.relu(self.conv3(x))
    x = x.view(-1, 512)
    x = F.relu(self.fc1(x))
    x = F.relu(self.fc2(x))
    z = self.fc3(x)
    print(z)
    #

    return z

The output of your fc2 layer is defined as 32, so I’m not sure why you assume it should be 40?

Anyway, your model architecture won’t currently work, since the output activation of conv3 will have the shape [batch_size, 32, 1, 1], so the number of input features to fc1 should be 32 instead of 512.
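You can verify this with the standard Conv2d output-size formula, out = floor((in + 2*pad - kernel) / stride) + 1. A quick sketch in plain Python, using the kernel size 5 and stride 2 from your layers (no padding, so pad = 0):

```python
def conv_out(size, kernel, stride, pad=0):
    # Conv2d spatial output size: floor((in + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

h = 32  # input H (and W) from the 128 x 3 x 32 x 32 batch
for name in ("conv1", "conv2", "conv3"):
    h = conv_out(h, kernel=5, stride=2)
    print(name, "->", h)
# conv1 -> 14
# conv2 -> 5
# conv3 -> 1
```

So conv3 ends at 1x1, and the flattened activation has 32 * 1 * 1 = 32 features per sample.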

I figured it out by changing the padding to the right size. Thanks!

Now the output of my conv3 is torch.Size([128, 32, 4, 4])

Is view(-1, 512) the right way of “reshaping” the tensor as the input to fc1?

The output of conv3 would be [batch_size, 32, 1, 1], and I would recommend using x = x.view(x.size(0), -1) after this layer to keep the batch size the same.
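To see the difference between the two calls: view only redistributes the N * C * H * W elements, so fixing the wrong dimension silently changes the batch size. A small sketch of the shape arithmetic, using the [128, 32, 1, 1] conv3 output from the original architecture (no torch needed):

```python
N, C, H, W = 128, 32, 1, 1   # conv3 output shape in the original architecture
total = N * C * H * W        # 4096 elements in the tensor

# x.view(-1, 512): second dim is fixed at 512, first dim is inferred
wrong_batch = total // 512   # batch dim collapses from 128 to 8
# x.view(x.size(0), -1): first dim is fixed at N, feature dim is inferred
features = total // N        # 32 features, the correct fc1 input size
print(wrong_batch, features)
# 8 32
```

This is also where the 8 in your 8 * 5 output came from: view(-1, 512) shrank the batch to 8, and fc3 then produced 5 outputs per (wrongly merged) sample.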