Hi all,

I am trying to calculate the input size of a linear layer from the output of the previous convolutional layer. This calculation depends on the input size: for each input size, a conv layer produces a different output shape. Below is my network:

```
class Net(nn.Module):
    def __init__(self, k=16):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=(3, 3))
        self.act1 = nn.ReLU()
        self.conv2 = nn.Conv2d(16, 16, kernel_size=(3, 3))
        self.act2 = nn.ReLU()
        self.conv3 = nn.Conv2d(16, 16, kernel_size=(3, 3))
        self.act3 = nn.ReLU()
        self.flat = nn.Flatten()
        self.fc = nn.Linear(k, k // 2)  # k is to be calculated with each step.
        self.act4 = nn.ReLU()
        self.fc2 = nn.Linear(k // 2, 1)

    def forward(self, x):
        # x has shape [batch, 1, n, m]; compare both spatial dims,
        # not x.size(0), which is the batch size.
        if x.size(-1) <= 4 or x.size(-2) <= 4:
            x = self.act1(self.conv1(x))
        elif x.size(-1) >= 32 and x.size(-2) >= 32:
            x = self.act1(self.conv1(x))
            x = self.act2(self.conv2(x))
            x = self.act3(self.conv3(x))
        else:
            x = self.act1(self.conv1(x))
            x = self.act2(self.conv2(x))
        x = self.flat(x)  # [batch, features]; nn.Linear acts on the last dim
        # here the dimension should be calculated
        # and here the new layer dimensions should be updated
        x = self.act4(self.fc(x))
        x = self.fc2(x)
        return x
```
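To illustrate why k varies: each 3×3 convolution (stride 1, no padding) shrinks every spatial dimension by 2, so the flattened feature count is `channels * H_out * W_out`. A quick check for the two-conv branch (a standalone sketch, not the full network above):

```python
import torch
import torch.nn as nn

# Two 3x3 convs (stride 1, no padding): each shrinks H and W by 2,
# so an n x n input comes out as (n - 4) x (n - 4).
conv = nn.Sequential(nn.Conv2d(1, 16, 3), nn.Conv2d(16, 16, 3))

for n in (8, 16):
    x = torch.randn(2, 1, n, n)
    k = conv(x).flatten(1).size(1)  # 16 * (n - 4)**2
    print(n, k)                     # 8 -> 256, 16 -> 2304
```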

PS: k is the number of flattened features coming out of the last convolutional layer (channels × height × width).

and my input has the form [128, 1, n, m], where n and m are powers of 2 greater than 2 and at most 16, i.e., n, m ∈ {4, 8, 16}. Any idea how to change the layer's input size with each pass without losing the weights? If I simply define a new layer (i.e., self.fc = nn.Linear(k, k//2) after calculating k), its weights are re-initialized every time and training doesn't converge.
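To show concretely what I mean by "losing the weights", here is a minimal standalone sketch: reassigning the attribute creates a brand-new, randomly initialized layer, and an optimizer built earlier still tracks the old parameter tensors.

```python
import torch
import torch.nn as nn

fc = nn.Linear(16, 8)
opt = torch.optim.SGD(fc.parameters(), lr=0.1)
old_weight = fc.weight

# Re-defining the layer creates fresh, randomly initialized parameters...
fc = nn.Linear(32, 8)

# ...and the optimizer still references the old (now orphaned) tensor.
assert fc.weight is not old_weight
assert opt.param_groups[0]["params"][0] is old_weight
```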

Thanks