How to dynamically change the dimensions of a layer in the forward pass without losing the weights?

Hi all,

I am trying to calculate the in_features of a linear layer from the output of the preceding convolutional layer. This calculation depends on the input size: for each input size the output shape of the conv layers will be different. Below is my network:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, k=16):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=(3, 3))
        self.act1 = nn.ReLU()
        self.conv2 = nn.Conv2d(16, 16, kernel_size=(3, 3))
        self.act2 = nn.ReLU()
        self.conv3 = nn.Conv2d(16, 16, kernel_size=(3, 3))
        self.act3 = nn.ReLU()
        self.flat = nn.Flatten()
        self.fc = nn.Linear(k, k // 2)  # k is to be calculated with each step
        self.act4 = nn.ReLU()
        self.fc2 = nn.Linear(k // 2, 1)

    def forward(self, x):
        # choose the number of conv layers based on the spatial size of the input
        if x.size(-1) <= 4 or x.size(-2) <= 4:
            x = self.act1(self.conv1(x))
        elif x.size(-1) >= 32 and x.size(-2) >= 32:
            x = self.act1(self.conv1(x))
            x = self.act2(self.conv2(x))
            x = self.act3(self.conv3(x))
        else:
            x = self.act1(self.conv1(x))
            x = self.act2(self.conv2(x))

        x = self.flat(x)  # [batch, k]
        # here the dimension should be calculated
        # and here the new layer dimensions should be updated
        x = self.act4(self.fc(x))
        x = self.fc2(x)
        return x

PS: k is the number of output neurons in the last convolutional layer, and my input has the shape [128, 1, n, m], where n and m are powers of 2 with 2 < n, m <= 16, i.e., n, m = 4, 8, 16. Any idea how to change the in_features of the linear layer with each pass without losing its weights? If I simply define a new layer after calculating k (i.e., self.fc = nn.Linear(k, k // 2)), the gradients won't converge.

Thanks

The weight matrix of a linear layer has a predefined shape, and changing it by initializing a new module won't work, since you would lose the already trained parameters. You could try to define a custom weight nn.Parameter and either slice it or stack it with a new parameter. However, this approach would also train only a subset of the parameters if the input is smaller than the largest expected shape.
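E.g. a rough sketch of the slicing idea (SlicedLinear and max_in_features are made-up names for this example, not an existing PyTorch module):

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlicedLinear(nn.Module):
    # holds the weight for the largest expected number of input features
    # and slices it to the actual feature size in each forward pass
    def __init__(self, max_in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, max_in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))

    def forward(self, x):
        in_features = x.size(-1)
        # only the first in_features columns receive gradients for this input
        return F.linear(x, self.weight[:, :in_features], self.bias)

As said, for smaller inputs only the sliced part of the weight would be updated, so the remaining columns would stay untrained.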
The common approach is to use an adaptive pooling layer after the last conv layer, which allows you to define the output shape and thus also the in_features of the first linear layer.
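A rough sketch of this approach (the pooled size of (2, 2) is an arbitrary choice for this example):

import torch
import torch.nn as nn

class PooledNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3)
        self.act1 = nn.ReLU()
        # pools to a fixed spatial size regardless of the input resolution
        self.pool = nn.AdaptiveAvgPool2d((2, 2))
        self.flat = nn.Flatten()
        # in_features is now fixed: 16 channels * 2 * 2
        self.fc = nn.Linear(16 * 2 * 2, 1)

    def forward(self, x):
        x = self.act1(self.conv1(x))
        x = self.pool(x)
        x = self.flat(x)
        return self.fc(x)

model = PooledNet()
out = model(torch.randn(128, 1, 8, 8))  # also works for 4x4 or 16x16 inputs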


Thanks for the reply.

If I use pooling, don't I need to define the output size manually? If so, this is not what I want. I want to extract the output size from the last conv layer and use it in the fc layer. This is the description of the network that I am trying to realise: "The convolutional layers are followed by two fully connected layers, with k/2 and 1 output neurons, respectively, where k is the number of output neurons in the last convolutional layer."
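In other words, I can already read k off the conv output at runtime, e.g. something like:

x = self.flat(x)   # shape [batch, k]
k = x.size(1)      # k = channels * height * width of the last conv output

The part I am missing is how to feed this varying k into nn.Linear(k, k // 2) without re-initializing the layer (and losing its weights) for every new input size.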