2D conv and linear activation function


Hi all,
I’m pretty new to PyTorch, so I apologize if the question is very basic.

I have a model where, for each layer, I set the number of features, but the input image size is not fixed (it can change between training runs).

The last layer of my model is a 2D convolution that converts n input features to 1 value per pixel. To do this I would use a linear activation function.

The question is:
how do I dynamically set the number of input/output features of a linear layer, given that the input image size can change between training runs and only the number of features is fixed?

Thanks a lot.


(Juan F Montesinos) #2

The output size can be computed. You just need to take the output-size formula and evaluate it when you create your model. Check the convolution docs.
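The formula from the Conv2d docs can be sketched as a small helper (assuming square inputs; parameter names follow the docs):

```python
def conv2d_out_size(in_size, kernel_size, stride=1, padding=0, dilation=1):
    # Output spatial size of a Conv2d, per the formula in the PyTorch docs:
    # out = floor((in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)
    return (in_size + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

# e.g. a 3x3 conv with padding 1 keeps a 512x512 image at 512x512
print(conv2d_out_size(512, kernel_size=3, padding=1))  # 512
# and a 1x1 conv never changes the spatial size
print(conv2d_out_size(512, kernel_size=1))             # 512
```

You can evaluate this once in your model's `__init__` for whatever input size the current training run uses.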

(balamurali) #3

In my opinion, it is better to have a fixed input shape. If the layers you mention are convolution layers, then they can generally handle multiple input shapes. Can you provide more information on this question? The convolution parameters are fixed and do not depend on the input shape.


thanks for replying.

I use the same code to train different models over different sets of images having different shapes (between datasets, of course).

So, if the final 2D convolution outputs 32 features and the image has shape 512x512, do I have to initialize the linear module with 32x512x512 input features and 1x512x512 output features?
Do I also need a “view” to reshape the output back to 512x512?

What if I do not set any activation function for a 2D convolution?
What is the default one?

Thanks a lot.


(balamurali) #5

Can you share the architecture? The linear module has only two parameters: the input and output dimensions. Ideally the fully connected (linear) layer will not have an input dimension of 32x512x512 (as a single number), because that number is huge. So repeated convolution and max pooling are usually carried out before using a linear layer. The view command is used to flatten the convolutional feature maps into a single dimension while handling the batch size: if it is used, a feature map of shape (4x32x512x512) becomes (4x(32x512x512)). Here 4 denotes the batch size.
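The flattening described above can be sketched with `view`, keeping the batch dimension and collapsing the rest:

```python
import torch

x = torch.randn(4, 32, 512, 512)   # (batch, channels, H, W)
flat = x.view(x.size(0), -1)       # keep the batch dim, flatten the rest
print(flat.shape)                  # (4, 8388608), i.e. (4, 32*512*512)
```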


The class of my last convolution layer is this (basically I use it to convert an n-feature layer into a single-feature (intensity) image):

import torch.nn as nn

class SingleConv1x1(nn.Module):
    def __init__(self, in_feat, dropout):
        super(SingleConv1x1, self).__init__()
        # 1x1 convolution mapping in_feat channels to a single channel,
        # followed by dropout (kernel_size=1 assumed from the class name)
        self.conv1 = nn.Sequential(nn.Conv2d(in_feat, 1, kernel_size=1),
                                   nn.Dropout2d(dropout))

    def forward(self, inputs):
        outputs = self.conv1(inputs)
        return outputs

After the dropout I would put the linear activation function.
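For reference, a 1x1 Conv2d with nothing after it already *is* the “linear activation”: each output pixel is just a linear combination w·x + b over the input channels at that pixel. A minimal check, assuming 32 input features:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(32, 1, kernel_size=1)  # n features -> 1 value per pixel

x = torch.randn(2, 32, 64, 64)
y = conv(x)                             # no activation: raw linear output

# Recompute each output pixel by hand as w . x + b over the 32 channels
w = conv.weight.view(1, 32)             # (out_channels=1, in_channels=32)
b = conv.bias
manual = (x.permute(0, 2, 3, 1) @ w.t()).permute(0, 3, 1, 2) + b.view(1, 1, 1, 1)
print(torch.allclose(y, manual, atol=1e-5))  # True
```

So no extra activation module is needed; the dropout can simply follow the conv, as in the class above.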

Thank you.



Any hint?