Some confusions about nn.Linear that bothered me ;-(

The Linear layer requires a 2D tensor. Does that mean I have to use the ‘view’ function to reshape the previous output every time I call it?

But in some experiments BN and ReLU are both needed afterwards. They both seem to require 4D input, so I would have to call a reshape function again and again…

And I can’t find a corresponding reshape module for nn.Sequential(). Is there any plan to add one?

What about BatchNorm1d (ref.)?
And of course, a Linear module is simply an affine transformation, hence it requires vectors (1D) or a stack of vectors, i.e. matrices (2D).
ReLU() is a point-wise operator, so it should work with any input dimensionality.
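A small sketch of the point above: BatchNorm1d works directly on the 2D output of a Linear layer, and ReLU applies element-wise to any shape, so no reshaping back to 4D is needed. The layer sizes here are just illustrative.

```python
import torch
import torch.nn as nn

batch, in_features = 8, 32  # illustrative sizes

# Linear expects (batch, in_features); BatchNorm1d normalizes the
# feature dimension of a 2D input, so it composes directly with Linear.
block = nn.Sequential(
    nn.Linear(in_features, 16),
    nn.BatchNorm1d(16),
    nn.ReLU(),
)

x = torch.randn(batch, in_features)
out = block(x)
print(out.shape)  # torch.Size([8, 16])

# ReLU is point-wise, so it also accepts 4D conv-style tensors as-is.
y = torch.randn(2, 3, 4, 5)
print(nn.ReLU()(y).shape)  # torch.Size([2, 3, 4, 5])
```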

No, we don’t plan to add any modules for reshaping that could be put into a Sequential. You might want to take a look at how torchvision models are implemented.
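For reference, the usual pattern in torchvision-style models is to do the flatten inside forward() rather than inside a Sequential. A minimal sketch (the class name and layer sizes are made up for illustration):

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Illustrative model: conv features, then flatten, then Linear."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # 8 channels * 4 * 4 spatial positions for a 4x4 input
        self.classifier = nn.Linear(8 * 4 * 4, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)  # flatten to (batch, features) before Linear
        return self.classifier(x)

net = SmallNet()
out = net(torch.randn(2, 3, 4, 4))
print(out.shape)  # torch.Size([2, 10])
```

The view call keeps the batch dimension and collapses everything else, which is exactly the reshape the question was asking about.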

Thank you for replying! I’m sorry for not noticing BatchNorm1d in the documentation. Now my confusion is completely resolved~


That’s reasonable enough~ Thanks for your reply!