Given a network architecture, what are the possible ways to define the fully connected layer fc1 so that it has a generalized structure such as nn.Linear(size_of_previous_layer, 50)?
The main issue arises from x = F.relu(self.fc1(x)) in the forward function. After flattening, I need to add several dense layers. But to my understanding, self.fc1 must be initialized with a size (calculated from the previous layers). How can I declare the self.fc1 layer in a generalized manner?
My Thought:
To get the size, I can calculate the output size of each convolutional layer, and since I have just 3, it is feasible. But in the case of n layers, how can you get the output size from the final convolutional layer?
You could put the kernel sizes that will be used to initialize the Conv layers in a list.
Then you could write a small function that calculates the output size given that list and the input size. The number of channels is given by the last Conv layer's out_channels.
Maybe I am missing something here, but wouldn't @lfolle's solution work for n convolutional layers? You put the n kernel sizes, n padding sizes, n dilations and n strides in lists and then calculate in a function what the output size would be after the n layers. Then you can just call that function in __init__.
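That approach could be sketched roughly as follows (calc_out_conv_layers is named after the helper mentioned later in this thread; the body here is my own assumption, applying the output-size formula from the Conv2d documentation once per layer, with square kernels for brevity):

```python
import math

def calc_out_conv_layers(h, w, kernels, paddings, dilations, strides):
    """Track the spatial size through a stack of Conv2d layers.

    kernels, paddings, dilations and strides are equal-length lists
    with one entry per convolutional layer.
    """
    for k, p, d, s in zip(kernels, paddings, dilations, strides):
        h = math.floor((h + 2 * p - d * (k - 1) - 1) / s + 1)
        w = math.floor((w + 2 * p - d * (k - 1) - 1) / s + 1)
    return h, w

# e.g. three 3x3 convs, no padding, dilation 1, stride 1, on a 28x28 input:
# each layer shrinks the spatial size by 2.
print(calc_out_conv_layers(28, 28, [3, 3, 3], [0, 0, 0], [1, 1, 1], [1, 1, 1]))
# -> (22, 22)
```

The flattened feature count for fc1 would then be out_channels * h * w.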
Can you show me the contents of the function calc_out_conv_layers?
To my understanding, the function calculates the output size from the immediately preceding convolutional layer. This means self.calc_out_conv_layers would have to be called every time I declare a convolutional layer, which is not what I want.
In addition, the forward function has x = F.relu(F.max_pool2d(self.conv1(x), 2)), so the calc_out_conv_layers function needs to be adapted manually depending on whether or not I'm using pooling.
You would call this function only once, after you have finished adding all the kernels, paddings, etc. to the appropriate lists. But yes, you would need to modify it to also take pooling into account. A more robust approach: instead of calling F.max_pool2d in the forward function, register the pooling as modules in __init__ with nn.MaxPool2d. Then you can iterate over all the registered modules and make the necessary calculations:
for m in self.modules():
    if isinstance(m, nn.Conv2d):
        pass  # do some calculations
    elif isinstance(m, nn.MaxPool2d):
        pass  # do more calculations
I think what you are looking for is a function that does not exist. There's no function that automatically tells you the output shape of the last layer in a sequence of convolutional layers, at least not one that I know of. You always have to do the calculations on the side.
Looks good. This was my first thought for doing the calculation, but I anticipated that there might be a better, more dynamic way. Now, I reckon, I have to continue treating the function as a layer, as suggested.
I still have one doubt, though, and would love to ask you: is there a mathematical/conceptual way to do this (quoted), or do we just have to calculate the output size layer by layer?
As far as I know, the mathematical/conceptual way of doing it is layer by layer. This is because different input image sizes produce different output shapes, i.e. the output shape for an input of size (3, 128, 128) will differ from that for an input of size (3, 1024, 1024). There is no generalization, because the input size is always a variable. But if you find a way, I would also like to know it.
@aponcedeleonch's solution is probably the best way to do this.
As another approach, you could also feed an example tensor into the network and print the shape of the intermediate tensor before passing it to the fully-connected layer.
My perhaps inefficient but general (and correct by construction :)) way of doing it is to start with an example input and, when building up the model in __init__(), feed that input through each layer in turn, get the size of the output, use that to initialize the next layer, and repeat until done.
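A minimal sketch of that build-as-you-go idea (the layer sizes here are made up for illustration): an example tensor is threaded through each layer as it is constructed, so every layer's input size comes directly from the previous output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, input_shape=(1, 28, 28)):
        super().__init__()
        # Example input; its running shape sizes each new layer.
        x = torch.zeros(1, *input_shape)
        self.conv1 = nn.Conv2d(x.shape[1], 16, 3)
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        self.conv2 = nn.Conv2d(x.shape[1], 32, 3)
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        # fc1's in_features is measured, never hand-calculated.
        self.fc1 = nn.Linear(x.flatten(1).shape[1], 50)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        return F.relu(self.fc1(x.flatten(1)))

net = Net()
print(net.fc1.in_features)  # -> 800 for a 1x28x28 input
```

Changing the input shape or any conv hyperparameter then requires no manual size bookkeeping, at the cost of one throwaway forward pass at construction time.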
Yes, that was the question: how to get the output size in a generalized manner. @aponcedeleonch's solution is the best known method right now, i.e. to calculate layer by layer.
Can a PyTorch insider please advise why this function is not built into the torch.nn layer classes, or at least implemented in a container such as Sequential? In TensorFlow, you only need to specify the input shape of the first layer; TF then automatically figures out the output shape of each layer and passes that information to the input of the next. This feature is critical for training a complex network, as otherwise you have to check/update each layer whenever you change the feature vector dimension. Thanks
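For what it's worth, more recent PyTorch releases (1.8+) do ship shape-inferring "lazy" variants such as nn.LazyLinear, which defer choosing in_features until the first forward pass, much like the TensorFlow behavior described above:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3),   # 28x28 -> 26x26
    nn.MaxPool2d(2),      # 26x26 -> 13x13
    nn.Flatten(),
    nn.LazyLinear(50),    # in_features inferred on first forward
)
out = model(torch.zeros(1, 1, 28, 28))
print(model[3].in_features)  # -> 8 * 13 * 13 = 1352
```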