Hi!
I have a sequential block with convolutions and max pooling, followed by a linear layer.
The problem is that if I change the settings of the convolutions or the max pooling, the input dimension of the linear layer changes as well.
What would be a good way to automatically determine the right dimension, so the linear layer always works?
Thanks very much!
P.S. I tried LazyLinear, but because its weights are uninitialized, I can't initialize them with my weight-initialization function. So I'm trying to use nn.Linear, which requires knowing the input dimension before creating it.
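For context, here is a minimal sketch of the setup (all layer settings and shapes are made up for illustration):

import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
)

# with 32x32 inputs the flattened size is 32 * 6 * 6 = 1152, but changing
# any kernel size or pooling window above silently changes this number
classifier = nn.Linear(32 * 6 * 6, 10)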
You could still use the lazy layer and initialize it afterwards, once the parameters have been created.
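As a minimal sketch of that idea (the shapes and the init_weights function here are made up for illustration): one dummy forward pass materializes the lazy parameters, after which they can be initialized as usual:

import torch
import torch.nn as nn

def init_weights(m):
    # placeholder init function; substitute your own
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(10),  # in_features is inferred at the first forward pass
)

model(torch.randn(1, 3, 32, 32))  # dummy pass creates the real parameters
model.apply(init_weights)         # now the weights exist and can be initialized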
Thanks a lot for your help, ptrblck! I see this suggestion in the error message as well. I think I will have an initialize method like this (I'm not sure about the call to forward(), since it's supposed to happen through the nn.Module __call__):
# inside the class hosting the model
def initialize(self, dummy_input):
    self.forward(model)  # perhaps I should move all the computation into a compute() and have forward() just call it?
    self.network_1.apply(fn)  # I have several networks that can't go under one nn.Sequential because I'm inserting extra calculation steps between their usage
    self.network_2.apply(fn)
    ...
Yes, having an internal initialize method should work, but I assume you want to pass the dummy_input to the model instead of the undefined model variable.
So something like this?
def initialize(self, dummy_input):
    self(dummy_input)
    ...
You don't need to call self.forward and can just pass the input to self, in the same way you would use model(input) instead of model.forward(input), although it will most likely not matter here.
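The difference shows up once hooks are registered: nn.Module.__call__ runs any registered hooks around forward(), while calling forward() directly skips them. A small sketch (the print hook is just for illustration):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
model.register_forward_hook(lambda mod, inp, out: print("hook ran"))

x = torch.randn(1, 4)
model(x)          # prints "hook ran": __call__ wraps forward() with the hook machinery
model.forward(x)  # silent: the hook is bypassed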
Thank you! I know what to do now!