Hi guys!
So I just found out the hard way that if I do something like
self.conv1 = nn.Conv2d(3, 6, 3)
then the kernel weights are registered as parameters (visible through net.parameters()), so the optimizer does update them. But doing the following
self.convNets.append(nn.Conv2d(3, 6, 3))
or, in other words, keeping your Conv2d layers in a plain Python list inside your nn.Module,
does not register their weights, so the optimizer does not change those weights at all.
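For reference, here is a minimal sketch of the two patterns I mean (the class and attribute names are just placeholders):

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # assigned directly as an attribute -> registered as a submodule
        self.conv1 = nn.Conv2d(3, 6, 3)
        # stored in a plain Python list -> NOT registered
        self.convNets = [nn.Conv2d(3, 6, 3)]

net = Net()
# only conv1's weight and bias show up: 2 parameter tensors
print(len(list(net.parameters())))  # 2
```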
My question is: what's the best way to make sure those weights do get optimized? Looking at the source code, I can see the register_parameter() method on nn.Module, but is that the best way?
I was hoping that, since I use those weights in my forward() pass, the framework would optimize them automatically, tbh…
Thanks in advance,
Yoni.