Hi,
Sorry if this has been answered before; it’s a bit hard to search for.
If I write a class where I instantiate torch.nn modules inside __init__ and make them attributes of self, then use those same layers in a Sequential model, is that a silent error?
Like this, I mean:
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # channel counts / kernel sizes are just placeholders
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3)
        self.conv2 = nn.Conv2d(64, 64, kernel_size=3)
        # the same two module instances, reused inside a Sequential
        self.net = nn.Sequential(self.conv1, self.conv2)

    def forward(self, x):
        return self.net(x)
Would parameters get updated twice somehow, or something like that?
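To make it concrete, this is the kind of check I was planning to run against the class above (just a sketch; the expectations in the comments are my guesses, and whether they hold is basically what I'm asking):

model = MyModule()

# Each conv has a weight and a bias, so I'd hope to see 4 parameter tensors here,
# not 8, even though the layers are reachable both as attributes and through self.net.
print(len(list(model.parameters())))

# I assume the Sequential holds the very same module objects, not copies.
print(model.conv1 is model.net[0])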
I realized I had done this without thinking about it when torchinfo showed me double the number of parameters for a layer: it counted them once for the self.conv attribute and again for the same module instance inside the Sequential.
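For reference, this is roughly how I'm comparing the two counts (also just a sketch; the input_size is an arbitrary placeholder, and summary() is how I'm invoking torchinfo):

from torchinfo import summary

model = MyModule()

# Plain PyTorch total, where each parameter tensor should only be counted once.
print(sum(p.numel() for p in model.parameters()))

# torchinfo's per-layer listing, which is where the doubled count showed up for me.
summary(model, input_size=(1, 3, 32, 32))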
I’m guessing the answer is “no”, since updating the same parameters twice could cause some unpleasant surprises, but I just want to be sure.
Thanks!
edit: this looks close to what I’m asking
I’m not sure whether that post is saying that what I’ve done above is actually a problem, though.