```python
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models

# d1 (the input tensor) and output1 = model_1(d1) come from earlier in the post
model_1 = models.resnet18()
children = []
for item in model_1.children():
    children.append(item)

class CHILD(torch.nn.Module):
    def __init__(self):
        super(CHILD, self).__init__()
        self.models = nn.ModuleList(children).eval()

    def forward(self, x):
        for ii, model in enumerate(self.models):
            print(np.shape(x))
            print(model)
            if ii == 9:
                # child 9 of resnet18 is the final fc layer, so flatten first
                x = torch.flatten(x, 0)
            x = model(x)
        return x

MOD = CHILD()
output2 = MOD(d1)
```
I observed that output1 is not the same as output2.
I guess this is because of the residual connections, which would be omitted in the second case. If I am right, please suggest a way to find the positions of the residual connections (they are not clear to me from the model's definition), so that I can match the second case with the actual model.
You will get the same output if both approaches use model.train() or model.eval().
Currently you are updating the running statistics of all batchnorm layers in your first approach, while the second uses these (updated) stats by calling eval().
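E.g. this small check (a minimal sketch reusing model_1, CHILD, and the input d1 from your code) should show matching outputs once both models are in the same mode:

```python
model_1.eval()   # batchnorm layers now use their stored running stats
MOD = CHILD()
MOD.eval()       # put the rebuilt model into the same mode

with torch.no_grad():
    output1 = model_1(d1)
    output2 = MOD(d1)

# with both models in eval() mode the outputs should match
print(torch.allclose(output1.flatten(), output2.flatten()))
```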
In a similar case, I don't understand how these two approaches are different:
Method (1):
```python
class new_m(torch.nn.Module):
    def __init__(self):
        super(new_m, self).__init__()
        # args comes from earlier in the post
        Model = models.mobilenet_v2(args).cuda()
        self.network1 = Model.features
        self.network2 = Model.classifier

    def forward(self, x):
        # iterate through the feature extractor layer by layer
        for ii, model in enumerate(self.network1):
            x = model(x)
        x = self.network2(x)
        return x
```
gives the correct result,
whereas
Method (2):
```python
class new_m(torch.nn.Module):
    def __init__(self):
        super(new_m, self).__init__()
        Model = models.mobilenet_v2(args).cuda()
        self.network1 = list(Model.features)  # plain Python list instead of a module
        self.network2 = Model.classifier

    def forward(self, x):
        for ii, model in enumerate(self.network1):
            x = model(x)
        x = self.network2(x)
        return x
```
does not give the correct result, even though the only change is the list() around Model.features.
Modules or parameters stored in a plain Python list or dict won't be registered properly (and will thus be missing from e.g. model.parameters()).
Use nn.ModuleList or nn.ModuleDict instead.
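Here is a minimal, self-contained sketch of the difference (the class names are made up for illustration):

```python
import torch.nn as nn

class PlainListNet(nn.Module):
    def __init__(self):
        super().__init__()
        # plain Python list: the layers are NOT registered as submodules
        self.layers = [nn.Linear(4, 4), nn.Linear(4, 2)]

class ModuleListNet(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList: the layers ARE registered as submodules
        self.layers = nn.ModuleList([nn.Linear(4, 4), nn.Linear(4, 2)])

print(len(list(PlainListNet().parameters())))   # 0 -> optimizer won't see them
print(len(list(ModuleListNet().parameters())))  # 4 -> weights and biases of both layers
```

This also explains the wrong result above: calls such as .eval() on the parent module never reach layers hidden inside a plain list, so their batchnorm layers stay in training mode.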