What is the difference between these two methods?

Why does the output vary between these two executions?

Execution 1:

import torch
import torch.nn as nn
import torchvision.models as models

input = torch.rand(2, 3, 32, 32)
resnet18 = models.resnet18(pretrained=True)
output1 = resnet18(input)

Execution 2:

model_1 = resnet18
children = []
for child in model_1.children():
    children.append(child)

class CHILD(torch.nn.Module):
    def __init__(self):
        super(CHILD, self).__init__()
        self.models = nn.ModuleList(children).eval()

    def forward(self, x):
        for ii, model in enumerate(self.models):
            print(x.shape)
            print(model)
            if ii == 9:                  # flatten before the final fc layer
                x = torch.flatten(x, 1)  # keep the batch dimension
            x = model(x)
        return x

MOD = CHILD()
output2 = MOD(input)
I observed that output1 is not the same as output2.
I guess it is because of the residual connections, which have been omitted in the second case. If that is correct, please suggest a way to find the positions of the residual connections (it is not clear to me from the model's definition), so that I can match the second case with the original one.

You will get the same output if both approaches use model.train() or model.eval().
Currently you are updating the running statistics of all batchnorm layers in your first approach, while the second uses these (updated) stats by calling eval().
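A minimal sketch of that train/eval difference: a BatchNorm layer updates its running statistics on every forward pass in train mode, but leaves them frozen in eval mode.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
x = torch.rand(2, 3, 8, 8)

before = bn.running_mean.clone()
bn.train()
bn(x)                                    # running_mean is updated here
after_train = bn.running_mean.clone()

bn.eval()
bn(x)                                    # running stats are frozen in eval mode
after_eval = bn.running_mean.clone()

print(torch.equal(before, after_train))      # False: train() changed the stats
print(torch.equal(after_train, after_eval))  # True: eval() left them alone
```

So calling the model twice in train mode (as in the first approach) leaves different running stats behind than calling it once and then evaluating with eval(), which explains the mismatch between output1 and output2.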

Thank you, yes, this worked. Batchnorm was causing the problem, whereas I had only explicitly removed dropout in the code.
Problem Solved.

In a similar case, I don't see how these two are different:
Method (1):

class new_m(torch.nn.Module):
    def __init__(self):
        super(new_m, self).__init__()
        Model = models.mobilenet_v2(args).cuda()
        self.network1 = Model.features
        self.network2 = Model.classifier

    def forward(self, x):
        for ii, model in enumerate(self.network1):
            x = model(x)
        x = x.mean([2, 3])  # global average pool, as in torchvision's MobileNetV2.forward
        x = self.network2(x)
        return x

gives the correct result, whereas
Method (2):

class new_m(torch.nn.Module):
    def __init__(self):
        super(new_m, self).__init__()
        Model = models.mobilenet_v2(args).cuda()
        self.network1 = list(Model.features)
        self.network2 = Model.classifier

    def forward(self, x):
        for ii, model in enumerate(self.network1):
            x = model(x)
        x = x.mean([2, 3])  # global average pool, as in torchvision's MobileNetV2.forward
        x = self.network2(x)
        return x

does not give the correct result, even though the only change is wrapping Model.features in a list.

Why does putting the layers in a plain list create a problem?

Modules or parameters stored in a plain Python list or dict won't be registered properly as submodules (and will thus be missing from e.g. model.parameters()).
Use nn.ModuleList or nn.ModuleDict instead.
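A small sketch of the difference: modules held in a plain Python list are invisible to parameters() (and hence to .cuda(), .train(), the optimizer, etc.), while nn.ModuleList registers them properly.

```python
import torch.nn as nn

class PlainList(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = [nn.Linear(4, 4)]                 # plain list: NOT registered

class RegisteredList(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4)])  # properly registered

print(len(list(PlainList().parameters())))       # 0: the Linear is invisible
print(len(list(RegisteredList().parameters())))  # 2: weight and bias are found
```

This is also why list(Model.features) breaks the second method above: .cuda() no longer reaches those layers, so they stay on the CPU with unmanaged parameters.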


Yes, it worked and improved my understanding.
Thank you so much, that's a great help.