The result of layer-wise execution of PyTorch code is strange

Currently I am trying to use MobileNetV2 as in the code below.

import numpy as np
import torch
from torchvision import models

# the model expects a float tensor, not a numpy array
input_data = torch.from_numpy(
    np.random.uniform(-1, 1, size=(1, 3, 224, 224))
).float()
layer = models.mobilenet_v2(pretrained=True).features[0:4]
layer.eval()
out_original = layer(input_data)

In addition, to check the intermediate result values, I split the layers and ran inference as follows.

L_zero_from_two = layer[0:3]
output_middle = L_zero_from_two(input_data)

L3 = layer[3]

output = L3.conv[0](output_middle)
output = L3.conv[1](output)
output = L3.conv[2](output)
output = L3.conv[3](output)

output_test = L3(output_middle)
print(output[0][0][0][0], " : ", output_test[0][0][0][0] )

The output result is as follows.
tensor(0.8066, grad_fn=<SelectBackward>) : tensor(0.9868, grad_fn=<SelectBackward>)

The results are not the same. I think they should be identical, but I don't understand why they are not.
Can anyone tell me which part is the problem?

Thank you!

The L3 module is not a pure nn.Sequential module and uses a skip connection if use_res_connect is set, as seen here.
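The forward pass of the inverted-residual block behaves roughly like this simplified sketch (a paraphrase for illustration, not the exact torchvision source; `InvertedResidualSketch` is a hypothetical name):

import torch
import torch.nn as nn

class InvertedResidualSketch(nn.Module):
    # simplified sketch of the InvertedResidual forward logic
    def __init__(self, conv, use_res_connect):
        super().__init__()
        self.conv = conv
        self.use_res_connect = use_res_connect

    def forward(self, x):
        if self.use_res_connect:
            # skip connection: add the block input back onto the conv output
            return x + self.conv(x)
        return self.conv(x)

Calling `L3.conv[...]` submodules manually only reproduces the `self.conv(x)` part, which is why the residual addition has to be applied by hand.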

Adding this skip connection to your code yields the same outputs:

import torch
from torchvision import models

input_data = torch.randn(1, 3, 224, 224)
layer = models.mobilenet_v2(pretrained=True).features[0:4]
layer.eval()
out_original = layer(input_data)

L_zero_from_two = layer[0:3]
output_middle = L_zero_from_two(input_data)

L3 = layer[3]

output = L3.conv[0](output_middle)
output = L3.conv[1](output)
output = L3.conv[2](output)
output = L3.conv[3](output)
if L3.use_res_connect:
    output = output + output_middle

output_test = L3(output_middle)
print((output - output_test).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)
print((out_original - output_test).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)
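As an alternative to splitting the model manually, intermediate activations can also be captured with forward hooks, which works regardless of skip connections inside a block. A minimal sketch (random weights are used here since the values don't matter for demonstrating the hook; `save_activation` is a hypothetical helper name):

import torch
from torchvision import models

model = models.mobilenet_v2(pretrained=False).features[0:4]
model.eval()

activations = {}

def save_activation(name):
    # returns a hook that stores the module's output under `name`
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

# hook the same InvertedResidual block as in the thread
model[3].register_forward_hook(save_activation("block3"))

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)
    out = model(x)

# block3 is the last layer here, so its hooked output equals the model output
print((activations["block3"] - out).abs().max())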

Thank you!! It solved my problem!
Thank you ptrblck!