Forward function converts negative numbers to zero

Hi,

I have this code snippet. I have divided AlexNet into 3 sub-modules because I want to take the output of each sub-module and feed it to another network later.

import torch.nn as nn
from torchvision import models

class alex(nn.Module):
    def __init__(self):
        super(alex, self).__init__()
        self.net = models.alexnet(pretrained=True)
        for param in self.net.parameters():
            param.requires_grad = False
        self.feat_list = list(self.net.features.children())
        self.feat_model = nn.Sequential(*self.feat_list)

        self.c1 = list(self.net.classifier.children())[:-2]
        self.c1.pop(2)
        self.c1.insert(2, nn.ReLU(0.2))
        self.sub_cl1 = nn.Sequential(*self.c1)
        self.c2 = list(self.net.classifier.children())[5:7]
        self.sub_cl2 = nn.Sequential(*self.c2)

    def forward(self, x):
        x_feat = self.feat_model(x)
        x_feat = self.net.avgpool(x_feat)
        x_feat = x_feat.view(-1, x_feat.size(1) * x_feat.size(2) * x_feat.size(3))
        x_sub_classifier1 = self.sub_cl1(x_feat)   # ===>
        model = self.sub_cl2(x_sub_classifier1)    # ===>
        return model, x_feat, x_sub_classifier1

The problem: when I debug the code at runtime, I can inspect the values of these variables:
x_feat
==> x_sub_classifier1
model
Everything seems fine, but going one step from x_sub_classifier1 to model, all negative values in x_sub_classifier1 become zero (note that I take the output right after the Linear layer, so there is no ReLU after it that would zero out the negative values). sub_cl1 has this sub-architecture:

(sub_cl1): Sequential(
(0): Dropout(p=0.5, inplace=False)
(1): Linear(in_features=9216, out_features=4096, bias=True)
(2): ReLU(inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Linear(in_features=4096, out_features=4096, bias=True)
)

Am I missing something, or is there something I don't understand?

self.sub_cl2 uses an inplace ReLU as its first layer, which will (as the name suggests) apply the ReLU in-place on the input tensor, and will thus modify x_sub_classifier1.
You could set the inplace argument to False, or alternatively clone x_sub_classifier1 before passing it to self.sub_cl2.
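
For example, a minimal sketch of both options (assuming sub_cl2[0] is the in-place ReLU taken from AlexNet's classifier, as in your snippet):

# Option 1: swap the in-place ReLU for a regular one when building the module
self.sub_cl2[0] = nn.ReLU(inplace=False)

# Option 2: keep sub_cl2 as-is, but feed it a copy so the original
# x_sub_classifier1 tensor is left untouched
model = self.sub_cl2(x_sub_classifier1.clone())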

PS: Unrelated to your current issue, but you should use nn.ModuleList instead of Python lists, as the former approach will make sure to properly register all parameters.
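
As a small illustration (the PlainList / Registered class names are just for this example): submodules stored in a plain Python list are invisible to parameters() and state_dict(), while nn.ModuleList registers them:

import torch.nn as nn

class PlainList(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = [nn.Linear(4, 4), nn.Linear(4, 4)]   # not registered

class Registered(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4), nn.Linear(4, 4)])  # registered

print(len(list(PlainList().parameters())))   # 0 -> the Linear weights are missing
print(len(list(Registered().parameters())))  # 4 -> two weights + two biases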

Thanks for your reply.
Based on what you have pointed out, is this the right way to rewrite the previous code?

class alex(nn.Module):

    def __init__(self):
        super(alex, self).__init__()
        self.net = models.alexnet(pretrained=True)
        self.feat_model = []
        self.sub_cl1 = []
        self.sub_cl2 = []
        for param in self.net.parameters():
            param.requires_grad = False
        self.feat_model.append(self.net.features)
        self.sub_cl1.append(self.net.classifier[:-2])
        self.sub_cl2.append(self.net.classifier[5:7])
        self.sub_cl2[0][0] = nn.ReLU()

    def forward(self, x):
        for layer in self.feat_model:
            feature = layer(x)
        x_feat = self.net.avgpool(feature)
        x_feat = x_feat.view(-1, x_feat.size(1) * x_feat.size(2) * x_feat.size(3))
        for subm1 in self.sub_cl1:
            x_sub_classifier1 = subm1(x_feat)
        for subm2 in self.sub_cl2:
            model = subm2(x_sub_classifier1)
        return feature, x_sub_classifier1, model
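
As a quick sanity check (just a sketch; the dummy input shape is AlexNet's usual 3x224x224 RGB input):

import torch

net = alex()
net.eval()

x = torch.randn(1, 3, 224, 224)
feature, x_sub_classifier1, out = net(x)

# With the in-place ReLU removed, x_sub_classifier1 should still contain
# negative values after sub_cl2 has been applied.
print((x_sub_classifier1 < 0).any())   # expected: tensor(True)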

Hi @ptrblck, is the code above correct?

The code looks alright for changing the inplace behavior.
Do you still see the issue using this model?

Unrelated to the current issue, but you should use nn.ModuleList instead of Python lists, as the former approach will make sure to properly register all modules inside the model.
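
For instance, keeping the attribute names from the code above, the __init__ could wrap the submodules like this (just the changed lines, as a sketch):

self.feat_model = nn.ModuleList([self.net.features])
self.sub_cl1 = nn.ModuleList([self.net.classifier[:-2]])
self.sub_cl2 = nn.ModuleList([self.net.classifier[5:7]])
self.sub_cl2[0][0] = nn.ReLU()   # the slice is an nn.Sequential, so indexing still works

The rest of forward can stay the same, since nn.ModuleList supports iteration and indexing just like a Python list.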

Thanks @ptrblck!

I'm no longer seeing the problem related to ReLU. I used nn.ModuleList and, yes, it registered all of the layers' parameters.

Thanks for your help!