requires_grad becomes False after pooling layer

Hi,
when running my model in eval mode, I still need gradients in part of it. In the forward method I set the requires_grad flag of the intermediate tensors to True and expect the following modules to calculate and store gradients, but after the max-pooling layer the requires_grad flag becomes False and I don't know why…

        xl1 = self.conv_block1(xf3)
        xl2 = self.conv_block2(xf4)
        xl3 = self.conv_block3(xf5)
        
        # mark the intermediate feature maps as requiring gradients
        for x in [xl1, xl2, xl3]:
            x.requires_grad_(True)
        # register hooks so gradients are stored during backward
        xl1.register_hook(self.save_gradient)
        xl2.register_hook(self.save_gradient)
        xl3.register_hook(self.save_gradient)
        
        xl1 = self.max1(xl1)  # xl1.requires_grad is False here; self.max1 = nn.MaxPool2d(kernel_size=56, stride=56)
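For context, here is a minimal, self-contained sketch of the intended pattern: hook a tensor that requires grad, pool it, and capture the gradient during backward. `save_gradient` is not shown in the post, so this assumes it simply stores the incoming gradient.

```python
import torch
import torch.nn as nn

class GradSaver:
    """Hypothetical stand-in for the post's save_gradient helper."""
    def __init__(self):
        self.gradients = []

    def save_gradient(self, grad):
        # Called during backward with the gradient w.r.t. the hooked tensor
        self.gradients.append(grad)

saver = GradSaver()
pool = nn.MaxPool2d(kernel_size=2, stride=2)

x = torch.randn(1, 3, 4, 4, requires_grad=True)
x.register_hook(saver.save_gradient)

out = pool(x)              # out.requires_grad is True (outside no_grad)
out.sum().backward()       # triggers the hook; gradient has x's shape
```

Outside of a `torch.no_grad()` block, this works as expected: the pooled output still requires grad, and the hook fires once on backward.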

Thanks in advance!

I figured it out: I made the mistake of wrapping this code in a torch.no_grad() block :rofl: