Guided backprop - single ReLU module

Hi, I want to implement guided backpropagation, and my model currently uses torch.nn.functional.relu. It looks like it is not easy to attach a hook to a functional call, so I am wondering whether the calls to F.relu could be replaced by calls to a single nn.ReLU() module that is reused for every layer. The guided backprop hook would then be registered like this:

    def guided_relu_hook(module, grad_in, grad_out):
        # Guided backprop: pass back only the positive gradients.
        return (torch.clamp(grad_in[0], min=0.0),)

    self.model.zero_grad()
    for module in self.model.modules():
        if type(module) == nn.ReLU:
            module.register_backward_hook(guided_relu_hook)

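For reference, here is a minimal sketch of what that replacement could look like (the class names and layer sizes are made up for illustration): each F.relu(x) call in forward becomes a call to an nn.ReLU() submodule, which then shows up in model.modules() and can be hooked.

    import torch.nn as nn
    import torch.nn.functional as F

    class Before(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(8, 8)

        def forward(self, x):
            return F.relu(self.fc1(x))   # functional call: nothing to hook

    class After(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(8, 8)
            self.relu1 = nn.ReLU()       # module: visible to model.modules()

        def forward(self, x):
            return self.relu1(self.fc1(x))
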
Would reusing a single nn.ReLU() module at every layer be a problem? It seems fine to me, but I wanted to ask because I'm not sure how I would verify that it is behaving as expected.
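
Here is a rough sketch of the single reused nn.ReLU() variant, again with made-up layer sizes. Registering the hook once on the shared module and printing from it is one simple way to check that it fires for every place the module is used during backward:

    import torch
    import torch.nn as nn

    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(8, 8)
            self.fc2 = nn.Linear(8, 4)
            self.relu = nn.ReLU()            # one instance, reused at every layer

        def forward(self, x):
            x = self.relu(self.fc1(x))       # first use
            x = self.relu(self.fc2(x))       # second use of the same module
            return x

    def guided_relu_hook(module, grad_in, grad_out):
        # Guided backprop: let only positive gradients flow back through the ReLU.
        print("ReLU backward hook fired")    # crude check that the hook runs per use
        return (torch.clamp(grad_in[0], min=0.0),)

    model = TinyNet()
    for module in model.modules():
        if isinstance(module, nn.ReLU):
            # register_full_backward_hook is the non-deprecated form of
            # register_backward_hook on recent PyTorch versions.
            module.register_full_backward_hook(guided_relu_hook)

    x = torch.randn(1, 8, requires_grad=True)
    model(x).sum().backward()                # the print should appear once per ReLU use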

Thanks

Hi, I also ran into this problem. Have you found a way to do guided backprop with F.relu? Any suggestions or examples would be helpful.

Thanks