What is the equivalent of Caffe's propagate_down: false?

Let’s say I have a layer that takes multiple inputs.
In this layer (e.g. element-wise multiplication) I want to disable backpropagation for one of the inputs.
In Caffe I was able to do this by setting propagate_down: false for that input inside that layer.
How can this be achieved in PyTorch? Thanks a lot in advance.

Suppose you don’t want to backprop through inp2; then you could do:
op = layer(inp1, inp2.detach())
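For example, here is a minimal sketch of that pattern, using element-wise multiplication in place of a generic layer (the tensor shapes are just illustrative):

```python
import torch

inp1 = torch.randn(3, requires_grad=True)
inp2 = torch.randn(3, requires_grad=True)

# Detaching inp2 blocks gradient flow back into it,
# similar to propagate_down: false for that bottom in Caffe.
op = inp1 * inp2.detach()
op.sum().backward()

print(inp1.grad)  # populated: gradient flowed to inp1
print(inp2.grad)  # None: backprop never reached inp2
```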


Thanks. This makes sense, but I was unsure about using detach because of what is said in the following post:

“When a variable is detached, the backward computations will not visit the branches that start from this variable (all the operations done on it).”

If I understand this statement correctly, that is not exactly what I want: I do want to backprop through operations that involve this variable (just not with respect to the variable itself).

With detach, you will not backprop through operations that involve only the detached variable. But if an operation involves both your detached variable and a variable that requires gradients, the operation will still be differentiated, with the detached variable treated as a constant.
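A small numeric check of this behavior (the tensors a and b here are just illustrative values, not from the original example):

```python
import torch

a = torch.tensor([2.0], requires_grad=True)
b = torch.tensor([3.0], requires_grad=True)

# b.detach() participates in the multiplication, but is treated
# as a constant when computing gradients.
out = a * b.detach()
out.backward()

print(a.grad)  # tensor([3.]): d(a*b)/da = b, with b held constant
print(b.grad)  # None: no gradient flows into b
```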


I see. Thanks for the explanation!