# Condition on tensor with requires_grad = True

Hi. Suppose that I have 3 tensors, `a`, `b` and `c`, and a threshold `thresh` and I would like to define the tensor `d` as follows:

```python
d[a > thresh] = b[a > thresh]
d[a < thresh] = c[a < thresh]
```

`a`, `b`, and `c` are, in our case, tensors with `requires_grad=True` that depend on an input `x`.

How can I perform this operation such that the gradient of `d` with respect to `x` is computed correctly?

Basically, suppose I have three functions `f`, `g`, and `h`, each taking an input tensor `x` and outputting another tensor, and I have:

```python
def forward(self, x):
    a = f(x)
    b = g(x)
    c = h(x)

    d[a > thresh] = b[a > thresh]
    d[a < thresh] = c[a < thresh]

    return d
```
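As a side note, an idiomatic way to express this kind of elementwise selection while keeping the autograd graph intact is `torch.where`. Below is a minimal sketch; the concrete `f`, `g`, `h`, and `thresh` are stand-ins I made up for illustration:

```python
import torch

thresh = 0.5

def f(x): return x.abs()   # placeholder for the real f
def g(x): return 2 * x     # placeholder for the real g
def h(x): return x - 1     # placeholder for the real h

def forward(x):
    a, b, c = f(x), g(x), h(x)
    # torch.where selects elementwise between b and c; both branches
    # stay in the graph, so gradients flow to b where a > thresh
    # and to c everywhere else
    return torch.where(a > thresh, b, c)

x = torch.tensor([1.0, 0.2, -2.0, 0.1], requires_grad=True)
d = forward(x)
d.sum().backward()
print(x.grad)  # tensor([2., 1., 2., 1.])
```

With these placeholder functions, the gradient is 2 wherever `b = 2x` was selected and 1 wherever `c = x - 1` was selected, with no custom backward needed.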

I would like `d` to be differentiable with respect to `x` and with respect to the parameters of `f` (for example, `f` could be a convolution), without implementing my own backward function.

How can I do this?

```python
d = (a > thresh).float() * b + (a < thresh).float() * c
```

but I am not sure the gradient will be computed properly by autograd.
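For what it's worth, this masked-sum formulation can be checked directly; here is a small self-contained sketch with made-up values for `a`, `b`, `c`, and `thresh` (note that an element with `a == thresh` would fall in neither mask and get `d = 0`):

```python
import torch

thresh = 0.5
x = torch.tensor([1.0, 0.2, -2.0, 0.1], requires_grad=True)
a, b, c = x.abs(), 2 * x, x - 1  # illustrative choices of a, b, c

# the boolean masks are constants as far as autograd is concerned,
# so gradients flow only through b and c
d = (a > thresh).float() * b + (a < thresh).float() * c
d.sum().backward()
print(x.grad)  # tensor([2., 1., 2., 1.])
```

The gradient comes out as 2 on elements where `b = 2x` was selected and 1 where `c = x - 1` was selected, which is what the selection semantics should give.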

Thanks

Hi,

For `b` and `c`, it will work as you expect with your current code.
The problem is that the way you use `a` is not differentiable: it does not make sense to ask how the output would change if `a` changed very slightly. Almost everywhere, a tiny change in `a` produces no change in the output (a gradient of 0), and at the point where `a` crosses the threshold, the gradient would be infinite because the output jumps discontinuously. So the only gradient you can get for `a` is 0 everywhere, which you can't use to learn anything.
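You can see this in a tiny experiment (values made up for illustration): since the comparison `a > thresh` is never recorded in the autograd graph, `a` ends up with no gradient at all, while `b` and `c` receive the masks as their gradients:

```python
import torch

thresh = 0.5
a = torch.tensor([0.3, 0.9], requires_grad=True)
b = torch.ones(2, requires_grad=True)
c = torch.zeros(2, requires_grad=True)

d = (a > thresh).float() * b + (a < thresh).float() * c
d.sum().backward()

print(a.grad)  # None -- the comparison is not part of the graph
print(b.grad)  # tensor([0., 1.]) -- the mask where a > thresh
print(c.grad)  # tensor([1., 0.]) -- the mask where a < thresh
```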

Actually, the gradient on `a` is not that important; the gradients on `b` and `c` are what give the most information. So if it works for `b` and `c`, I am fine.

Note that `f` will not be learnt then!
Thanks, but `f` in my case has no parameters; it's `abs(x)`.