Differentiable way of "Replace By Value" in PyTorch

Hello everyone,

I am trying to figure out whether there is any “replace by value” operation in PyTorch that is differentiable.
For example:

a = torch.tensor([1.5, 2, 1.5, 3]) 

And I want to replace 1.5 with 0.

Is there a way this can be done, with autograd supporting it?

Thanks!!

Probably something like

eps = 1e-7
z = torch.zeros((), device=a.device, dtype=a.dtype)
# entries (approximately) equal to 1.5 become 0, everything else is kept
a_new = torch.where((a - 1.5).abs() < eps, z, a)

is what you want.
Note that a == 1.5 might not be a good condition to test for due to the accuracy limits of floating-point computation.
You can assign the result back to a again if you want.
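
Here is a self-contained sketch (not part of the original reply, just an illustration with the example tensor from your question) showing that autograd does flow through torch.where:

import torch

a = torch.tensor([1.5, 2.0, 1.5, 3.0], requires_grad=True)
eps = 1e-7
z = torch.zeros((), device=a.device, dtype=a.dtype)
a_new = torch.where((a - 1.5).abs() < eps, z, a)
print(a_new)            # tensor([0., 2., 0., 3.], grad_fn=...)
a_new.sum().backward()
print(a.grad)           # tensor([0., 1., 0., 1.]) -- gradient is 0 for the replaced entries

So the gradient passes through the entries that are kept and is 0 for the entries that were replaced, which is the expected behaviour for this kind of masking.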

Best regards

Thomas

Hi @tom, I tried implementing what you mentioned.

import torch
import torch.nn as nn

a = torch.tensor([1.5, 2.0, 1.5, 3.0])
threshold = nn.Parameter(torch.tensor(1.5))  # requires_grad=True by default
eps = 1e-7
z = torch.zeros((), dtype=a.dtype)
a_new = torch.where((a - threshold).abs() < eps, z, a)

Now I try to do the backward pass

a_new.sum().backward()

But I am getting a RuntimeError:
element 0 of tensors does not require grad and does not have a grad_fn

Oh sorry, I had to declare z as an nn.Parameter as well, so that requires_grad is True for z.
That solved the problem.
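Roughly what I ended up with, reconstructed as a self-contained sketch (same variable names as above):

import torch
import torch.nn as nn

a = torch.tensor([1.5, 2.0, 1.5, 3.0])
threshold = nn.Parameter(torch.tensor(1.5))
z = nn.Parameter(torch.zeros(()))  # z now requires grad
eps = 1e-7
a_new = torch.where((a - threshold).abs() < eps, z, a)
a_new.sum().backward()             # no RuntimeError anymore; z.grad gets populated
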
But
threshold.grad
returns 0, so I am unclear whether the gradient is actually being computed for threshold, even though its requires_grad is True. Or am I interpreting it wrong?
Is there a way we can have a learnable threshold?

Well, if you write down the gradient formula for the threshold here, you will just get 0 almost everywhere: the threshold only enters through the comparison, and the output is piecewise constant as a function of the threshold. So it is expected to return 0 :confused:
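
To make that concrete, here is a small illustration (not from the original exchange, just a sketch with the example tensor from above): viewed as a function of the threshold, the output only jumps when the threshold crosses one of the values in a and is constant everywhere in between, so its derivative is 0 wherever it exists.

import torch

a = torch.tensor([1.5, 2.0, 1.5, 3.0])
eps = 1e-7

def replace_by_value(a, t):
    # replace entries within eps of the threshold t by 0, keep the rest
    z = torch.zeros((), dtype=a.dtype)
    return torch.where((a - t).abs() < eps, z, a)

# sweeping the threshold: the output is piecewise constant in t
for t in (1.4, 1.5, 1.7, 2.0, 2.5):
    print(t, replace_by_value(a, t))

A gradient step therefore will not move the threshold; the zero gradient is not a bug but a property of the hard comparison.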