I am trying to figure out whether there is any “replace by value” operation in PyTorch that is differentiable.
a = torch.tensor([1.5, 2, 1.5, 3])
And I want to replace 1.5 with 0.
Is there a way this can be done with autograd support?
Probably something like
eps = 1e-7
z = torch.zeros((), device=a.device, dtype=a.dtype)
a_new = torch.where((a - 1.5).abs() < eps, z, a)
is what you want. Note that z goes in the branch taken where the condition is true, so the entries matching 1.5 are replaced with 0.
a == 1.5 might not be a good condition to test for due to the accuracy limits of floating-point computation.
You can assign the result back to a if you want.
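For completeness, a minimal runnable version of the suggestion above (setting requires_grad=True on a is my addition, so there is something to backprop into):

```python
import torch

a = torch.tensor([1.5, 2.0, 1.5, 3.0], requires_grad=True)

eps = 1e-7
z = torch.zeros((), device=a.device, dtype=a.dtype)

# z is picked where the condition is true, so the 1.5 entries become 0;
# torch.where is differentiable w.r.t. both value arguments.
a_new = torch.where((a - 1.5).abs() < eps, z, a)

a_new.sum().backward()
print(a_new.detach())  # tensor([0., 2., 0., 3.])
print(a.grad)          # tensor([0., 1., 0., 1.]) -- replaced entries get zero grad
```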
Hi @tom, I tried implementing what you mentioned, but when I do the backward pass I get a RuntimeError:
element 0 of tensors does not require grad and does not have a grad_fn
Oh sorry, I had to declare z as an nn.Parameter with requires_grad=True. That solved the problem.
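For reference, a sketch of that fix: making z a learnable nn.Parameter gives the graph a leaf that requires grad, and its gradient accumulates one unit per replaced entry:

```python
import torch

a = torch.tensor([1.5, 2.0, 1.5, 3.0])
z = torch.nn.Parameter(torch.zeros(()))  # learnable replacement value

out = torch.where((a - 1.5).abs() < 1e-7, z, a)
out.sum().backward()
print(z.grad)  # tensor(2.) -- two entries were replaced by z
```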
However, threshold.grad returns 0, so I am unclear whether the gradient is actually being computed for threshold even though its requires_grad parameter is True. If not, am I interpreting it wrong?
Is there a way we can have a learnable threshold?
Well, if you write down the gradient formula for the threshold here, you will get 0 almost everywhere, so it is expected to return 0.
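To see why: the output is piecewise constant as a function of the threshold, so the slope is 0 wherever it is defined. A small sketch (eps widened to 1e-4 here so the perturbation is representable in float32; that value is my choice, not from the thread):

```python
import torch

a = torch.tensor([1.5, 2.0, 1.5, 3.0])

def replace(a, thr, eps=1e-4):
    # hard replacement: entries within eps of thr become 0
    return torch.where((a - thr).abs() < eps, torch.zeros_like(a), a)

# Nudging the threshold does not change the output at all,
# so the derivative w.r.t. the threshold is 0 almost everywhere.
y0 = replace(a, 1.5).sum()
y1 = replace(a, 1.5 + 1e-5).sum()
print((y1 - y0).item())  # 0.0 -- zero slope
```

If you really want a learnable threshold, the usual workaround (not discussed in this thread) is to replace the hard comparison with a soft, differentiable one, e.g. a sigmoid of (a - threshold), so that nonzero gradients can flow to the threshold.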