Dumb question: is there any hack to get gradients to flow through the following computation?
import torch

value = 1.0  # or a 0D tensor with requires_grad=True
tensor = torch.linspace(0, 10, steps=100, requires_grad=True)  # or any 1D tensor with requires_grad=True
rank_value_in_tensor = tensor.le(value).float().mean()  # no gradient: .le() returns a bool tensor and breaks gradient flow
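For concreteness, a quick check (continuing from the snippet above) confirms the result is detached from the autograd graph:

print(rank_value_in_tensor.grad_fn)        # None: the bool comparison happens outside the graph
print(rank_value_in_tensor.requires_grad)  # False, so calling .backward() here raises a RuntimeError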
Unsuccessful things I tried:
- rank_value_in_tensor = torch.relu(value - tensor).clamp(0, 1e-6).mean() / 1e-6
  but it’s definitely not a robust solution: the gradient is zero for every entry outside the tiny (value - 1e-6, value) band, so in practice it is almost always zero.
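For reference, a sketch of one possible direction, assuming an approximate rank is acceptable: replace the hard threshold with a sigmoid whose temperature controls how closely it matches the hard comparison. (soft_rank and temperature below are just names for this sketch, not an established API.)

import torch

def soft_rank(tensor, value, temperature=0.1):
    # Smooth surrogate for tensor.le(value).float().mean():
    # sigmoid((value - tensor) / temperature) is ~1 where tensor << value and ~0 where tensor >> value.
    return torch.sigmoid((value - tensor) / temperature).mean()

value = 1.0
tensor = torch.linspace(0, 10, steps=100, requires_grad=True)
rank = soft_rank(tensor, value)
rank.backward()
print(rank.item())   # approximates the hard rank; smaller temperature -> closer match
print(tensor.grad)   # gradient flows, concentrated on entries near value

Smaller temperatures track .le() more closely but concentrate the gradient in a narrower band around value, so there is a trade-off.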