Tensor.le and gradient flow

Dumb question: is there any hack around the gradient not flowing in the following computation?

import torch

value = 1.0  # or a 0D tensor with requires_grad=True
tensor = torch.linspace(0, 10, steps=100, requires_grad=True)  # or any 1D tensor with requires_grad=True
rank_value_in_tensor = tensor.le(value).float().mean()  # no gradient: .le() breaks the gradient flow
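For completeness, this is where the graph gets cut (same setup as above; the steps count is an arbitrary choice):

import torch

value = 1.0
tensor = torch.linspace(0, 10, steps=100, requires_grad=True)

mask = tensor.le(value)              # bool tensor, not attached to the autograd graph
print(mask.requires_grad)            # False: comparison ops are not differentiable
print(mask.float().mean().grad_fn)   # None, so there is nothing to backpropagate into tensor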

Unsuccessful things I tried:

  1. rank_value_in_tensor = torch.relu(tensor - value).clamp(0, 1e-6).mean() / 1e-6, but it's definitely not a robust solution (plus the gradient is likely to behave strangely). A smoother sigmoid variant of this idea is sketched below.
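A smoother variant of that same idea would be to replace the hard step with a sigmoid, so every element keeps at least a small gradient. This is just a rough sketch, not a tested solution; tau and the steps count below are arbitrary choices:

import torch

value = 1.0
tensor = torch.linspace(0, 10, steps=100, requires_grad=True)

# Soft indicator: sigmoid((value - x) / tau) is near 1 where x < value and near 0 where x > value,
# so its mean is a differentiable stand-in for the .le().float().mean() rank.
tau = 0.1
soft_rank = torch.sigmoid((value - tensor) / tau).mean()

soft_rank.backward()
print(tensor.grad)  # gradient mass concentrates around tensor == value

As tau shrinks, the mean approaches the hard count, but the region with a usable gradient also narrows.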

@marchinidavide
This is the closest that I got to:

import torch
import torch.nn.functional as F

value = 1.0  # or a 0D tensor with requires_grad=True
tensor = torch.linspace(0, 10, steps=100, requires_grad=True)  # or any 1D tensor with requires_grad=True
rank_value_in_tensor = F.relu(tensor - value)
rank_value_in_tensor[rank_value_in_tensor > 0] = 1  # in-place: every element above the threshold becomes exactly 1
rank_value_in_tensor = rank_value_in_tensor.mean()
print(rank_value_in_tensor.requires_grad)  # True