Thresholding operation in PyTorch with gradients

While implementing a custom loss function, I need to threshold a tensor, and the gradients must still flow through this operation during the .backward() pass with autograd.

I have a tensor of shape (N, 7). For each row I need to count how many values are greater than a threshold (th), so the final result is a tensor of shape (N, 1).

Toy example:

In = [[1, 2, 3, 4, 5, 6, 7],
      ...]
th = 5

Out = [[2], [2], [3]]

Currently the problem is that when I threshold directly, the gradients vanish.
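A minimal sketch of what I believe is happening (my own reconstruction, not the original code): comparison ops like `x > th` produce a tensor with no grad_fn, so the count is detached from the graph and nothing flows back to `x`.

```python
import torch

x = torch.tensor([[1., 2., 3., 4., 5., 6., 7.]], requires_grad=True)
th = 5.0

# Hard count: how many entries per row exceed th.
# The comparison returns a bool tensor without a grad_fn,
# so the result is cut off from the autograd graph.
out = (x > th).float().sum(dim=1, keepdim=True)

print(out)                # tensor([[2.]])
print(out.requires_grad)  # False -- no gradient can reach x
```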

So thresholding (setting small values to 0) is different from counting the number of values that exceed the threshold. Which one do you need?

Also, counting values has a discrete result (1, 2, 3, …). An infinitesimal change to a value (usually) doesn’t change how many are above the threshold, so the function isn’t differentiable: its gradient is zero almost everywhere.
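If you do need gradients anyway, one common workaround (a sketch on my part, not something from your post) is to replace the hard step with a steep sigmoid, giving a differentiable "soft count". The `temperature` parameter here is a hypothetical knob: smaller values make the approximation sharper but the gradients spikier.

```python
import torch

def soft_count(x, th, temperature=0.1):
    # sigmoid((x - th) / temperature) is ~1 for x well above th,
    # ~0 for x well below it, and 0.5 exactly at the threshold,
    # so summing it approximates the count of values above th.
    return torch.sigmoid((x - th) / temperature).sum(dim=1, keepdim=True)

x = torch.tensor([[1., 2., 3., 4., 5., 6., 7.]], requires_grad=True)
out = soft_count(x, th=5.0)

out.sum().backward()  # gradients now flow back to x
print(out)            # ~2.5: the entry equal to th contributes 0.5
```

Note the boundary effect: the value exactly at th contributes 0.5 rather than 0, so the soft count for this row is about 2.5 instead of the hard count 2.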

Best regards