Differentiable threshold for Conv2d output

Hi, I am trying to implement a learnable threshold for the output of the Conv2d layers in a CNN.
I want to set the output of a conv layer to 0 for values between -threshold and +threshold. I also want to differentiate the loss function with respect to the threshold and update the threshold at each iteration.
I have come across .clamp and the nn.Threshold module in PyTorch, but I am unsure how to use them in a class definition (e.g. AlexNet), and whether autograd will support this. If not, what is the way around?
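For concreteness, here is an illustrative sketch of the hard thresholding I have in mind, and why a naive version gives autograd nothing to work with (the tensor names and values are just for demonstration):

```python
import torch

# Hard threshold: zero out values in (-threshold, +threshold), keep the rest.
threshold = torch.tensor(0.5, requires_grad=True)
x = torch.randn(4, 4, requires_grad=True)

# The comparison produces a boolean mask and is not differentiable,
# so the threshold never enters the autograd graph.
mask = (x.abs() > threshold).float()
out = x * mask

out.sum().backward()
print(threshold.grad)  # None -- no gradient reaches the threshold
```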

Thank you!!

Hi Ajinkya!

You want a differentiable, “soft” threshold function. I think that
nn.Tanhshrink gives you most of what you want. You can put in
a threshold parameter like this:

threshold * nn.functional.tanhshrink(x / threshold)

I’ve never done this, but I believe that if you make your
threshold value a nn.Parameter and include it in your model,
it should work.
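A minimal sketch of what that could look like as a module, assuming you wrap the threshold in an nn.Parameter so the optimizer updates it (the class name, initial value, and surrounding conv layer are illustrative, not from any particular model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftThreshold(nn.Module):
    """Differentiable soft threshold: threshold * tanhshrink(x / threshold).

    Because the threshold is an nn.Parameter, autograd computes the
    gradient of the loss with respect to it, and the optimizer updates
    it along with the other model weights.
    """
    def __init__(self, init_threshold=1.0):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(float(init_threshold)))

    def forward(self, x):
        return self.threshold * F.tanhshrink(x / self.threshold)

# Usage after a conv layer:
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
soft = SoftThreshold(init_threshold=0.5)

x = torch.randn(1, 3, 8, 8)
out = soft(conv(x))
out.sum().backward()
print(soft.threshold.grad)  # a real gradient flows to the threshold
```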

The method in this post:

looks plausible to me (but I can’t vouch for it).


K. Frank

Thank you, Frank.
I will look into the solution you are proposing.