Hi, I am trying to implement learnable thresholding for the output of Conv2d layers in a CNN.
I want to set the conv layer's output to 0 for values between -threshold and +threshold. I also want to differentiate the loss function with respect to the threshold and update the threshold on each iteration.
I have come across .clamp and the nn.Threshold module in PyTorch, but I am unsure how to use them in a class definition (e.g., AlexNet) and whether autograd will support this. If not, what is the way around it?
You want a differentiable, "soft" threshold function. I think that nn.Tanhshrink gives you most of what you want. You can put in a threshold parameter like this:
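Here is a minimal sketch of the idea: wrap tanhshrink in a module whose width is an `nn.Parameter`, so autograd computes the gradient of the loss with respect to the threshold and the optimizer updates it along with the conv weights. The class and parameter names (`LearnableTanhshrink`, `threshold`) are my own, not a PyTorch API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableTanhshrink(nn.Module):
    """Soft threshold with a learnable width t, computed as t * tanhshrink(x / t).

    For |x| much smaller than t the output is close to 0; for large |x| the
    output approaches x - t * sign(x). Everything is differentiable in both
    x and t, so autograd handles the threshold update for free.
    """

    def __init__(self, init_threshold=0.5):
        super().__init__()
        # threshold is a learnable scalar; init value is an arbitrary choice
        self.threshold = nn.Parameter(torch.tensor(float(init_threshold)))

    def forward(self, x):
        # keep the effective width positive so x / t stays well-defined
        t = self.threshold.abs() + 1e-8
        return t * F.tanhshrink(x / t)

# drop it in after a conv layer, e.g. inside an AlexNet-style class:
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
shrink = LearnableTanhshrink(init_threshold=0.5)

x = torch.randn(1, 3, 16, 16)
out = shrink(conv(x))
out.sum().backward()
# after backward(), shrink.threshold.grad is populated, so any optimizer
# that was given shrink.parameters() will update the threshold each step
```

Because tanhshrink is smooth rather than a hard cutoff, values inside (-t, t) are pushed toward 0 but not exactly zeroed; if you need an exact hard zero at inference time, you could apply a hard threshold in eval mode while training with this soft version.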