Regarding clamped learnable parameter

Is this a correct way to clamp a learnable parameter to the range 0-1?

z = nn.Parameter(torch.clamp(torch.rand(1), 0, 1))

If I want to set a threshold as a learnable parameter and clamp it to a range, is this the correct way to do it?

Hey @vainaijr,

Would gradient clipping with clip_grad work for you? You can call it on the model parameters before the optimizer step.
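For reference, gradient clipping is usually applied right before the optimizer step; a minimal sketch (the tiny model, data, and max_norm value here are just placeholders) looks like this:

import torch
import torch.nn as nn

# minimal sketch of gradient clipping before the optimizer step
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.rand(8, 4)).sum()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clip gradients, then step
optimizer.step()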

No, I do not think clip_grad will work. What I want to do is zero out the elements of a matrix that fall below a threshold, but this threshold is not fixed; it is learnable. I do not set it; the model needs to learn the threshold and zero out the matrix elements that fall below it.
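
For example, one common way to make such a threshold trainable (a minimal sketch, not the code from this thread; the SoftThreshold name and the fixed sharpness constant are assumptions) is to relax the hard cutoff into a sigmoid mask, so the threshold actually receives a gradient:

import torch
import torch.nn as nn

class SoftThreshold(nn.Module):
    # hypothetical sketch: (approximately) zero out entries below a learnable threshold
    def __init__(self, sharpness: float = 10.0):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(0.5))
        self.sharpness = sharpness  # larger -> closer to a hard cutoff

    def forward(self, x):
        # a hard mask (x >= threshold) would give the threshold zero gradient,
        # so relax it into a sigmoid; entries well below the threshold go to ~0
        mask = torch.sigmoid(self.sharpness * (x - self.threshold))
        return x * mask

m = SoftThreshold()
out = m(torch.rand(3, 3))
out.sum().backward()
print(m.threshold.grad)  # the threshold now receives a non-zero gradient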

The following code works for me:

import torch

class Clamp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # hard-clamp the value to [0, 1] in the forward pass
        return input.clamp(min=0, max=1)

    @staticmethod
    def backward(ctx, grad_output):
        # pass the gradient through unchanged (straight-through estimator),
        # so the parameter keeps receiving gradients even while it is clamped
        return grad_output.clone()

and in nn.Module:

self.z = nn.Parameter(torch.tensor(1.0))  # requires_grad=True is the default
z = Clamp.apply(self.z)                   # call apply on the Function class and use the clamped value z
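
Put together, here is a minimal runnable sketch of how the clamped parameter might be used inside a forward pass (the ClampedScale module and the way z is consumed are illustrative assumptions, not part of the original post):

import torch
import torch.nn as nn

class ClampedScale(nn.Module):
    # hypothetical example: scale the input by a learnable factor z
    # that is kept inside [0, 1] via the Clamp function defined above
    def __init__(self):
        super().__init__()
        self.z = nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        z = Clamp.apply(self.z)  # clamped view of the parameter
        return x * z

m = ClampedScale()
out = m(torch.rand(4))
out.sum().backward()
print(m.z.grad)  # non-zero: the pass-through backward lets gradients reach z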