Restrict gradient values in PyTorch

Can I force PyTorch to keep gradient parameter values within a restricted bound?
For example:
I have this torch parameter:

import torch
import torch.nn as nn

class Model(nn.Module):

    def __init__(self, args):
        super(Model, self).__init__()
        self.m = nn.Parameter(torch.tensor(0.02), requires_grad=True)

I want m's value to stay between 0.1 and 0.9.

Hi Mohammed!

You ask about restricting gradients.

But then you ask about restricting the value of a Parameter itself. Let me assume
that you want to restrict the Parameter itself, rather than its gradient.

(Note that in your example, you are asking that m lie between 0.1 and 0.9, but
you are initializing it to 0.02 – outside of that range.)

My recommendation is to let your trainable Parameters be unrestricted, that is,
range from -inf to inf, and then map them to a derived variable that satisfies
your desired restriction. (This is because gradient-descent optimization doesn’t
work for optimizing constrained variables without additional complication.)

So in your case, in your __init__() method, I would do something like:

self.m_unconstrained = nn.Parameter(torch.tensor(-1.946), requires_grad=True)

and then in your forward() method:

m_constrained = 0.1 + 0.8 * self.m_unconstrained.sigmoid()
# use m_constrained in the rest of forward()

(This makes use of the fact that .sigmoid() maps (-inf, inf) to (0.0, 1.0).)
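To make the idea concrete, here is a minimal, self-contained sketch that puts the two snippets above together. The forward() body, the dummy loss, and the training loop are just illustrative assumptions, not part of your model:

import torch
import torch.nn as nn

class Model(nn.Module):

    def __init__(self):
        super(Model, self).__init__()
        # unconstrained trainable parameter, free to range over (-inf, inf)
        self.m_unconstrained = nn.Parameter(torch.tensor(-1.946), requires_grad=True)

    def forward(self, x):
        # map the unconstrained parameter into the open interval (0.1, 0.9)
        m_constrained = 0.1 + 0.8 * self.m_unconstrained.sigmoid()
        # use m_constrained in the rest of forward(); here it just scales x
        return m_constrained * x

model = Model()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = model(torch.randn(4)).pow(2).mean()   # dummy loss, only to drive a few updates
    loss.backward()
    opt.step()

# the derived value stays strictly inside (0.1, 0.9), however far the optimizer moves m_unconstrained
print(0.1 + 0.8 * model.m_unconstrained.sigmoid())

No matter how far gradient descent pushes m_unconstrained, the sigmoid keeps the derived m_constrained strictly inside (0.1, 0.9), and the mapping stays differentiable everywhere.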

Best.

K. Frank


Thank you @KFrank. Yes, that's what I meant.