class weightConstraint(object):
    def __init__(self):
        pass

    def __call__(self, module):
        if hasattr(module, 'weight'):
            print("Entered")
            w = module.weight.data
            w = w.clamp(0.1001, 1.0)
            module.weight.data = w

model = pm.ModelCustomLayerTimeWindow(input_size, 1024, 1)
model.window.register_forward_hook(get_activation('window'))

constraints = weightConstraint()
model._modules['window'].apply(constraints)
I want to constrain the weights of my window layer to the interval (0.1001, 1.0), but during training I see that they can take values below 0.1001 (for example, 0.0287). The problem is that when a weight of this layer takes a value like 0.0287, it gets stuck and doesn't update anymore, which is why I am trying to impose the constraint.
Is there a way to constrain the weights to this interval more strictly?
The short story is: consider using pytorch's parametrize functionality to bound
your parameter.
I have over time come to lean against the idea of constraining parameters by
modifying them by hand. The problem is that doing things that way doesn’t
really cooperate with backpropagation and optimization. opt.step() can modify
a parameter so that it no longer satisfies some constraint and then you reimpose
that constraint by hand (and then the next opt.step() can move it outside of
the constraint again).
I would suggest that you work with an unconstrained parameter that runs from -inf to inf and pass it through a parametrization that maps it to (0.1001, 1.0).
(You can use sigmoid() to map (-inf, inf) to (0.0, 1.0) and then use simple
arithmetic to map that to (0.1001, 1.0).)
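Something along these lines should work (a minimal sketch; BoundedWeight is just an illustrative name, and I'm using a plain Linear layer to stand in for your window layer):

import torch
import torch.nn as nn
from torch.nn.utils import parametrize

class BoundedWeight(nn.Module):
    # maps an unconstrained tensor to the open interval (lo, hi)
    def __init__(self, lo=0.1001, hi=1.0):
        super().__init__()
        self.lo = lo
        self.hi = hi

    def forward(self, x):
        # sigmoid() maps (-inf, inf) to (0.0, 1.0); rescale to (lo, hi)
        return self.lo + (self.hi - self.lo) * torch.sigmoid(x)

layer = nn.Linear(4, 3)   # stand-in for your window layer
parametrize.register_parametrization(layer, 'weight', BoundedWeight())

print(layer.weight.min().item(), layer.weight.max().item())   # always inside (0.1001, 1.0)

After registration, the optimizer sees and updates the unconstrained tensor (layer.parametrizations.weight.original), and layer.weight is recomputed from it, so opt.step() can never push it outside of (0.1001, 1.0).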
(As an aside, .data has been deprecated for some time and can cause errors.
I don’t know whether it is likely to cause problems in your specific code, but, in
general, you shouldn’t use it.)
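If you do still want to modify a parameter in place (as in your clamping hook), the usual replacement for .data is to do the modification under torch.no_grad(). Roughly (a sketch, keeping your clamp values):

import torch
import torch.nn as nn

def clamp_weights(module):
    # same idea as your weightConstraint, but without touching .data
    if hasattr(module, 'weight'):
        with torch.no_grad():
            module.weight.clamp_(0.1001, 1.0)

layer = nn.Linear(4, 3)   # stand-in for your window layer
layer.apply(clamp_weights)

Note that this still has the clamp-versus-opt.step() tug of war described above; the parametrization approach avoids it.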