Monotonic neural net in PyTorch

I want to build a monotonic neural net in PyTorch, specifically one in which the weights of all layers are constrained to be non-negative. I would like to know whether this is possible within the module class itself, rather than by adding constraints during optimization or at instantiation.

As an example, consider the following network:

import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self, in_features):
        super().__init__()  # required before registering submodules
        self.layer1 = nn.Linear(in_features, 10, bias=False)
        self.layer2 = nn.Linear(10, 20, bias=False)

    def forward(self, x):
        return self.layer2(self.layer1(x))

Now, at every forward pass, I would like to set every negative weight value (< 0) to 0. What is the appropriate way to do this, and how do I ensure that autograd still tracks the weight updates correctly in this scenario?
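For example, one candidate (just a sketch on my part; the ClampedNet name and the functional form are for illustration) would be to apply the clamp inside forward itself, so the clamp becomes part of the autograd graph:

import torch.nn as nn
import torch.nn.functional as F

class ClampedNet(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        self.layer1 = nn.Linear(in_features, 10, bias=False)
        self.layer2 = nn.Linear(10, 20, bias=False)

    def forward(self, x):
        # The stored weights stay unconstrained; only the values used in
        # the forward pass are clamped, so autograd differentiates
        # through the clamp.
        h = F.linear(x, self.layer1.weight.clamp(min=0.0))
        return F.linear(h, self.layer2.weight.clamp(min=0.0))

One catch with this: clamp has zero gradient wherever a weight is negative, so a weight that drifts below zero stops receiving gradient through this path and stays stuck at an effective value of 0.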

Found a way!

class NeuralNet(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        self.layer1 = nn.Linear(in_features, 10, bias=False)
        self.layer2 = nn.Linear(10, 20, bias=False)

    def forward(self, x):
        return self.layer2(self.layer1(x))

    def __call__(self, x):
        # Clamp every negative weight to zero before running the forward
        # pass. Mutating .data keeps the clamp itself out of the autograd
        # graph, so only the forward computation below is tracked.
        for module in self.modules():
            if hasattr(module, 'weight'):
                module.weight.data.clamp_(min=0.0)
        return super().__call__(x)
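One caveat with this approach: because the clamp mutates .data, it happens outside the autograd graph. The optimizer updates the unconstrained weights and the clamp then projects them back onto the non-negative orthant, a projected-gradient-style scheme, so autograd never "sees" the constraint itself. If you want the constraint to be differentiable end to end, PyTorch 1.9+ ships torch.nn.utils.parametrize. The sketch below is my own illustration (the Positive module and the choice of softplus are assumptions, not part of the original answer):

import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.parametrize as parametrize

class Positive(nn.Module):
    # The stored parameter is unconstrained; the exposed weight is
    # softplus(parameter), hence strictly positive, and gradients flow
    # back through softplus to the raw parameter.
    def forward(self, X):
        return F.softplus(X)

net = NeuralNet(5)  # the class defined above
for module in net.modules():
    if isinstance(module, nn.Linear):
        parametrize.register_parametrization(module, "weight", Positive())

After registration, module.weight is recomputed from the unconstrained parameter on every access, so no manual clamping is needed and the weights stay positive by construction. Note that the initial effective weight becomes softplus(old weight) unless the parametrization also defines a right_inverse for initialization.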