This code should work:
self.pred = torch.nn.Linear(2, 10, bias=False)
# Normalize each row of the weight matrix (one row per output unit) to unit L2 norm,
# in-place and without tracking the operation in autograd.
with torch.no_grad():
    self.pred.weight.div_(torch.norm(self.pred.weight, dim=1, keepdim=True))
...
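If it helps, here is a minimal standalone check (outside of any model, but with the same 2 -> 10 layer shape as above) showing that the in-place normalization leaves unit-norm weight rows:

import torch

pred = torch.nn.Linear(2, 10, bias=False)
with torch.no_grad():
    pred.weight.div_(torch.norm(pred.weight, dim=1, keepdim=True))

# Each of the 10 weight rows should now have an L2 norm of (approximately) 1.
print(torch.norm(pred.weight, dim=1))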
- You have to flatten the activation somehow, so .view would be the easiest way. Alternatively, you could write a Flatten module, initialize it in your model’s __init__, and call it in your forward (see the sketch after this list). I’m not sure if I understood your question correctly, so let me know if I missed something.
- You can access the weights the same way you already did when normalizing them: print(self.pred.weight).
- You could normalize the activation in the forward method (similar to your weight normalization code).
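To make these points concrete, here is a rough sketch of how they could fit together in one module. The conv layer, the input shape, and the MyModel name are placeholder assumptions for illustration, not taken from your code:

import torch
import torch.nn as nn

class Flatten(nn.Module):
    # Small helper module that flattens everything except the batch dimension.
    def forward(self, x):
        return x.view(x.size(0), -1)

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Conv2d(3, 2, kernel_size=3)  # placeholder feature extractor
        self.flatten = Flatten()
        self.pred = nn.Linear(2, 10, bias=False)
        # Same weight normalization as in the snippet above.
        with torch.no_grad():
            self.pred.weight.div_(torch.norm(self.pred.weight, dim=1, keepdim=True))

    def forward(self, x):
        x = self.features(x)
        x = self.flatten(x)  # or directly: x = x.view(x.size(0), -1)
        # Normalize the activation per sample, analogous to the weight normalization.
        x = x / torch.norm(x, dim=1, keepdim=True)
        return self.pred(x)

model = MyModel()
print(model.pred.weight)                # inspect the normalized weights
out = model(torch.randn(8, 3, 3, 3))    # placeholder input: batch of 8, 3x3 "images"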