I am trying to implement a trainable parameter whose L2 norm is always 1.

Basically, I want to optimize the following variable `p` directly:

```
import torch
from torch import optim

# torch.autograd.Variable is deprecated; a plain tensor with
# requires_grad=True is the modern equivalent.
p = torch.zeros(15, requires_grad=True)
optimizer = optim.SGD([p], lr=0.1)
```

Every training step, `p` will be updated, but to prevent the values of `p` from exploding, I want to normalize `p` after every step so that its L2 norm is 1 (while keeping it trainable through the optimizer). Is this possible?
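A sketch of what I have in mind: re-project `p` onto the unit sphere after each `optimizer.step()`, inside `torch.no_grad()` so the projection itself is not tracked by autograd. The loss below is just a placeholder for illustration, and I start from a random vector because an all-zeros vector has norm 0 and cannot be normalized.

```python
import torch
from torch import optim

# Random init instead of zeros: a zero vector has no unit-norm projection.
p = torch.randn(15, requires_grad=True)
with torch.no_grad():
    p /= p.norm()  # start on the unit sphere

optimizer = optim.SGD([p], lr=0.1)

for _ in range(100):
    optimizer.zero_grad()
    # Placeholder loss purely for illustration.
    loss = (p * torch.arange(15.0)).sum()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        p /= p.norm()  # re-project onto the unit sphere after the update

print(p.norm())  # stays at 1 up to floating-point error
```

This is essentially projected gradient descent on the unit sphere; an alternative would be to keep an unconstrained tensor and use its normalized version in the forward pass, so gradients flow through the normalization itself.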

Thank you all!