I want to define an n-dimensional parameter that requires grad, make sure that its value lies within the range [-a, b] at all times, and have it initialised from a standard normal distribution. Please help!
I am not sure exactly what you are looking for here (an example would definitely help). One way to achieve this is to initialise a parameter randomly and always apply a tanh activation to it before consuming it. This guarantees the range (-1, 1) while still allowing the underlying parameter to be updated by gradients.
Here is a simple example to demonstrate this:
# initialise randomly from a standard normal distribution
>>> import torch
>>> myparameter = torch.randn(100, 100)
>>> print(torch.unique(myparameter))
tensor([-3.7089, -3.5629, -3.4922, ..., 3.2828, 3.4450, 3.5880])
# normalise using tanh activation
>>> myparameter_norm = torch.tanh(myparameter)
>>> print(torch.unique(myparameter_norm))
tensor([-0.9988, -0.9984, -0.9981, ..., 0.9972, 0.9980, 0.9985])
# proceed to use myparameter_norm as input to next compute
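Since tanh only gives you (-1, 1) and you asked for [-a, b], you can affine-rescale the tanh output to the target interval. A minimal sketch (the bound values `a` and `b` below are hypothetical, chosen just for illustration):

```python
import torch

a, b = 0.5, 2.0  # assumed bounds; substitute your own

# raw parameter, initialised from a standard normal distribution
raw = torch.nn.Parameter(torch.randn(100, 100))

# tanh maps into (-1, 1); rescale (-1, 1) -> (-a, b)
constrained = (torch.tanh(raw) + 1) / 2 * (b + a) - a

# use `constrained` downstream; gradients flow back into `raw`
print(constrained.min().item(), constrained.max().item())
```

Because the rescaling is differentiable, the optimiser updates `raw` freely while `constrained` always stays inside the interval.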
I want to use clamp with a parameter but don’t know how to do that. Please help!
From the torch.clamp — PyTorch 1.10.0 documentation:
>>> a = torch.randn(4)
>>> a
tensor([-1.7120, 0.1734, -0.0478, -0.0922])
>>> torch.clamp(a, min=-0.5, max=0.5)
tensor([-0.5000, 0.1734, -0.0478, -0.0922])
>>> min = torch.linspace(-1, 1, steps=4)
>>> torch.clamp(a, min=min)
tensor([-1.0000, 0.1734, 0.3333, 1.0000])
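One caveat when combining clamp with a trainable parameter: `torch.clamp` has zero gradient wherever the input is outside the bounds, so a saturated parameter can stop learning. A common alternative is to re-project the parameter in-place after each optimiser step, outside of autograd. A minimal sketch (the bounds `a`, `b`, the learning rate, and the toy loss are all assumptions for illustration):

```python
import torch

a, b = 0.5, 1.0  # assumed bounds; substitute your own
param = torch.nn.Parameter(torch.randn(10))
opt = torch.optim.SGD([param], lr=0.1)

# one toy training step
loss = (param ** 2).sum()
loss.backward()
opt.step()

# re-project into [-a, b] without recording the clamp in autograd
with torch.no_grad():
    param.clamp_(-a, b)

print(param.min().item(), param.max().item())
```

This keeps the gradient computation untouched while guaranteeing the stored value lies in [-a, b] after every update.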