Hi,
I'm working on dynamically changing parameter values. For example, I have a linear layer with parameters weight and bias, and I need to change their values at runtime.
The following works for me:
- Create a new parameter in the model's __init__:

name = "weight1"
new_param = nn.Parameter(data=torch.randn(self.weight.data.shape), requires_grad=True)
self.register_parameter(name, new_param)

- I found that self.weight1 is then created by PyTorch and is visible in forward(), so I can directly replace self.weight with self.weight1 like this:

self.weight = self.weight1
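
For context, here is a minimal sketch of that hard-coded version (MyLinear and its constructor arguments are just illustrative names I'm using here, not my real model):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MyLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # register an extra parameter "weight1" with the same shape as "weight"
        name = "weight1"
        new_param = nn.Parameter(data=torch.randn(self.weight.data.shape), requires_grad=True)
        self.register_parameter(name, new_param)

    def forward(self, x):
        # hard-coded swap: point self.weight at the newly registered parameter
        self.weight = self.weight1
        return F.linear(x, self.weight, self.bias)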
But since this model has many parameters, it's tedious to do this in a hard-coded way, so I tried to do it dynamically:
for n, p in self.param_dict.items():
    # n = "weight"
    name = n + str(1)  # name = "weight1"
    # get a reference to the original "self.weight"
    pn = self.get_parameter(n)  # fc.weight
    # get a reference to the new parameter "fc.weight1"
    pnp = self.get_parameter(name)
    # try to set fc.weight to fc.weight1
    pn = pnp
I expected pn = pnp to work the same way as self.weight = self.weight1, but it doesn't. It seems that get_parameter() returns a reference to the parameter object itself rather than to the attribute self.weight, so the assignment only rebinds the local name pn, and in forward() the value of self.fc.weight doesn't change at all.
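
To illustrate what I'm seeing (a minimal repro sketch with a standalone nn.Linear named fc; the shapes are arbitrary):

import torch
import torch.nn as nn

fc = nn.Linear(4, 2)
fc.register_parameter("weight1", nn.Parameter(torch.randn_like(fc.weight)))

pn = fc.get_parameter("weight")    # reference to the Parameter object behind fc.weight
pnp = fc.get_parameter("weight1")  # reference to the Parameter object behind fc.weight1

pn = pnp  # only rebinds the local name pn; the module attribute fc.weight is untouched
print(fc.weight is fc.weight1)     # prints False -- fc still holds the original weight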
Is there a way to get a handle on the attribute fc.weight itself, rather than the parameter object it currently points to? Or how can I do the assignment dynamically from the names "weight" and "weight1", instead of assigning between hard-coded attribute names like fc.weight = fc.weight1?
Many thanks!