How to assign one parameter to another using their string names?

Hi,
I’m working on dynamically changing parameter values. For example, I have a linear layer with the parameters weight and bias, and I need to change their values at runtime.
The following works for me:

  1. Create a new parameter in the model’s __init__:
    name = "weight1"
    new_param = nn.Parameter(data=torch.randn(self.weight.data.shape), requires_grad=True)
    self.register_parameter(name, new_param)
  2. I found that self.weight1 is then created by PyTorch and is visible in forward(), so I can directly replace self.weight with self.weight1 as below (a full sketch follows this list):
    self.weight = self.weight1
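
Putting both steps together, here is a minimal sketch of what I mean (the module name, shapes and the forward body are just illustrative):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MyLinear(nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(out_features, in_features))
            self.bias = nn.Parameter(torch.zeros(out_features))
            # step 1: register a second parameter with the same shape as weight
            new_param = nn.Parameter(data=torch.randn(self.weight.data.shape), requires_grad=True)
            self.register_parameter("weight1", new_param)

        def forward(self, x):
            # step 2: hard-coded replacement; nn.Module.__setattr__ re-registers
            # "weight" so that it refers to the same Parameter object as "weight1"
            self.weight = self.weight1
            return F.linear(x, self.weight, self.bias)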

But since this model has many parameters, doing this by hard-coding is tedious, so I tried to do it dynamically:
for n, p in self.param_dict.items():
    # n = "weight"
    name = n + str(1)  # name = "weight1"
    # get a reference to the original "self.weight"
    pn = self.get_parameter(n)  # fc.weight
    # get a reference to the new parameter "fc.weight1"
    pnp = self.get_parameter(name)

    # try to set fc.weight to fc.weight1
    pn = pnp

I expected pn = pnp to work the same way as self.weight = self.weight1, but it didn’t. It seems that pn is just a local reference to the Parameter object itself, not to the attribute self.weight, so the assignment only rebinds the local name and in forward() the value of self.fc.weight doesn’t change at all.
Is there a way to get a handle to the attribute fc.weight itself, rather than the Parameter object it points to? Or how can I do the assignment dynamically using the names "weight" and "weight1", instead of hard-coding the attribute names, e.g. fc.weight = fc.weight1?
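To make the difference concrete, this is how I understand what is happening (just a sketch, reusing the names from above):

    # pn is only a local name bound to the Parameter object returned by get_parameter()
    pn = self.get_parameter("weight")
    # this rebinds the local name pn only; the module attribute self.weight is untouched
    pn = self.get_parameter("weight1")

    # the hard-coded version instead goes through nn.Module.__setattr__, i.e. it behaves
    # like setattr(self, "weight", self.weight1) and re-registers the attribute
    self.weight = self.weight1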

Many thanks!

I can change the weight with pn.data = pnp.data, but that doesn’t work for me because autograd won’t track the change.

I don’t think any assignment would be a good idea, since you would also need to update e.g. the parameters already passed to an optimizer. Assuming you want to change the actual values of the trainable parameter, copy the new data into the already registered parameter in place, inside a no_grad() context:

with torch.no_grad():
    self.weight1.copy_(new_data)
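
If the goal from your question is to copy the values of each new parameter (e.g. fc.weight1) into the originally registered one (fc.weight) by name, the same pattern could e.g. look like this (your param_dict and the "1" suffix are taken from your snippet):

    with torch.no_grad():
        for n in self.param_dict:              # e.g. n = "weight"
            src = self.get_parameter(n + "1")  # the new parameter, e.g. weight1
            self.get_parameter(n).copy_(src)   # copy its values into weight in place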

Many thanks! I’ll give this a try 🙂