Adding a new parameter to an nn.Module: setattr() vs. register_parameter()

When I want to add a new parameter to an nn.Module, I see basically two approaches.

The first is to use the built-in register_parameter() method, after which the added Tensor shows up in the Module's state_dict.
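
A minimal sketch of this first approach (the layer shapes and the name testTensor are just for illustration):

>>> import torch
>>> conv = torch.nn.Conv2d(512, 512, kernel_size=1, padding=0)
>>> conv.register_parameter('testTensor', torch.nn.Parameter(torch.randn(512)))
>>> conv._parameters.keys()
odict_keys(['weight', 'bias', 'testTensor'])
>>> 'testTensor' in conv.state_dict()
True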

The second is to use the Python built-in setattr(); in that case the added Tensor does not show up in the Module's state_dict. What is surprising, though, is that this hidden Tensor's values still appear when I probe the Module's _parameters dict. I'll give my example below.

>>> import torch
>>> conv = torch.nn.Conv2d(512, 512, kernel_size=1, padding=0)
>>> testTensor = torch.randn(512)
>>> setattr(conv, 'testTensor', testTensor)
>>> conv._parameters.keys()
odict_keys(['weight', 'bias'])
>>> conv._parameters[testTensor]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: tensor([ 1.6598e-01, -2.5838e+00,  8.0253e-01, -7.5928e-01,  1.3394e-01,
        ......

So you can see that, despite the error, the last command prints out the correct testTensor values; the KeyError simply echoes back the key I passed in, which is the tensor itself rather than the string 'testTensor'. I also tried adding this testTensor to the computation graph (just by adding it to the bias), and backpropagation works just as well as usual (a minimal sketch follows). So the question is: what really happens when using setattr() to add another tensor to an nn.Module? And is it really equivalent to using register_parameter()?
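
A minimal sketch of that experiment (requires_grad is set by hand here, since a plain tensor, unlike an nn.Parameter, does not get it automatically; the input shape is just for illustration):

>>> conv = torch.nn.Conv2d(512, 512, kernel_size=1, padding=0)
>>> testTensor = torch.randn(512, requires_grad=True)
>>> setattr(conv, 'testTensor', testTensor)
>>> x = torch.randn(1, 512, 4, 4)
>>> out = conv(x) + conv.testTensor.view(1, -1, 1, 1)  # "adding it to the bias"
>>> out.sum().backward()
>>> conv.testTensor.grad.shape  # gradients flow back to the hidden tensor
torch.Size([512])
>>> 'testTensor' in conv.state_dict()  # but it is still invisible to the Module
False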

setattr() just adds an attribute to a Python object, and that's all: you can use getattr() to get this attribute back. But being an attribute of a PyTorch module is not sufficient to make a tensor a parameter of that module.
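
For example (reusing the toy conv layer from the question), the tensor added with setattr() is reachable with getattr(), but it is invisible to the module's parameter machinery, so an optimizer built from conv.parameters() would never update it:

>>> conv = torch.nn.Conv2d(512, 512, kernel_size=1, padding=0)
>>> setattr(conv, 'testTensor', torch.randn(512))
>>> getattr(conv, 'testTensor').shape
torch.Size([512])
>>> [name for name, _ in conv.named_parameters()]
['weight', 'bias']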

To be a parameter of a PyTorch module, a tensor must be an instance of the class torch.nn.parameter.Parameter and must be in the _parameters dictionary (an OrderedDict) of this module (which is one of its attributes). This is what register_parameter(name: str, param: Optional[Parameter]) does: self._parameters[name] = param (after several checks); see the source code.
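
It is also worth noting that nn.Module overrides __setattr__, so assigning an actual nn.Parameter as an attribute is routed through registration, while a plain tensor is not. A quick sketch (the attribute names are just for illustration):

>>> conv = torch.nn.Conv2d(512, 512, kernel_size=1, padding=0)
>>> conv.plainTensor = torch.randn(512)  # plain tensor: just a Python attribute
>>> conv.realParam = torch.nn.Parameter(torch.randn(512))  # Parameter: registered
>>> conv._parameters.keys()
odict_keys(['weight', 'bias', 'realParam'])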
