When I want to add a new parameter to an nn.Module, I see two main approaches.

The first is to use the built-in register_parameter() method, in which case the added Parameter shows up in the Module’s state_dict.
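A minimal sketch of this first approach (the 512-channel shapes follow my example below; note that register_parameter expects an nn.Parameter, not a plain Tensor):

```python
import torch

conv = torch.nn.Conv2d(512, 512, kernel_size=1, padding=0)
# register_parameter requires an nn.Parameter (or None)
conv.register_parameter('testTensor', torch.nn.Parameter(torch.randn(512)))

# The new parameter is tracked exactly like weight and bias
print('testTensor' in conv._parameters)   # True
print('testTensor' in conv.state_dict())  # True
```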

The second is to use Python’s built-in setattr(), in which case the added Tensor does not show up in the Module’s state_dict. What surprised me, though, is that even though this hidden Tensor is absent from _parameters, its values are still printed when you try to look it up there. I’ll give my example below.

```
>>> import torch
>>> conv = torch.nn.Conv2d(512, 512, kernel_size = 1, padding=0)
>>> testTensor = torch.randn(512)
>>> setattr(conv, 'testTensor', testTensor)
>>> conv._parameters.keys()
odict_keys(['weight', 'bias'])
>>> conv._parameters[testTensor]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: tensor([ 1.6598e-01, -2.5838e+00, 8.0253e-01, -7.5928e-01, 1.3394e-01,
......
```

So you can see that the last command prints the values of testTensor, but only inside the KeyError message: I indexed _parameters with the Tensor object itself, and a KeyError echoes the missing key, which here happens to be the tensor. It is not actually stored in _parameters. I also tried adding this testTensor into the computation graph (just by adding it to the bias), and backpropagation worked just as usual. So the question is: what really happens when setattr() is used to add a tensor to an nn.Module, and is it truly equivalent to register_parameter()?
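For what it’s worth, the difference can be probed directly. In the sketch below (the attribute names plain and wrapped are just for illustration), a plain Tensor assigned with setattr() becomes an ordinary Python attribute, while an nn.Parameter assigned the same way is intercepted by nn.Module.__setattr__ and registered as if register_parameter() had been called:

```python
import torch

conv = torch.nn.Conv2d(512, 512, kernel_size=1, padding=0)

# Plain Tensor via setattr: stored as an ordinary Python attribute,
# invisible to _parameters, state_dict(), and any optimizer
setattr(conv, 'plain', torch.randn(512))
print('plain' in conv._parameters)        # False
print('plain' in conv.state_dict())       # False

# nn.Parameter via setattr: nn.Module.__setattr__ intercepts Parameter
# instances and registers them, so this behaves like register_parameter
setattr(conv, 'wrapped', torch.nn.Parameter(torch.randn(512)))
print('wrapped' in conv._parameters)      # True
print('wrapped' in conv.state_dict())     # True
```

So the two approaches coincide only when the value being assigned is an nn.Parameter; for a plain Tensor, setattr() keeps it out of the Module’s bookkeeping entirely, even if it still participates in autograd when used in the forward pass.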