nn.Parameter vs nn.Module.register_parameter

According to the documentation, for nn.Parameter:

when they're assigned as Module attributes they are automatically added to the list of its parameters, and will appear e.g. in parameters() iterator

and for nn.Module.register_parameter:

Adds a parameter to the module.

Since nn.Parameter adds the tensor to the module's parameters automatically when it is assigned as an attribute, why do we need the register_parameter function?
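
For concreteness, here is a minimal sketch of the behavior the docs describe (the class and attribute names are made up): assigning an nn.Parameter as a module attribute registers it automatically, while a plain tensor attribute is not registered.

import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        # An nn.Parameter assigned as an attribute is registered automatically
        self.w = nn.Parameter(torch.randn(3))
        # A plain tensor attribute is NOT registered and won't appear in parameters()
        self.b = torch.zeros(3)

m = Toy()
print([name for name, _ in m.named_parameters()])  # prints ['w']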


I guess it is because there are cases where you need to add parameters to a given module after it has been constructed. Here is some example code. (The code below is just an example, but I did this kind of thing when I implemented a spectral normalization layer.)

import torch
import torch.nn as nn

class Test(nn.Module):
    def __init__(self, module):
        super(Test, self).__init__()
        self.module = module
        self.register_param()
    
    def register_param(self):
        exist_w = hasattr(self.module, 'w')
        if not exist_w:
            w = nn.Parameter(torch.ones(1))
            # register_parameter takes a name and a parameter:
            # register 'w' on the wrapped module
            self.module.register_parameter('w', w)

    def forward(self, x):
        return x

conv = nn.Conv2d(3, 3, kernel_size=3)
conv_w = Test(conv)
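
As a quick check, the dynamically registered parameter should now show up through the wrapper's named_parameters(), e.g.:

print('module.w' in dict(conv_w.named_parameters()))  # True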

nn.Module.register_parameter(name, param) takes a Parameter (or None), but it first checks whether the name already exists in the module's dictionary, whereas assigning an nn.Parameter directly as an attribute doesn't perform such a check.
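
To illustrate (the module and parameter names below are made up): register_parameter takes the parameter name as a string, which is handy when the name is only known at runtime, and it also accepts None as a placeholder; registering a name that already exists as an ordinary attribute raises a KeyError.

import torch
import torch.nn as nn

class Dynamic(nn.Module):
    def __init__(self, names):
        super().__init__()
        # Parameter names known only at runtime are registered from strings
        for name in names:
            self.register_parameter(name, nn.Parameter(torch.zeros(1)))
        # None is accepted as a placeholder for an optional parameter
        self.register_parameter('bias', None)

m = Dynamic(['scale', 'shift'])
print([name for name, _ in m.named_parameters()])  # prints ['scale', 'shift']
# m.register_parameter('forward', nn.Parameter(torch.zeros(1))) would raise a
# KeyError because 'forward' already exists as an attribute of the module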


Hey, thank you for that answer. In this example, what type does self.module have? Is it an nn.Module?