[SOLVED] Register_parameter vs register_buffer vs nn.Parameter

I was looking at the code of BatchNorm:

    def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True,
                 track_running_stats=True):
        super(_BatchNorm, self).__init__()
        self.num_features = num_features
        self.eps = eps
        self.momentum = momentum
        self.affine = affine
        self.track_running_stats = track_running_stats
        if self.affine:
            self.weight = Parameter(torch.Tensor(num_features))
            self.bias = Parameter(torch.Tensor(num_features))
        else:
            self.register_parameter('weight', None)
            self.register_parameter('bias', None)
        if self.track_running_stats:
            self.register_buffer('running_mean', torch.zeros(num_features))
            self.register_buffer('running_var', torch.ones(num_features))
            self.register_buffer('num_batches_tracked', torch.tensor(0, dtype=torch.long))
        else:
            self.register_parameter('running_mean', None)
            self.register_parameter('running_var', None)
            self.register_parameter('num_batches_tracked', None)
        self.reset_parameters()

and I don’t really understand when to use register_buffer / register_parameter vs nn.Parameter.

By doing some tests:

    a = torch.nn.BatchNorm2d(100)

    a.register_parameter('test', None)

    a
    Out[34]: BatchNorm2d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

    a.test

    a.test2 = torch.nn.parameter.Parameter(requires_grad=False)

    a.test2
    Out[37]:
    Parameter containing:
    tensor([])

The behavior is different: in the case of a registered parameter, nothing is returned when the value is None.
register_parameter can only register a Parameter or None, so why is it used?

With respect to register_buffer, the docs just say it is used when you want to register something that is not a parameter. So I assume it does not compute gradients. Is there any difference between register_buffer and a parameter with requires_grad=False?

In the code above, why do they register a buffer if self.track_running_stats is True, but register a parameter if it is False?
I checked, and register_buffer can also register None.

3 Likes

Hi,

You have two kinds of Tensors that a Module may want to hold on to:

  • Some that are learnable. These are represented as nn.Parameter and should be registered with mod.register_parameter("name", value), where value can be either None or an nn.Parameter.
  • Some that are not learnable. These are represented as regular torch.Tensors and should be registered with mod.register_buffer("name", value), where value can be either None or a torch.Tensor.

Note that for convenience, when you do mod.name = something and something is an nn.Parameter, register_parameter() will be called automatically. EDITED: this is not true for plain Tensors.
As you can see, the only way to set a parameter or a buffer to None is to call the corresponding method directly; otherwise, you can use natural assignment.
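
A minimal sketch of how the two behave in practice (the module and attribute names here are just illustrative, not taken from BatchNorm):

    import torch
    from torch import nn

    class Stats(nn.Module):
        def __init__(self):
            super().__init__()
            # learnable: returned by .parameters(), receives gradients
            self.scale = nn.Parameter(torch.ones(3))
            # not learnable: moves with .to()/.cuda() and is saved in the
            # state_dict, but is never returned by .parameters()
            self.register_buffer('running_mean', torch.zeros(3))
            # explicitly registered as absent
            self.register_parameter('bias', None)

    m = Stats()
    print(dict(m.named_parameters()).keys())  # dict_keys(['scale'])
    print(dict(m.named_buffers()).keys())     # dict_keys(['running_mean'])
    print(m.state_dict().keys())              # contains 'scale' and 'running_mean'

As for the requires_grad=False question above: a Parameter with requires_grad=False is still returned by .parameters() (so an optimizer built from model.parameters() still sees it), while a buffer never is.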

14 Likes
        if self.track_running_stats:
            self.register_buffer('running_mean', torch.zeros(num_features))
            self.register_buffer('running_var', torch.ones(num_features))
            self.register_buffer('num_batches_tracked', torch.tensor(0, dtype=torch.long))
        else:
            self.register_parameter('running_mean', None)
            self.register_parameter('running_var', None)
            self.register_parameter('num_batches_tracked', None)

@albanD, a small nitpick: why are running_mean, running_var and num_batches_tracked registered as parameters in the track_running_stats=False case? I understand that this has no effect, but it looks a bit strange…

1 Like

@albanD, thank you for that explanation.

I verified that this works when assigning an nn.Parameter directly instead of calling register_parameter.

But it doesn’t seem to be the case that assigning a plain torch.Tensor (as claimed in the reply above) automatically triggers a call to register_buffer.

e.g. inside:

    class MyModule(nn.Module):
        def __init__(self):
            super().__init__()
            self.something = torch.zeros(1)

doesn’t add something to self._buffers

It’s empty if I dump the keys, say during forward.

print("keys", self._buffers.keys())

if I use:

    self.register_buffer('something', torch.zeros(1))

it does end up in self._buffers.

Tested with PyTorch 1.0.0.dev20190405.
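
A minimal self-contained version of that check (class names here are just for illustration):

    import torch
    from torch import nn

    class WithAttr(nn.Module):
        def __init__(self):
            super().__init__()
            self.something = torch.zeros(1)   # plain attribute assignment

    class WithBuffer(nn.Module):
        def __init__(self):
            super().__init__()
            self.register_buffer('something', torch.zeros(1))

    print(dict(WithAttr().named_buffers()))    # {}  -- nothing registered
    print(dict(WithBuffer().named_buffers()))  # {'something': tensor([0.])}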

Also, is there a magic repr-like way to get PyTorch to show all the registered buffers in a given layer, or in all layers, other than doing it manually?

Thanks.

1 Like

As far as I know, setting a tensor as an attribute won’t register it as a buffer; you would need to call self.register_buffer directly instead.

You can call print(list(model.buffers())) or print(dict(model.named_buffers())) to get all buffers (adapted from the .parameters()/.named_parameters() methods).

2 Likes

Thank you for validating that - hope @albanD can then edit his “accepted” solution to fix that.

You can call print(list(model.buffers())) or print(dict(model.named_buffers())) to get all buffers (adapted from the .parameters() / .named_parameters() methods).

Awesome, thank you, @ptrblck!

model.named_buffers is what I was after; model.buffers isn’t useful for debugging since it’s unnamed data. In particular, this gives me a listing of all registered buffers w/o their data:

    print(dict(learn.model.named_buffers()).keys())

So after doing self.register_buffer('mybuf1', tensor(0.)), the above printout gives me:

    dict_keys(['0.2.mybuf1', '1.2.mybuf1', '2.2.mybuf1'])

where 0.2 corresponds to group 0 layer 2, 1.2 group 1 layer 2, etc. Cool!
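
For anyone else reading: a small sketch (the module layout is just illustrative, not the actual learn.model) of how the dotted names follow the module hierarchy:

    import torch
    from torch import nn

    class Block(nn.Module):
        def __init__(self):
            super().__init__()
            self.register_buffer('mybuf1', torch.tensor(0.))

    # three "groups", each with the buffer-holding block at index 2
    model = nn.Sequential(
        *[nn.Sequential(nn.Linear(4, 4), nn.ReLU(), Block()) for _ in range(3)]
    )

    print(dict(model.named_buffers()).keys())
    # dict_keys(['0.2.mybuf1', '1.2.mybuf1', '2.2.mybuf1'])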

1 Like