How to properly Register a Buffer

I have recently worked with detectron2 and noticed that the developers use nn.Module._buffers explicitly, so I wanted to check whether the buffers() method works in a similar way.

So here’s the experiment:

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.batch_norm = nn.BatchNorm1d(num_features=1)
        self.register_buffer('new_buf', torch.rand(1))

    def forward(self, x):
        return x + 10

m = MyModule()
print(m._buffers.values(), m._buffers.keys())
print([el for el in m.buffers()])

This code produces the following:

odict_values([tensor([0.0600])]) odict_keys(['new_buf'])
[tensor([0.0600]), tensor([0.]), tensor([1.]), tensor(0)]

Why is there no mention of the BatchNorm buffers in the first case? We see only the buffer added by registration, yet in the second print the BatchNorm buffers show up alongside it.
Correct me if I am wrong: “One should use nn.Module.buffers() to see all buffers, and ._buffers.values() to see just the user-defined ones (user defined == registered via .register_buffer() on this very module)”
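To make the distinction concrete, here is a small self-contained sketch (the module and buffer names mirror the experiment above): _buffers on a module holds only the entries registered directly on that module, while named_buffers() walks submodules recursively and prefixes each buffer with its submodule's name.

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.batch_norm = nn.BatchNorm1d(num_features=1)
        self.register_buffer('new_buf', torch.rand(1))

m = MyModule()

# _buffers holds only buffers registered directly on this module
print(list(m._buffers.keys()))  # ['new_buf']

# named_buffers() recurses into submodules and prefixes their names
print([name for name, _ in m.named_buffers()])
# ['new_buf', 'batch_norm.running_mean',
#  'batch_norm.running_var', 'batch_norm.num_batches_tracked']
```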

The nn.BatchNorm1d buffers are registered on the batch_norm submodule, not on MyModule itself, and you can access them via:

print(m.batch_norm._buffers)
However, note that _buffers is an internal attribute (exactly for this reason of confusion :wink: ) and you should stick to buffers() (or named_buffers()) to get all buffers recursively.
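As a quick aside on why registering matters at all, here is a minimal sketch (the WithBuffer class and its attribute names are made up for illustration): a tensor stored via register_buffer ends up in state_dict() and is moved by .to()/.cuda(), while a plain tensor attribute is invisible to both.

```python
import torch
import torch.nn as nn

class WithBuffer(nn.Module):
    def __init__(self):
        super().__init__()
        # registered: part of state_dict(), moved by .to()/.cuda()
        self.register_buffer('scale', torch.ones(1))
        # plain attribute: not in state_dict(), not moved by .to()
        self.plain = torch.ones(1)

m = WithBuffer()
print(list(m.state_dict().keys()))        # ['scale']
print([n for n, _ in m.named_buffers()])  # ['scale']
```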
