Setting module buffers without using register_buffer?

The typical way to assign buffers to a module is, of course, register_buffer:

foo = torch.rand(3, 4)
self.register_buffer('buffer', foo)

and the buffer can then be accessed as self.buffer elsewhere in the module class.
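For reference, the full pattern looks something like this (module and buffer names here are just placeholders for illustration):

import torch
import torch.nn as nn

class Normalizer(nn.Module):
    def __init__(self):
        super().__init__()
        # registered buffers are saved in the state_dict and moved by .to()/.cuda(),
        # but they are not returned by .parameters(), so the optimizer ignores them
        self.register_buffer('running_mean', torch.zeros(4))

    def forward(self, x):
        # the buffer is accessed like any other attribute
        return x - self.running_mean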

My question is: is it possible to first perform an explicit assignment (self.buffer = ...) and then somehow “mark” that attribute as a buffer? While this doesn’t affect runtime behavior, I rely heavily on my IDE’s hinting / auto-complete features, and the IDE cannot detect attributes unless they are assigned somewhere via an explicit assignment statement (preferably in __init__).

It seems that parameters / modules can be registered by assignment, instead of using the dedicated methods register_parameter and add_module. I wonder if the same can be done for buffers.
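For example, for parameters and submodules a plain assignment is enough (a small sketch, names are arbitrary):

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # plain assignments are intercepted by nn.Module.__setattr__ and registered
        self.weight = nn.Parameter(torch.randn(3, 4))  # appears in .parameters()
        self.linear = nn.Linear(4, 4)                  # appears in .children()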

It’s not possible to directly register a buffer via a plain assignment, but you could call self.register_buffer afterwards as seen here:

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # plain attribute assignment first (visible to the IDE) ...
        self.my_buffer = torch.randn(1)
        # ... then register the same tensor as a buffer under a separate name
        self.register_buffer('buffer', self.my_buffer)

model = MyModel()
print(dict(model.named_buffers()))
# > {'buffer': tensor([-0.3787])}

Would this work for you?
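If the main goal is IDE completion, another option you could try is a class-level type annotation combined with register_buffer under the same name (just a sketch; whether the hint is picked up depends on your IDE’s static analysis):

import torch
import torch.nn as nn

class MyModel(nn.Module):
    # annotation only, no assignment: the IDE sees the attribute,
    # but no instance attribute is actually created here
    my_buffer: torch.Tensor

    def __init__(self):
        super().__init__()
        # since the annotation alone creates no attribute, registering
        # under the same name does not raise "attribute already exists"
        self.register_buffer('my_buffer', torch.randn(1))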
