How to get all registered buffers created by self.register_buffer()

Hi, we can get all tensors that require gradients via the self.parameters() method, but how can we get all the non-gradient tensors created by the self.register_buffer() method?

You could filter the state_dict, which holds all buffers and parameters.

dict(filter(lambda v: not v[1].requires_grad, model.state_dict().items()))

An nn.Parameter could also have requires_grad=False, so filtering by requires_grad may not work, I guess?

Yeah, you are right. Besides that, it seems all parameters stored in state_dict have requires_grad=False by default.
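You can check this quickly with a toy module (nn.Linear here is just an example):

import torch
import torch.nn as nn

model = nn.Linear(2, 2)
# state_dict() detaches the tensors it returns, so even trainable
# parameters report requires_grad=False here
for name, tensor in model.state_dict().items():
    print(name, tensor.requires_grad)  # prints False for weight and bias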

You could try to access the internal buffer dict, but it’s not recommended, since it’s an internal attribute:

model._buffers
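For illustration (this relies on an internal attribute, so its layout may change between versions; Net is just a toy module):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer('running_stat', torch.zeros(3))

net = Net()
# _buffers is an internal OrderedDict mapping buffer names to tensors
# (only this module's own buffers, not those of its submodules)
for name, buf in net._buffers.items():
    print(name, buf)  # running_stat tensor([0., 0., 0.])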

Could you explain a bit about your use case, i.e. why you need all buffers without their specific name?

In some cases, I need a list of buffers, e.g. self.params = [tensor() for i in range(n)]. Since the buffers form a list whose length is determined by a run-time parameter n, I prefer to access them as self.params[i] instead of giving each tensor its own name.
@ptrblck


Same question. Is there a canonical way to register a list of buffers (not just a single buffer)?

I currently do this:

# inside the __init__ of my nn.Module
self.params = []

for ii in range(3):
    pname = 'param_{}'.format(ii)
    self.register_buffer(pname, torch.tensor(9))
    self.params.append(getattr(self, pname))

Also, I use register_buffer to get the automatic conversions when model.cuda() is called, but I don’t really care whether the buffers end up in the state_dict.
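As far as I can tell, model.cuda() replaces the tensors stored in the module’s buffer dict, so a plain Python list built in __init__ can end up pointing at the old CPU tensors. Re-collecting through getattr on each access avoids that. A minimal sketch of the same pattern (Net, the param_{} names, and n are just illustrative):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, n=3):
        super().__init__()
        self.n = n
        for ii in range(n):
            self.register_buffer('param_{}'.format(ii), torch.tensor(9))

    @property
    def params(self):
        # rebuild the list on every access, so it stays valid even
        # after model.cuda() / model.to(device) swap out the buffers
        return [getattr(self, 'param_{}'.format(ii)) for ii in range(self.n)]

net = Net()
print(net.params[1])  # tensor(9)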

Hey guys!

I think what you are looking for is now available as Module.buffers(). It returns an iterator over the registered buffers.
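For example, with a module that already registers buffers (BatchNorm keeps running_mean and running_var as buffers); there is also Module.named_buffers() if you need the names as well:

import torch.nn as nn

bn = nn.BatchNorm1d(4)
for buf in bn.buffers():
    print(buf.shape)

for name, buf in bn.named_buffers():
    print(name, buf.shape)  # e.g. running_mean torch.Size([4])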


Good news. Is there any support for dicts? I want to store a dict of buffers, so that I can access the data by key.
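I’m not aware of a dict variant of register_buffer, but you can approximate one by registering each value under a name derived from its key and wrapping the lookup. A minimal sketch (BufferDict is a hypothetical helper, not a torch.nn class):

import torch
import torch.nn as nn

class BufferDict(nn.Module):
    # hypothetical helper: dict-style access to registered buffers
    def __init__(self, tensors):
        super().__init__()
        self._keys = list(tensors)
        for key, tensor in tensors.items():
            # one registered buffer per key; name derived from the key
            self.register_buffer('buf_{}'.format(key), tensor)

    def __getitem__(self, key):
        return getattr(self, 'buf_{}'.format(key))

    def keys(self):
        return list(self._keys)

bd = BufferDict({'mean': torch.zeros(3), 'scale': torch.ones(3)})
print(bd['mean'])  # tensor([0., 0., 0.])
# bd.cuda() / bd.to(device) move the underlying buffers as usual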