torch.as_tensor vs register_buffer in nn.Module

  1. Since buffers are serialized in the state_dict, they are also restored via model.load_state_dict. A plain tensor attribute is not part of the state_dict, so after loading you would end up with a newly initialized tensor instead of the saved values.
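
  A minimal sketch of this difference (the Scaler module and its attribute names are made up for illustration):

  ```python
  import torch
  import torch.nn as nn

  class Scaler(nn.Module):
      def __init__(self):
          super().__init__()
          # Registered buffer: included in state_dict and restored on load
          self.register_buffer("scale", torch.tensor(2.0))
          # Plain attribute: NOT part of state_dict, re-created on init
          self.plain = torch.tensor(2.0)

      def forward(self, x):
          return x * self.scale

  model = Scaler()
  model.scale.fill_(5.0)           # mutate the buffer after creation
  state = model.state_dict()       # "scale" is serialized, "plain" is not

  restored = Scaler()
  restored.load_state_dict(state)  # buffer value 5.0 is restored
  print(restored.scale)            # the mutated value survives the round trip
  print(restored.plain)            # still the freshly initialized 2.0
  ```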

  2. That’s possible, but not convenient, e.g. if you are using nn.DataParallel, as the model will be replicated onto the specified devices. Hard-coding a device inside your model won’t work, so you would end up writing utility functions or pushing the tensor to the right device inside forward. Registered buffers, in contrast, are moved automatically by model.to(device) along with the parameters.
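
  A short sketch of the automatic device handling (the Norm module is hypothetical; the example falls back to CPU when no GPU is available):

  ```python
  import torch
  import torch.nn as nn

  class Norm(nn.Module):
      def __init__(self):
          super().__init__()
          # The buffer travels with the module, no manual device bookkeeping
          self.register_buffer("mean", torch.zeros(3))

      def forward(self, x):
          # mean is guaranteed to be on the same device as the module
          return x - self.mean

  model = Norm()
  device = "cuda" if torch.cuda.is_available() else "cpu"
  model.to(device)                 # moves parameters AND buffers
  print(model.mean.device)         # matches the module's device

  x = torch.ones(3, device=device)
  out = model(x)                   # no device-mismatch error in forward
  ```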