Torch.as_tensor vs register_buffer in nn.Module

It would be nice to have a register_buffer option that skips the serialization part. For example, I have some intermediate tensors that are constant, so there is no point in storing them in the state_dict, but they would still benefit from being pushed to the correct device automatically by .cuda() / .to(). A sketch of the usage I have in mind is below.
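
A minimal sketch of the intended behavior. Note that register_buffer does accept a persistent=False flag in PyTorch 1.6 and later, which gives exactly this: the buffer follows the module's device but is left out of the state_dict. The module and buffer names here are just placeholders for illustration.

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        # Constant intermediate tensor: should move with .cuda()/.to(device),
        # but there is no point in serializing it into the state_dict.
        # persistent=False (PyTorch >= 1.6) keeps it out of state_dict.
        self.register_buffer("scale", torch.ones(4), persistent=False)

    def forward(self, x):
        return self.linear(x) * self.scale

device = "cuda" if torch.cuda.is_available() else "cpu"
m = MyModule().to(device)

print(m.scale.device)             # follows the module's device
print("scale" in m.state_dict())  # False: the buffer is not serialized
```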
