Does torch.unbind() create a tuple of views or copies of the original tensor, i.e. is new memory allocated?
What about when applied to shared-memory tensors?
Is there a way to re-bind them?
Based on this code snippet it’s a view — zeroing one of the unbound slices mutates the original tensor in place:
import torch

x = torch.randn(3, 10)
y = torch.unbind(x, 0)
y[0].zero_()  # zeros the first row of x, since y[0] shares x's storage
print(x)
# tensor([[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
# 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
# [ 1.1271e+00, -1.6113e-01, -9.3410e-01, 1.2211e-01, 8.4843e-01,
# 4.7476e-01, -8.4909e-01, -1.1142e-01, -4.4881e-01, -3.7674e-01],
# [-1.5271e+00, -1.0888e+00, -1.3062e-03, -2.3746e-01, -1.6353e+00,
# 1.0766e+00, 4.8345e-01, -6.0865e-01, 8.8547e-02, 3.6673e-01]])
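The other two questions can be checked directly. A small sketch (assuming a standard PyTorch install): comparing data_ptr() confirms the slices alias the original storage, and this still holds after share_memory_(). For "re-binding", torch.stack reassembles the slices into one tensor, but note it allocates new memory — it is a copy, not a view.

```python
import torch

x = torch.randn(3, 10)
views = torch.unbind(x, 0)

# Each slice shares storage with x -- no new memory is allocated.
assert views[0].data_ptr() == x.data_ptr()

# The same holds for shared-memory tensors: unbind still returns views,
# and those views point into the shared storage.
x.share_memory_()
shared_views = torch.unbind(x, 0)
assert x.is_shared()
assert shared_views[0].data_ptr() == x.data_ptr()

# "Re-binding": torch.stack rebuilds a tensor from the slices,
# but the result is a fresh allocation, not an alias of x.
rebound = torch.stack(shared_views, 0)
assert torch.equal(rebound, x)
assert rebound.data_ptr() != x.data_ptr()
print("all checks passed")
```

If aliasing must be preserved rather than copied, keeping a reference to the original tensor is simpler than trying to reassemble the views.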