Hi,
The following code
import torch
from torch.autograd import Variable
a = Variable(torch.zeros(10)).cuda()  # .cuda() returns a new Variable on the GPU
b = a                                 # b refers to the same tensor as a
c = a.clone()                         # c is a copy with its own storage
a[0] = 1                              # in-place assignment
print(a)
print(b)
print(c)
produces
tensor([ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0')
tensor([ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0')
tensor([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0')
which is expected, since a and b share the same memory: an in-place update to a also updates b, while c is an independent copy and stays at zero.
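For reference, one way to verify the aliasing is to compare storage addresses with data_ptr(). A minimal sketch, assuming PyTorch 0.4 or later (where Variable and Tensor are merged) and an available CUDA device:

import torch

a = torch.zeros(10).cuda()
b = a
c = a.clone()
print(a.data_ptr() == b.data_ptr())  # True: a and b share the same storage
print(a.data_ptr() == c.data_ptr())  # False: clone() allocated new storage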
However,
import torch
from torch.autograd import Variable
a = Variable(torch.ones(10)).cuda()
b = a                                 # b again refers to the same tensor as a
c = a.clone()
a = a * 10                            # out-of-place multiply, assigned back to a
print(a)
print(b)
print(c)
gives me
tensor([ 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.], device='cuda:0')
tensor([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], device='cuda:0')
tensor([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], device='cuda:0')
In this case, b is unchanged even though b = a. Why does the update to a not propagate to b here?
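In case it is useful, comparing storage addresses with data_ptr() again (same assumptions as the sketch above) shows that a stops sharing storage with b after the multiplication:

import torch

a = torch.ones(10).cuda()
b = a
print(a.data_ptr() == b.data_ptr())  # True: same storage so far
a = a * 10
print(a.data_ptr() == b.data_ptr())  # False: a no longer shares storage with b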
Thanks