I was trying to create a new copy of a tensor (so that Python points at a different object, instead of updating the same object across multiple namespaces). I tried it by getting the underlying numpy data from the tensor and using that to create a brand-new tensor, but as the transcript below shows, filling the "copy" still changed the original. Why did it not work? How does one make a true copy?
>>> y
3 3
3 3
3 3
[torch.FloatTensor of size 3x2]
>>> yv.data = torch.FloatTensor( y.numpy() )
>>> yv
Variable containing:
3 3
3 3
3 3
[torch.FloatTensor of size 3x2]
>>> y
3 3
3 3
3 3
[torch.FloatTensor of size 3x2]
>>> yv.data.fill_(5)
5 5
5 5
5 5
[torch.FloatTensor of size 3x2]
>>> yv
Variable containing:
5 5
5 5
5 5
[torch.FloatTensor of size 3x2]
>>> y
5 5
5 5
5 5
[torch.FloatTensor of size 3x2]
You should also detach the tensor from the computational graph if it requires grad; otherwise gradients will be calculated for both tensors, which may lead to OOM issues.
x = torch.tensor([1., 2., 3.], requires_grad=True)
y = x.detach().clone()  # detach cuts the graph; clone copies the storage
y.requires_grad = True  # re-enable grad tracking on the independent copy
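To check that this really produces an independent tensor, a quick sketch: mutate the copy, then backprop through it, and confirm the original is untouched and receives no gradient (the values below are assumptions for illustration):

```python
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = x.detach().clone().requires_grad_(True)  # independent leaf tensor

with torch.no_grad():   # in-place edit on a leaf that requires grad
    y.fill_(7.0)
print(x)                # x is unaffected by the fill

y.sum().backward()
print(y.grad)           # gradients flow to y only
print(x.grad)           # None: detach() cut the graph between x and y
```

Because `detach()` severed the graph and `clone()` copied the storage, neither the in-place fill nor the backward pass touches `x`.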