Suppose I do this:
x = Variable(torch.rand(3,5))
y = torch.Tensor(torch.rand(3,5))
print(x + y)
It gives an error. I can use Variable.data if I want the
output to be of Tensor type, but I want the sum to be a
Variable. How can I do this?
- This example is trivial, but when both x and y are matrices things get complicated (e.g. doing an operation between a Variable and a Tensor that has come through register_buffer).
- I am new to PyTorch, so is there a good reference that explains all these things?
Why not wrap your y into a Variable?
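In PyTorch versions before 0.4, Variables and plain Tensors were distinct types that could not be mixed in one operation; a minimal sketch of the wrapping suggested above (on 0.4+ Variable is a deprecated alias for Tensor, so this also runs without the wrap):

```python
import torch
from torch.autograd import Variable  # deprecated alias since PyTorch 0.4

x = Variable(torch.rand(3, 5))
y = torch.rand(3, 5)  # a plain Tensor

# Wrapping y in a Variable makes both operands the same type,
# so the sum z is itself a Variable.
z = x + Variable(y)
print(z.shape)
```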
Thanks. Do I have to wrap y every time I use it?
Also, doesn't this require the gradient with respect to y to be calculated during
backward? Since y is a tensor (like running_var in BatchNorm), I don't want the gradient to flow through it.
The requires_grad attribute of a Variable is
False by default. This means that the gradient will not flow through it:
In : y = torch.autograd.Variable(torch.Tensor(4))
In : y.requires_grad
Out: False
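A small check of the behavior described above (variable names here are illustrative): a wrapped tensor with the default requires_grad=False receives no gradient during backward, while one created with requires_grad=True does.

```python
import torch
from torch.autograd import Variable  # deprecated alias since PyTorch 0.4

x = Variable(torch.rand(3, 5), requires_grad=True)
y = Variable(torch.rand(3, 5))  # requires_grad defaults to False

loss = (x + y).sum()
loss.backward()

print(x.grad is not None)  # a gradient was computed for x
print(y.grad is None)      # no gradient flowed into y
```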
I also have a question about this. Why is it designed like this?