Confusion with Variable and tensor.requires_grad

Hello everyone,

I’m a little confused about the correct use of Variable vs. tensor.requires_grad=True for an intermediate tensor created between a trainable layer and the loss function. For example:

import torch
import torch.nn as nn
from torch.autograd import Variable


x = torch.rand((5, 10))
l1 = nn.Linear(10, 10)

y = l1(x)
first_two = Variable(torch.zeros((5, 2)))  # just get the first 2 values of each example of y (just an example)
# The intermediate tensor "first_two" should be initialized as Variable(torch.zeros((5, 2))) or
# Variable(torch.zeros((5, 2)), requires_grad=True) or just torch.zeros((5, 2), requires_grad=True)?
for i in range(y.size(0)):
    first_two[i] = y[i, :2]
loss = dummy_loss(first_two)  # dummy_loss stands in for some loss function
loss.backward()

I’m asking to make sure backpropagation flows correctly back to the trainable Linear layer.
Thank you very much in advance.

Variables are deprecated since PyTorch 0.4, so you should use plain tensors in newer versions.
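The direct replacement for the Variable wrapper is just the tensor constructor itself; a minimal sketch (the buffer names are only illustrative):

buf = torch.zeros(5, 2)                       # was: Variable(torch.zeros((5, 2)))
leaf = torch.zeros(5, 2, requires_grad=True)  # was: Variable(torch.zeros((5, 2)), requires_grad=True)
# Note: assigning into 'leaf' in-place (leaf[i] = ...) would raise an error,
# since in-place writes are not allowed on leaf tensors that require grad.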

That being said, I would just slice y directly and pass it to the loss function:

y = l1(x)
first_two = y[:, :2]
loss = dummy_loss(first_two)
loss.backward()
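If you want to double-check that backprop actually reaches the Linear layer, you can inspect its parameter gradients after the backward pass. A minimal sketch, using a plain sum() as a stand-in for dummy_loss:

import torch
import torch.nn as nn

x = torch.rand(5, 10)
l1 = nn.Linear(10, 10)

y = l1(x)
first_two = y[:, :2]      # slicing keeps first_two attached to the autograd graph
loss = first_two.sum()    # stand-in for dummy_loss
loss.backward()

print(l1.weight.grad is not None)       # True -> gradients reached the Linear layer
print(l1.weight.grad[:2].abs().sum())   # non-zero: only the first two output units feed the loss
print(l1.weight.grad[2:].abs().sum())   # zero: the remaining output units were never used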

I see, I’ll forget about Variables then.

Thanks a lot