Hello everyone,
I’m a little confused about the correct use of Variable or tensor.requires_grad=True for an intermediate Tensor created between a trainable layer and the loss function. For example:
import torch
import torch.nn as nn
from torch.autograd import Variable
x = torch.rand((5, 10))
l1 = nn.Linear(10, 10)
y = l1(x)
first_two = Variable(torch.zeros((5, 2)))  # just take the first 2 values of each example of y (just an example)
# Should the intermediate tensor "first_two" be initialized as Variable(torch.zeros((5, 2))),
# as Variable(torch.zeros((5, 2)), requires_grad=True), or simply as torch.zeros((5, 2), requires_grad=True)?
for i in range(y.size(0)):
    first_two[i] = y[i, :2]
loss = dummy_loss(first_two)  # dummy_loss is a placeholder for an arbitrary loss function
loss.backward()
I’m asking because I want to make sure backpropagation reaches the trainable Linear layer correctly.
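For reference, this is the minimal self-contained version I’ve been using to check whether gradients actually reach the Linear layer. Here dummy_loss is just a stand-in that sums everything (not my real loss), and I dropped the Variable wrapper as one of the options I’m considering:

import torch
import torch.nn as nn

def dummy_loss(t):
    # stand-in loss: just sum all entries
    return t.sum()

x = torch.rand((5, 10))
l1 = nn.Linear(10, 10)
y = l1(x)

first_two = torch.zeros((5, 2))  # plain tensor, no requires_grad
for i in range(y.size(0)):
    first_two[i] = y[i, :2]

loss = dummy_loss(first_two)
loss.backward()

# If gradients flowed back through first_two, the layer's weight.grad
# should exist and be non-zero in its first two rows.
print(l1.weight.grad is not None)
print(l1.weight.grad.abs().sum().item())

If the printed gradient sum comes out non-zero, does that already confirm the backward pass through first_two is correct, or is one of the requires_grad=True variants still needed?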
Thank you very much in advance.