Seeing the gradients w.r.t. a subset of a tensor

I tried out the following script to see if I can get the gradients of parts of a tensor:

import torch

x = torch.tensor([1,2,3]).float()
x.requires_grad_(True)
a = x[0]
b = x[1]
c = x[2]
loss = x.dot(torch.tensor([1.,2.,3.]).float())
for t in [a,b,c]:
  print(t.grad)
  pass
loss.backward()
for t in [a,b,c]:
  print(t.grad)
  pass
for t in x:
  print(t.grad)
  pass
print(x.grad)

I am getting the following output:

None
None
None
None
None
None
None
None
None
tensor([1., 2., 3.])
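
The only grad that actually shows up is the one on the leaf tensor x itself; indexing into x.grad after the backward call does give per-element values (a quick check, using the same script as above):

print(x.grad[0])   # tensor(1.)
print(x.grad[1])   # tensor(2.)
print(x.grad[2])   # tensor(3.)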

Basically, I am not able to refer to parts of the tensor to see their grads. I tried a different experiment: first refer to parts of x as individual tensors, and then set requires_grad=True on those tensors. There I end up with an error:

x = torch.tensor([1,2,3]).float()
a = x[0]
b = x[1]
c = x[2]

for t in [a,b,c]:
  t.requires_grad_(True)
  pass
loss = x.sum()
for t in [a,b,c]:
  print(t.grad)
  pass
loss.backward()
for t in [a,b,c]:
  print(t.grad)
  pass

I get the output:

None
None
None


RuntimeError                              Traceback (most recent call last)
<ipython-input-...> in <module>()
      6   print(t.grad)
      7   pass
----> 8 loss.backward()
      9 for t in [a,b,c]:
     10   print(t.grad)

/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
     91                 products. Defaults to False.
     92         """
---> 93         torch.autograd.backward(self, gradient, retain_graph, create_graph)
     94
     95     def register_hook(self, hook):

/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     88     Variable._execution_engine.run_backward(
     89         tensors, grad_tensors, retain_graph, create_graph,
---> 90         allow_unreachable=True)  # allow_unreachable flag
     91
     92

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I think you are looking for retain_grad(), i.e. for t in [a,b,c]: t.retain_grad().
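
Note that retain_grad() only fills in .grad for tensors that actually take part in computing the loss, so the slices themselves need to be used when building it. A minimal sketch (the weight 10. here is made up just for illustration):

import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
a = x[0]                 # non-leaf tensor produced by slicing x
a.retain_grad()          # ask autograd to keep a.grad around after backward

loss = 10. * a + x[1] + x[2]   # a is part of the graph of loss
loss.backward()

print(a.grad)            # tensor(10.)
print(x.grad)            # tensor([10., 1., 1.])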

Best regards

Thomas

Getting this now:

x = torch.tensor([1,2,3]).float()
x.requires_grad_(True)
a = x[0]
b = x[1]
c = x[2]
for t in [a,b,c]:
  t.retain_grad()
  pass

loss = x.dot(torch.tensor([1.,2.,3.]).float())

print('of a,b,c ')
print('before backward')
for t in [a,b,c]:
  print(t.grad)
  pass
loss.backward()

print('of a,b,c ')
print('after backward')
for t in [a,b,c]:
  print(t.grad)
  pass
print('per element of x')
for t in x:
  print(t.grad)
  pass
print(x.grad)

of a,b,c
before backward
None
None
None
of a,b,c
after backward
None
None
None
per element of x
None
None
None
tensor([1., 2., 3.])
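
For what it's worth, when I build the loss from the slices themselves rather than from x, their grads do show up after retain_grad(), so I assume a, b, c have to lie on the path to the loss for this to work (a quick check reusing the weights from the dot product above):

x = torch.tensor([1., 2., 3.], requires_grad=True)
a, b, c = x[0], x[1], x[2]
for t in [a, b, c]:
  t.retain_grad()

# same weights as the dot product, but applied to the slices directly
loss = 1. * a + 2. * b + 3. * c
loss.backward()

for t in [a, b, c]:
  print(t.grad)          # prints tensor(1.), tensor(2.), tensor(3.)
print(x.grad)            # tensor([1., 2., 3.])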