# Seeing the gradients w.r.t. subset of a tensor

I tried out the following script, to see if I can get the gradients of parts of a tensor:

```python
import torch

x = torch.tensor([1,2,3]).float()
x.requires_grad = True  # needed for the output below: backward() succeeds and x.grad is populated
a = x[0]
b = x[1]
c = x[2]
loss = x.dot(torch.tensor([1.,2.,3.]).float())
for t in [a,b,c]:
    print(t.grad)
loss.backward()
for t in [a,b,c]:
    print(t.grad)
for t in x:
    print(t.grad)
print(x.grad)
```

I am getting the following output:

None
None
None
None
None
None
None
None
None
tensor([1., 2., 3.])
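
For reference, a minimal sketch (my own, not from the post above) of reading per-element gradients directly off the leaf's `.grad`, simply by indexing into `x.grad`:

```python
import torch

# Sketch: the gradient accumulates on the leaf tensor x, so slices of x.grad
# give the per-element gradients.
x = torch.tensor([1., 2., 3.], requires_grad=True)
loss = x.dot(torch.tensor([1., 2., 3.]))
loss.backward()
print(x.grad)     # tensor([1., 2., 3.])
print(x.grad[0])  # tensor(1.) -- gradient w.r.t. x[0]
```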

Basically I am not able to refer to parts of the tensor and see their grads. So I tried a different experiment: first refer to parts of `x` as individual tensors and then set `requires_grad=True` on those tensors. There I end up with an error:

```python
import torch

x = torch.tensor([1,2,3]).float()
a = x[0]
b = x[1]
c = x[2]

for t in [a,b,c]:
    t.requires_grad = True  # set requires_grad on the slices after the fact
loss = x.sum()
for t in [a,b,c]:
    print(t.grad)
loss.backward()  # raises: x itself does not require grad, so loss has no grad_fn
for t in [a,b,c]:
    print(t.grad)
```

I get the output:

None
None
None

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
      7     print(t.grad)
----> 8 loss.backward()
      9 for t in [a,b,c]:

/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
...
     88         Variable._execution_engine.run_backward(
     89             tensors, grad_tensors, retain_graph, create_graph,
---> 90             allow_unreachable=True)  # allow_unreachable flag

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I think you are looking for `for t in [a,b,c]: t.retain_grad()`.

Best regards

Thomas
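
For reference, a minimal sketch (not from the thread) of what `retain_grad()` does on a non-leaf tensor that actually feeds into the loss:

```python
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = x * 2        # non-leaf tensor: its .grad is not kept by default
y.retain_grad()  # ask autograd to also populate y.grad
loss = y.sum()
loss.backward()
print(y.grad)    # tensor([1., 1., 1.])
print(x.grad)    # tensor([2., 2., 2.])
```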

Getting this now:

```python
import torch

x = torch.tensor([1,2,3]).float()
x.requires_grad = True  # needed for the output below: x.grad is populated
a = x[0]
b = x[1]
c = x[2]
for t in [a,b,c]:
    t.retain_grad()  # as suggested above

loss = x.dot(torch.tensor([1.,2.,3.]).float())

print('of a,b,c ')
print('before backward')
for t in [a,b,c]:
    print(t.grad)
loss.backward()

print('of a,b,c ')
print('after backward')
for t in [a,b,c]:
    print(t.grad)
print('per element of x')
for t in x:
    print(t.grad)
print(x.grad)
```

of a,b,c
before backward
None
None
None
of a,b,c
after backward
None
None
None
per element of x
None
None
None
tensor([1., 2., 3.])
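
A possible reason the slices still show `None` (my assumption, not stated above): the loss is computed from `x` directly, so no gradient ever flows back through `a`, `b`, `c`, and `retain_grad()` only helps when the retained tensor is actually on the path to the loss. A minimal sketch under that assumption, building the loss from the slices themselves:

```python
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
a, b, c = x[0], x[1], x[2]
for t in [a, b, c]:
    t.retain_grad()                 # a, b, c are non-leaf slices of x

loss = 1. * a + 2. * b + 3. * c     # loss actually uses the slices
loss.backward()
print(a.grad, b.grad, c.grad)       # tensor(1.) tensor(2.) tensor(3.)
print(x.grad)                       # tensor([1., 2., 3.])
```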