Hi,I realize that I have no knowledge of how gradient values we get after calling backward.

for example :

```
x = torch.randn((4,5),requires_grad = True)
z = x.mean()
z.backward()
#print(x.grad)
```

Assume I have the values of x

```
>>> x
tensor([[-0.3571, 0.1481, 0.1713, -1.2597, -0.7667],
[-0.1553, -0.9620, 0.0103, 3.3494, 0.2220],
[ 2.1131, -0.2404, 0.4820, 0.3816, 1.9752],
[ 1.7232, -0.5064, -0.8151, 0.3720, 0.1470]], requires_grad=True)
>>>
```

And the result of x.grad returns

```
tensor([[1.1500, 1.1500, 1.1500, 1.1500, 1.1500],
[1.1500, 1.1500, 1.1500, 1.1500, 1.1500],
[1.1500, 1.1500, 1.1500, 1.1500, 1.1500],
[1.1500, 1.1500, 1.1500, 1.1500, 1.1500]])
```

And if I change `z = x.mean()`

to `z = x.sum()`

, x.grad becomes

```
tensor([[2.1500, 2.1500, 2.1500, 2.1500, 2.1500],
[2.1500, 2.1500, 2.1500, 2.1500, 2.1500],
[2.1500, 2.1500, 2.1500, 2.1500, 2.1500],
[2.1500, 2.1500, 2.1500, 2.1500, 2.1500]])
```

I would like to know how do we compute and get the values both 1.1500 and 2.1500

Thanks in advance !