Hi, I was implementing simple numerical differentiation with a PyTorch tensor, and the results were strange compared to a NumPy array. With the PyTorch tensor, I noticed that the **tmp_val value** changes after calling multi_func(x), while with the NumPy array it does not.

Why is this happening, and how can I modify the code so that it produces the same results as the NumPy version?

```
import torch

def multi_func(x):
    return x[0]**2 + x[1]**2

def derivative(f, x):
    h = 1e-4  # 0.0001
    grad = torch.zeros_like(x)
    for idx in range(x.size()[0]):
        tmp_val = x[idx]
        # f(x+h)
        x[idx] = float(tmp_val) + h
        fxh1 = f(x)
        print(tmp_val)
        # f(x-h)
        x[idx] = tmp_val - h
        fxh2 = f(x)
        grad[idx] = (fxh1 - fxh2) / (2*h)
        x[idx] = tmp_val
    return grad

print(derivative(multi_func, torch.tensor([3.0, 4.0])))
```

```
tensor(3.0001)
tensor(4.0001)
tensor([2.9945, 4.0054])
```
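To narrow it down, here is a minimal check I made of what I observed (my own sketch, outside the derivative code): indexing the tensor with x[0] seems to return a 0-dim tensor that shares storage with x, so writing through x[0] also changes tmp_val.

```
import torch

x = torch.tensor([3.0, 4.0])
tmp_val = x[0]       # 0-dim tensor; appears to share storage with x
x[0] = x[0] + 1e-4   # writing into x also changes tmp_val
print(tmp_val)       # no longer 3.0
```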

=========================================================

```
import numpy as np

def multi_func(x):
    return x[0]**2 + x[1]**2

def derivative(f, x):
    h = 1e-4  # 0.0001
    grad = np.zeros_like(x)
    for idx in range(x.size):
        tmp_val = x[idx]
        # f(x+h)
        x[idx] = float(tmp_val) + h
        fxh1 = f(x)
        print(tmp_val)
        # f(x-h)
        x[idx] = tmp_val - h
        fxh2 = f(x)
        grad[idx] = (fxh1 - fxh2) / (2*h)
        x[idx] = tmp_val
    return grad

print(derivative(multi_func, np.array([3.0, 4.0])))
```

```
3.0
4.0
[6. 8.]
```
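For comparison, the same minimal check on the NumPy side (again my own sketch): indexing a 1-D array with a scalar index appears to return a copied scalar, so tmp_val keeps its original value when x is modified.

```
import numpy as np

x = np.array([3.0, 4.0])
tmp_val = x[0]       # numpy scalar -- a copy, not a view into x
x[0] = x[0] + 1e-4   # writing into x does not affect tmp_val
print(tmp_val)       # prints 3.0
```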