# Why does changing a tensor's value in a function change the outer tensor?

I found that if I define a tensor variable and change its value inside a function, the tensor's value outside the function changes as well, even though I did not return anything. What actually happens here?

```python
from torch import Tensor

a = Tensor([0])
print(a)       # tensor([0.])

def b(t):
    t[0] = 3   # in-place item assignment

b(a)
print(a)       # tensor([3.])
```

It may have to do with the scope of a tensor or some low-level torch mechanism, but I would like a complete explanation.

The reason for the above behavior is that tensors are mutable objects, i.e. they can be changed in place.
This means that when you call `b(a)`, no copy of the tensor is made: the parameter `t` becomes another reference to the same object that `a` refers to, so `t[0] = 3` mutates that shared object, and the change is visible through `a` as well.
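The same behavior can be reproduced with any mutable Python object; here a plain list stands in for the tensor (an illustrative sketch, no torch required):

```python
# A list, like a tensor, is mutable: item assignment mutates
# the shared object rather than rebinding the parameter name.
a = [0]

def b(t):
    t[0] = 3  # mutates the one object that both t and a refer to

b(a)
print(a)  # [3]
```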

However, if the function *rebinds* its parameter to a new object instead of mutating it (which is the only option for immutable values such as ints or strings), the assignment only creates a new local binding and the outer variable is unaffected.
For instance, the function below doesn't change the outer/global `a`:

```python
a = 0

def my_function(b):
    b = 3       # rebinds the local name b; the outer a is untouched
    print(b)    # 3

my_function(a)
print(a)        # 0
```
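You can confirm that the parameter and the outer variable name the same object by comparing `id()` values inside the function; this diagnostic sketch (not part of the original example) also shows the moment rebinding breaks the link:

```python
a = [0]

def check(t):
    print(id(t) == id(a))  # True: t and a are two names for one object
    t = [3]                # rebinding: t now points at a brand-new list
    print(id(t) == id(a))  # False: the link to a is broken

check(a)
print(a)  # [0] -- the rebinding inside check() never touched a
```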