_saved_result comparison with y

Please refer to the below code snippet:

import torch

a = torch.ones(5, requires_grad=True)
y = torch.exp(a)

print(y.grad_fn._saved_result.equal(y))  # element-wise equality
print(y.grad_fn._saved_result is y)      # object identity

print(y.data_ptr())
print(y.grad_fn._saved_result.data_ptr())
Output:
True
False
94655401114176
94655401114176

My question is: why is the output of print(y.grad_fn._saved_result is y) False, even though the data_ptr() output makes it evident that both tensors share the same storage? If they point to the same location, shouldn’t the comparison with “is” return True?
Am I missing something here?
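For reference, checking identity with id() (which is effectively what “is” compares in CPython) gives the same result as the “is” check above:

# "is" compares object identity, not the underlying data buffer
print(id(y) == id(y.grad_fn._saved_result))  # False, consistent with "is"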

According to the Python documentation, all non-keyword arguments to print() are converted to strings using str(). This means that the print() output can’t be fully trusted. What’s happening in the example below might be the reason for your results.

a = 5
b = "5"
print(a is b)
print(a)
print(b)
Output:
False
5
5

In other words, there is a string conversion happening in the print() function that might be the source of the confusion. If you have a debugger, add a breakpoint and have a look in the variable explorer to see if there’s any difference.

Edit:
Try adding this to your code when checking

print(y.data_ptr(), type(y.data_ptr()))
print(y.grad_fn._saved_result.data_ptr(), type(y.grad_fn._saved_result.data_ptr()))

Then you’ll get more information in the printout that might clear up the mystery :slight_smile:

Hey,
The output of type(…) is the same for both; in both cases it is <class 'int'>.

Of course they do. I forgot to check the docs before I made my edit above. They say that .data_ptr() returns the address of the first element of the tensor it’s attached to.
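A quick way to see that data_ptr() really is the address of the first element is to compare a tensor with a view that starts one element later (just a minimal sketch; the byte offset depends on the dtype):

import torch

t = torch.arange(4.0)    # contiguous float32 buffer: [0., 1., 2., 3.]
v = t[1:]                # a view into the same storage, starting at the second element

# the view's pointer is offset from the base tensor's by exactly one element
print(v.data_ptr() - t.data_ptr())                        # 4, i.e. one float32 element
print(v.data_ptr() - t.data_ptr() == t.element_size())    # True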

I believe you can find the detailed answer to your question here: Autograd mechanics — PyTorch 2.1 documentation

>>> a = torch.ones(5, requires_grad=True)
>>> y = torch.exp(a)
>>> id(y.grad_fn._saved_result)
140420994735056
>>> id(y)
140420994748480

Under the hood, to prevent reference cycles, PyTorch has packed the tensor upon saving and unpacked it into a different tensor for reading. Here, the tensor you get from accessing y.grad_fn._saved_result is a different tensor object than y (but they still share the same storage).
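You can make that pack/unpack step visible with torch.autograd.graph.saved_tensors_hooks. This is only a sketch with identity hooks; the print statements are there just to show when packing and unpacking happen:

import torch

def pack_hook(t):
    print("packing a tensor of shape", tuple(t.shape))
    return t                      # a real hook might move the tensor to CPU or disk

def unpack_hook(packed):
    print("unpacking")            # runs when autograd needs the saved tensor again
    return packed

a = torch.ones(5, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack_hook, unpack_hook):
    y = torch.exp(a)              # exp saves its result for the backward pass

y.sum().backward()                # the saved result is unpacked here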

It says that the memory location is shared between the objects, but the comparison using “is” returns False. As far as I know, “is” returns True if two objects are at the same memory location.

What’s the difference between id() and .data_ptr()?

id() is a “pointer” to the tensor object itself, which is a compound object containing other objects, not to the underlying data.
I think this is similar to a (soft / shallow) copy():

>>> a = [1,2,3]
>>> b = a.copy()
>>> a is b
False

As you can see, a and b are different objects, but

>>> hex(id(a))
'0x7fb56080a180'
>>> hex(id(b))
'0x7fb560817740'
>>> hex(id(a[0]))
'0x7fb64fa72930'
>>> hex(id(b[0]))
'0x7fb64fa72930'

they both point to the same elements inside.
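To tie this back to tensors, here is a short sketch using .detach(), which is just one convenient way to get a second tensor object over the same storage:

>>> t = torch.ones(3)
>>> d = t.detach()      # a new tensor object viewing the same storage
>>> t is d
False
>>> id(t) == id(d)      # different Python objects, so different id()s
False
>>> t.data_ptr() == d.data_ptr()   # but the same data buffer
True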