Is it fine to overwrite/reuse a tensor name that already exists?

I don't fully understand how PyTorch's autograd builds the computational graph,
so I'm not sure whether it is fine to write code like the following:

def forward(self, x):
    x = self.conv(x)
    return x

I thought that, in order to track the gradient of each tensor, the output of the convolution layer should be given a name different from "x", but I found that a lot of code actually reuses the tensor name like this.

Is it still possible to backpropagate the gradient properly all the way to the front part of the model?
For example, what if the input 'x' of the forward function above actually came from another trainable network?

If it is fine, how is that possible?

Yes, you can reassign the output of a differentiable operation to a variable with the same name as its input. Autograd tracks the operations and tensor objects themselves, not the Python variable names: if the input tensor is needed for the gradient computation, the graph keeps a reference to it (and thus prevents its deletion), so overwriting the name "x" does not break backpropagation into earlier layers.
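
To illustrate, here is a minimal, self-contained sketch (the Upstream/Downstream module names and shapes are made up for this example) showing that gradients still reach an earlier trainable network even though the downstream forward overwrites x:

    import torch
    import torch.nn as nn

    class Upstream(nn.Module):
        # Hypothetical "front" network that produces the input for the next module
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(8, 3 * 32 * 32)

        def forward(self, z):
            # Produce an image-like tensor from a latent vector
            return self.linear(z).view(-1, 3, 32, 32)

    class Downstream(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

        def forward(self, x):
            x = self.conv(x)  # reassigning "x" is fine; autograd tracks the op, not the name
            return x

    upstream = Upstream()
    downstream = Downstream()

    z = torch.randn(4, 8)
    out = downstream(upstream(z))
    out.sum().backward()

    # Gradients reach the upstream network even though "x" was overwritten downstream
    print(upstream.linear.weight.grad is not None)    # True
    print(downstream.conv.weight.grad is not None)    # True

The variable name is only a Python-level label; what matters is that each output tensor records the operation (and input tensors) that produced it, and backward() follows that recorded chain regardless of how you name the intermediate results.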