Does changing the input in a forward pre-hook break the computation graph?

This is a typical forward pre-hook function.

def pre_forward_hook_function(self, input):
    # The input arrives as a tuple of size 1; index it to obtain the tensor
    x = input[0]
    # perform simple computations (e.g. addition/multiplication) on x and return it
    x = x * 2 + 1
    return x

The forward pre-hook receives the input in the form of a tuple, which is immutable.
So to change the input, it is assigned to a new variable: x = input[0].
Note that the input I am referring to here is the input accessed within the forward pre-hook function.

The question is: will this assignment to x break the computation graph for the input (the input to the network, not the argument of the pre-hook function)? I have this doubt because the input to the pre-hook function is a tuple while x is a torch.Tensor, i.e. two different data types.

After running the code, I can confirm that this does not break the computation graph. Indexing the tuple simply returns a reference to the same tensor, and any differentiable operations performed on it inside the hook are still recorded by autograd.
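
For reference, here is a minimal sketch of the check I mean (the layer, tensor shapes, and names are made up for illustration; the first hook argument is the module being called, so I name it module instead of self). It registers the hook with register_forward_pre_hook and verifies that gradients still reach the original network input:

import torch
import torch.nn as nn

def pre_forward_hook_function(module, input):
    x = input[0]
    x = x * 2 + 1   # same kind of simple computation as above
    return x        # the returned value replaces the layer's input

model = nn.Linear(4, 2)
handle = model.register_forward_pre_hook(pre_forward_hook_function)

inp = torch.randn(3, 4, requires_grad=True)   # input to the network
out = model(inp)
out.sum().backward()

print(inp.grad is not None)   # True: gradients still flow back to the network input
handle.remove()

Since inp.grad is populated after backward(), the assignment x = input[0] inside the hook did not detach anything from the graph.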