Preserving the gradient

Hi all,
The input to my NN consists of two parts: a tensor u, which carries the gradient history needed for backpropagation (in my example its grad_fn is <AddBackward0>), and a numpy array y.
The NN model takes a single tensor as input, so I convert the array to a tensor and concatenate the two together.
I feed this concatenated tensor into the NN model. It is important to note that the u part of the input carries gradient history while the part corresponding to y does not (it originates from numpy.array()).
When I check the output of the NN, I see that it is a tensor with grad_fn=<CatBackward>.
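Roughly, the setup looks like this (a minimal sketch with made-up shapes and a plain concatenation in place of any real model):

```python
import numpy as np
import torch

x = torch.randn(5, requires_grad=True)
u = x + 1                            # u carries grad_fn=<AddBackward0>
y = np.random.rand(5)                # plain numpy array, no gradient information

y_t = torch.from_numpy(y).float()    # converted to a tensor, requires_grad=False
inp = torch.cat([u, y_t])            # concatenated input for the model

print(u.grad_fn)    # e.g. <AddBackward0 object at 0x...>
print(inp.grad_fn)  # e.g. <CatBackward0 object at 0x...> (exact name varies by PyTorch version)
```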

What is the meaning of the changed name of grad_fn?
Does it mean that the gradient has been overwritten?
If I run backpropagation through the model, will it be able to use the gradient history stored in u and complete successfully?
Is it possible to backpropagate just through u (even though the input contains both u and y)?

Thanks in advance

What is the meaning of the changed name of grad_fn?

The grad_fn just tells you which op created this Tensor. In this case, a concatenation.
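You can see this by inspecting the graph directly. A small sketch (made-up shapes; the exact printed names differ between PyTorch versions):

```python
import numpy as np
import torch

x = torch.randn(2, requires_grad=True)
u = x + 1                                    # grad_fn=<AddBackward0>
y_t = torch.from_numpy(np.zeros(2)).float()  # numpy-derived, no grad_fn

inp = torch.cat([u, y_t])
print(inp.grad_fn)
# e.g. <CatBackward0 object at 0x...>

# The original AddBackward0 node is still chained behind the cat node,
# so nothing was overwritten:
print(inp.grad_fn.next_functions)
# e.g. ((<AddBackward0 object at 0x...>, 0), (None, 0)) -- None for the numpy-derived part
```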

Does it mean that the gradient has been overwritten?

No, it just means that this concatenation op is taken into account when gradients are computed (even though no gradient will flow into the other part, since it comes from a numpy array and does not require gradients).
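A minimal sketch to illustrate, using a plain sum as a stand-in for the model: the gradient still flows back through the cat node into u's history.

```python
import numpy as np
import torch

x = torch.randn(4, requires_grad=True)
u = x + 1                                   # grad_fn=<AddBackward0>
y_t = torch.from_numpy(np.ones(4)).float()  # no gradient history

inp = torch.cat([u, y_t])   # grad_fn=<CatBackward0>
out = inp.sum()             # stand-in for the model output

out.backward()
print(x.grad)  # tensor([1., 1., 1., 1.]) -- gradient flowed back through the cat and the add
```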

Is it possible to backpropagate just through u (even though the input contains both u and y)?

If y does not require gradients (which it cannot if it comes from a numpy array), then autograd will automatically ignore it when running backprop.
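For example (again with made-up shapes), only the u branch receives gradients:

```python
import numpy as np
import torch

x = torch.randn(3, requires_grad=True)
u = x * 2                                    # differentiable branch
y_t = torch.from_numpy(np.zeros(3)).float()  # numpy-derived branch

print(u.requires_grad, y_t.requires_grad)    # True False

out = torch.cat([u, y_t]).pow(2).sum()
out.backward()                               # only the u branch participates

print(x.grad)    # populated with the gradient coming through u
print(y_t.grad)  # None -- nothing is backpropagated into the numpy-derived part
```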