Grad_fn confusion

Hello,

I’m a little confused about what the grad_fn property of a tensor actually means.

I have a tensor that, before I take its mean, has

grad_fn = <StackBackward>

after I take the mean of it, this becomes

grad_fn = <MeanBackward0>

It’s just not clear to me what this actually means for my network. The tensor in question is my loss, on which I immediately afterwards call

loss.backward()

And I’m just looking for clarification on what this changing grad_fn means for my loss function and the backward function here.

Thanks!

It has to do with how the computation graph is built. After the mean operation, the new tensor is basically just “remembering” the function that created it, so that there is a complete history of the computation. You don’t want to change grad_fn yourself.
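
Here’s a minimal sketch with made-up tensors (not your actual network) that reproduces the pattern you’re describing; the exact names printed can vary a bit between PyTorch versions (e.g. StackBackward vs StackBackward0):

import torch

a = torch.randn(3, requires_grad=True)
b = torch.randn(3, requires_grad=True)

stacked = torch.stack([a, b])   # new tensor that remembers the stack op
print(stacked.grad_fn)          # <StackBackward0 object at ...>

loss = stacked.mean()           # another new tensor that remembers the mean op
print(loss.grad_fn)             # <MeanBackward0 object at ...>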

More information can be found on the Autograd page of the PyTorch documentation.

So this grad_fn doesn’t change how it’s taking the gradient in the end? It’s just keeping track of what created the tensor in the first place?

grad_fn is part of how autograd computes the gradient in the end. It keeps track of the computation graph so that backpropagation can be performed.
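
For example, with the same kind of made-up tensors as above, you can peek at the chain of nodes that backward() walks (next_functions is an internal detail, shown here only for illustration):

import torch

a = torch.randn(3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
loss = torch.stack([a, b]).mean()

print(loss.grad_fn)                 # <MeanBackward0 ...>  -- the last op
print(loss.grad_fn.next_functions)  # links back to the StackBackward node, and so on

loss.backward()                     # walks that graph from loss back to the leaves...
print(a.grad)                       # ...and fills in a.grad and b.grad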

So it is in fact a bad thing that it’s changing then? How do I prevent it from changing?

It’s not a bad thing at all. It’s part of the process by which gradients are created. You don’t really want to be messing with it. It’s more that each operation gives you a new tensor, and each new tensor remembers the operation that produced it.
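
In other words, something like this (hypothetical tensors again):

import torch

x = torch.randn(4, requires_grad=True)
y = x * 2        # new tensor, remembers the multiply
z = y.mean()     # another new tensor, remembers the mean

print(x.grad_fn)  # None -- x is a leaf tensor, no operation created it
print(y.grad_fn)  # <MulBackward0 ...>
print(z.grad_fn)  # <MeanBackward0 ...>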