Extending module: functional forward() inputs

Hi there. I’m new to PyTorch and have a conceptual question about extending PyTorch’s autograd, like the Linear example in the docs:

Every time we perform an operation on PyTorch Variables, it automatically records a graph, which can later be backpropagated through automatically.

In the forward() function the inputs are all Variables, and we perform a bunch of operations on them, which would itself record a graph for backpropagation.
But I still need to override a backward() function, since I’m extending autograd.
It’s as if I register my own autograd graph when extending it. So does that mean I can ignore the graph created during forward()?
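For concreteness, here is roughly the kind of Function I have in mind (my own sketch, loosely following the Linear example in the docs and using the staticmethod-style API; the names are just illustrative):

```python
import torch
from torch.autograd import Function

class LinearFunction(Function):
    @staticmethod
    def forward(ctx, input, weight, bias=None):
        # Save what backward() will need
        ctx.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_weight = grad_bias = None
        # Only compute the gradients that are actually requested
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        if ctx.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)
        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = grad_output.sum(0)
        return grad_input, grad_weight, grad_bias
```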

Thanks!

Yes, during backward() you only need to define the correct graph between grad_output and grad_input.
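A quick way to verify that the backward() you wrote is consistent with your forward() is torch.autograd.gradcheck, which compares your hand-written gradients against numerical ones. A minimal sketch, assuming a custom LinearFunction like the one above:

```python
import torch
from torch.autograd import gradcheck

# LinearFunction is the custom Function sketched above (illustrative name)
linear = LinearFunction.apply

# double precision is recommended for gradcheck
x = torch.randn(4, 3, dtype=torch.double, requires_grad=True)
w = torch.randn(5, 3, dtype=torch.double, requires_grad=True)

# returns True if the backward() you defined matches numerical gradients
print(gradcheck(linear, (x, w), eps=1e-6, atol=1e-4))
```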