Theory question on autograd

I have some questions regarding the theory of autograd:

1.) Are y and x the input and output respectively?
2.) Where does v come from, and how is it computed?

Hi,

  1. When you write y = f(x), x is the input and y is the output.
  2. v can be any Tensor you want: autograd computes the vector-Jacobian product vᵀ·J rather than the full Jacobian J. A special case is when your function has a single output (like a loss function in a NN): then J is a single row, and by setting v = 1 you get the gradient of your function. See the sketch below for both cases.
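
Here is a minimal sketch of both cases in PyTorch (the tensor values and names are just for illustration):

```python
import torch

# Vector output: backward needs an explicit v (the vector in v^T @ J).
x = torch.randn(3, requires_grad=True)
y = x * 2                          # y = f(x), a vector output
v = torch.tensor([1.0, 0.5, 0.1])  # your choice of v
y.backward(v)                      # computes v^T @ J and accumulates it into x.grad
print(x.grad)                      # here J = 2*I, so x.grad == 2 * v

# Scalar output (e.g. a loss): J is a single row and v defaults to 1.
x.grad = None
loss = (x * 2).sum()
loss.backward()                    # same as loss.backward(torch.tensor(1.0))
print(x.grad)                      # tensor([2., 2., 2.]), the gradient of the loss
```

So for a scalar loss you never pass v explicitly; for a non-scalar output you must supply it, and it picks out which weighted combination of the output's gradients you want.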