Use of .backward()

What is the use of .backward()? I cannot understand it.
```python
import torch

x = torch.tensor(3., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(4., requires_grad=True)
y = w * x + b
y.backward()
# inspect the gradients (e.g. in an interactive session)
x.grad
w.grad
b.grad
```

Could you please explain the use of the .backward() method, and also the use of .grad?

Hi,

y.backward() will perform backprop to compute the gradients for all the leaf Tensors used to compute y.
The .grad attribute of leaf Tensors is where these computed gradients are stored.
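
For the example above, here is a minimal sketch of the values you should see (the numbers follow directly from y = w*x + b, so dy/dx = w, dy/dw = x, and dy/db = 1):

```python
import torch

x = torch.tensor(3., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(4., requires_grad=True)

y = w * x + b   # y = 2*3 + 4 = 10
y.backward()    # backprop from y to every leaf that requires grad

print(x.grad)   # dy/dx = w -> tensor(2.)
print(w.grad)   # dy/dw = x -> tensor(3.)
print(b.grad)   # dy/db = 1 -> tensor(1.)
```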

The 60 Minute Blitz on PyTorch here, in particular the part on autograd, describes this in detail.

I came here after watching that lol :smile: . It is not given in layman's terms. I can follow it up to the gradients part, but after that:

Let’s backprop now. Because out contains a single scalar, out.backward() is equivalent to out.backward(torch.tensor(1.)).

What does this mean? I cannot get it.

This means it will use the backpropagation algorithm (very close to reverse-mode automatic differentiation) to compute the gradient of the output with respect to the input Tensors that require gradients.
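
To make that sentence from the tutorial concrete, here is a small sketch (the variable names are mine, not from the tutorial):

```python
import torch

x = torch.tensor(3., requires_grad=True)
out = x * x  # out is a single scalar: 9.

# Because out is a scalar, these two calls are equivalent:
#   out.backward()
#   out.backward(torch.tensor(1.))
# The torch.tensor(1.) is the "seed" gradient d(out)/d(out) = 1,
# which backprop then multiplies down through the graph.
out.backward()
print(x.grad)  # d(x*x)/dx = 2*x -> tensor(6.)
```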

No, I still cannot get it. Could you please explain it to me in simple terms? All I know is differentiation, Jacobians, etc. Could you explain what happens when you call .backward(), in simple terms? (Are you saying that we are taking dy/dx, where y is the output here and x is the input?)
OK, in the example above with y.backward(): y is the output, then what is the input?

As mentioned in the doc, .backward() computes the gradient for all the leaf Tensors used to compute the output. The doc for leaf Tensors is here.
If you use autograd.grad (doc here), you can specify explicitly which inputs you want the gradient for; see the sketch below.
Depending on what you want to do, you should use the one that fits best.
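
Here is a short sketch of that difference, reusing the tensors from the first post (same values assumed):

```python
import torch

x = torch.tensor(3., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(4., requires_grad=True)
y = w * x + b

# Ask explicitly for the gradients w.r.t. x and w only.
# Nothing is written into .grad; the gradients are returned instead.
dx, dw = torch.autograd.grad(y, (x, w))
print(dx)  # dy/dx = w -> tensor(2.)
print(dw)  # dy/dw = x -> tensor(3.)
```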

The backpropagation algorithm is very well explained in the Wikipedia article: https://en.wikipedia.org/wiki/Backpropagation
The short answer, in this case of a scalar output, is that it computes the gradient dy/dx (either stored in x.grad if you use .backward(), or returned to you if you use autograd.grad).
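
Since you mentioned Jacobians: when the output is not a scalar, .backward() needs the vector of a vector-Jacobian product as its argument (this is what the torch.tensor(1.) above is the scalar case of). A small sketch of mine, not from the docs:

```python
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = x * x  # y is a vector, so y.backward() alone would raise an error

# Supply the vector v of the vector-Jacobian product v^T @ J.
# Here J = diag(2*x), so with v of all ones we get back 2*x.
v = torch.ones(3)
y.backward(v)
print(x.grad)  # tensor([2., 4., 6.])
```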