How does backward() in PyTorch work?

I am new to PyTorch and have a doubt: will PyTorch be able to compute the gradients for predefined tensor functions like torch.sum, torch.cat, etc.? Here is a code snippet as an example:

import torch
import torch.nn as nn

class Module1(nn.Module):

    def __init__(self, input_size, output_size):
        super(Module1, self).__init__()
        self.input_size = input_size
        self.output_size = output_size
        self.layer = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.layer(x)

loss_fn = nn.MSELoss()
mod = Module1(20, 20)
# requires_grad=True replaces the deprecated torch.autograd.Variable wrapper
# and is needed so backward() has a leaf tensor to write gradients into
x = torch.rand(20, requires_grad=True)
target = torch.rand(1)
# keepdim=True keeps the sum as shape (1,) so it matches target's shape
loss = loss_fn(torch.sum(x, 0, keepdim=True), target)
loss.backward()

PyTorch’s autograd is able to compute the gradients for most functions, including built-in ops like torch.sum and torch.cat.
Have a look at the Autograd Tutorial.
It’s a nice explanation of how autograd works. Also, your code snippet is missing! :wink:
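For a quick sanity check, you can run backward() through torch.sum and torch.cat directly and inspect the gradients that land in .grad. Here is a minimal sketch, written with plain tensors and requires_grad=True (the modern replacement for Variable):

import torch

a = torch.rand(3, requires_grad=True)
b = torch.rand(3, requires_grad=True)

# Both torch.cat and torch.sum are differentiable:
# y = sum(cat([a, b])), so dy/da_i = dy/db_i = 1
y = torch.sum(torch.cat([a, b]))
y.backward()

print(a.grad)  # tensor([1., 1., 1.])
print(b.grad)  # tensor([1., 1., 1.])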


Sorry, I clicked post by mistake while editing. Thank you very much.

Check out this Medium post on how the PyTorch backward() function works.
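As a small illustration of what backward() computes, you can compare autograd’s result against a gradient worked out by hand; a minimal sketch:

import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)

# y = x1**2 + x2**2, so dy/dx_i = 2 * x_i
y = (x ** 2).sum()
y.backward()

print(x.grad)          # tensor([4., 6.])
print(2 * x.detach())  # gradient computed by hand: tensor([4., 6.])

Note that backward() accumulates into .grad, so if you call it repeatedly you need to reset the gradients (e.g. with optimizer.zero_grad()) between calls.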
