How can Pytorch calculate total loss like Tensorflow?

The short answer is yes. Autograd accumulates gradients unless the user explicitly zeroes them.

I think the tutorial A Gentle Introduction to torch.autograd — PyTorch Tutorials 1.8.1+cu102 documentation is a good demonstration. You can think of your problem as a simple equation c = a + b, where a and b each have their own computational graph (i.e. chain of operations).
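A minimal sketch of that idea (the variable names and losses here are illustrative, not from the tutorial): because `.grad` accumulates across `backward()` calls, backpropagating two losses separately gives the same gradients as backpropagating their sum once.

```python
import torch

x = torch.tensor(1.0, requires_grad=True)

# Two separate losses that share the parameter x,
# each with its own computational graph
loss_a = 2 * x
loss_b = 3 * x ** 2

# Backpropagating each loss separately: gradients accumulate in x.grad
loss_a.backward()
loss_b.backward()
print(x.grad)  # tensor(8.)  -> 2 + 6*x = 2 + 6

# This matches backpropagating a single summed "total loss" once
x.grad = None  # zero out the accumulated gradient first
total_loss = 2 * x + 3 * x ** 2
total_loss.backward()
print(x.grad)  # tensor(8.)
```

This is also why training loops call `optimizer.zero_grad()` each iteration: without it, gradients from previous batches would keep accumulating.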
