Autograd on custom loss function

Would autograd still work if I used non-torch functions on my tensors? My model outputs a series of human poses as 3D coordinates (each frame/pose is composed of 17 such coordinates), so the shape of the output and target fed to my loss function would be (seq_len, batch_len, 17, 3). I want my loss to be the sum of the Euclidean distances between output and target (i.e. the distance between output[0][0][0] and target[0][0][0], then between output[0][0][1] and target[0][0][1], and so on, with all those distances summed). My question is: if I were to compute this via a loop that iterated through all the frames/poses, would autograd still know what to do? Or do I have to do this calculation using built-in torch functions?
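
For concreteness, here is a rough sketch of the loop version I have in mind (pose_loss and the variable names are just placeholders):

```python
import torch

def pose_loss(output, target):
    # output, target: (seq_len, batch_len, 17, 3)
    total = 0.0
    for t in range(output.shape[0]):          # frames
        for b in range(output.shape[1]):      # samples in the batch
            for j in range(output.shape[2]):  # joints
                diff = output[t, b, j] - target[t, b, j]
                total = total + torch.sqrt((diff ** 2).sum())
    return total
```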

You don't have to avoid the loop — you just have to stick to built-in torch functions on your tensors inside it.

Yes, it would. A loop does not detach the computation graph, and as long as you are using PyTorch methods, autograd will create the backward pass for you.
If you need to leave PyTorch and e.g. use numpy in the forward pass, you would need to implement the backward function manually.
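
For this particular loss you don't even need the loop — the same sum of per-joint Euclidean distances can be written with built-in ops, which autograd handles out of the box. A minimal sketch, assuming the shapes from the question (pose_loss is a placeholder name):

```python
import torch

def pose_loss(output, target):
    # output, target: (seq_len, batch_len, 17, 3)
    # L2 norm over the last dim gives per-joint distances; then sum them all
    return torch.linalg.norm(output - target, dim=-1).sum()

output = torch.randn(10, 4, 17, 3, requires_grad=True)
target = torch.randn(10, 4, 17, 3)
loss = pose_loss(output, target)
loss.backward()  # autograd builds the backward pass automatically
print(output.grad.shape)  # torch.Size([10, 4, 17, 3])
```

The vectorized version will also be considerably faster than the triple loop.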
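
And if you do leave PyTorch, the usual pattern is a custom torch.autograd.Function where you supply the derivative yourself. A sketch using a numpy sine as a stand-in for the non-torch computation:

```python
import torch
import numpy as np

class NumpySin(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # leaving PyTorch here: autograd cannot see this numpy call
        return torch.from_numpy(np.sin(x.detach().cpu().numpy()))

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # so the derivative (cosine) has to be supplied by hand
        return grad_output * torch.cos(x)

x = torch.randn(5, requires_grad=True, dtype=torch.float64)
y = NumpySin.apply(x).sum()
y.backward()
print(torch.allclose(x.grad, torch.cos(x)))  # True
```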
