RuntimeError: the derivative for 'target' is not implemented when trying to backprop through KL?

Hi all

I’m trying to use F.kl_div as a memory-efficient way of computing the KL divergence between two distributions in the forward pass of my model, as a form of regularisation. I had previously rolled my own implementation, but since PyTorch already ships a version implemented in C under the hood, I swapped that in.

However, using it in my forward pass raises the error in the title as soon as I backprop. Is there a way around this?

Thanks
Kris

Not really, no. KLDivLoss currently only implements the derivative with respect to its input, not its target, so backprop fails whenever the target requires grad.

Please feel free to submit a feature request and/or a PR if you’d like to fix it: https://github.com/pytorch/pytorch
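In the meantime, a hand-rolled KL like the one you had before stays differentiable with respect to both distributions, since autograd just traces the elementwise ops. A minimal sketch (assuming both arguments are probability tensors over the last dimension; the small `eps` is only for numerical safety and is an illustrative choice):

```python
import torch


def kl_div_both_grads(p, q, eps=1e-12):
    """KL(p || q) for probability tensors p and q.

    Written with plain tensor ops, so autograd can differentiate
    with respect to *both* p and q (unlike F.kl_div, which only
    implements the derivative for its input argument).
    """
    return torch.sum(p * (torch.log(p + eps) - torch.log(q + eps)))


# Both distributions are produced from leaves that require grad:
logits_p = torch.randn(4, 10, requires_grad=True)
logits_q = torch.randn(4, 10, requires_grad=True)
p = torch.softmax(logits_p, dim=-1)
q = torch.softmax(logits_q, dim=-1)

loss = kl_div_both_grads(p, q)
loss.backward()  # gradients flow into logits_p AND logits_q
```

You lose the fused C kernel, but for a regularisation term the extra memory is usually modest.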