Using a loss function computed with Numba in PyTorch

Hi all. I have a model written in PyTorch; however, I would like to compute the loss with Numba. I have searched online and could not find an appropriate solution. What I came up with is to compute the loss with a loss function written for PyTorch tensors, say MSE loss, and then replace its value with what I compute in my Numba loss function. I would like to ask whether there is a more elegant way to incorporate my loss function into PyTorch.


This won’t work, as the computed gradients won’t correspond to what you computed in Numba.
If you can, you should reimplement your loss function with PyTorch operations so that it is differentiated automatically.
Otherwise, you will need to write a new autograd.Function as discussed here to specify what the backward should be.
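To make the second option concrete, here is a minimal sketch of such an autograd.Function. A plain NumPy MSE stands in for the Numba-compiled function (with Numba you would decorate the two helpers with `@numba.njit`); the names `mse_forward`/`mse_backward`/`NumbaMSE` are just illustrative:

```python
import numpy as np
import torch

# Stand-ins for Numba-jitted functions: with Numba installed,
# decorate both with @numba.njit.
def mse_forward(pred, target):
    diff = pred - target
    return (diff * diff).mean()

def mse_backward(pred, target):
    # d(mean((p - t)^2)) / dp = 2 * (p - t) / N
    return 2.0 * (pred - target) / pred.size

class NumbaMSE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, pred, target):
        # Move data out of autograd and into NumPy for the external function.
        p = pred.detach().cpu().numpy()
        t = target.detach().cpu().numpy()
        ctx.save_for_backward(pred, target)
        return pred.new_tensor(mse_forward(p, t))

    @staticmethod
    def backward(ctx, grad_output):
        pred, target = ctx.saved_tensors
        g = mse_backward(pred.detach().cpu().numpy(),
                         target.detach().cpu().numpy())
        # Chain rule: scale the analytic gradient by the incoming grad.
        grad = pred.new_tensor(g) * grad_output
        return grad, None  # no gradient needed w.r.t. the target

pred = torch.randn(5, requires_grad=True)
target = torch.randn(5)
loss = NumbaMSE.apply(pred, target)
loss.backward()  # pred.grad now holds the hand-written gradient
```

Note that you must write the backward formula yourself; autograd cannot see inside the Numba-compiled code, which is exactly why the naive value-swapping approach breaks the gradients.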

This sounds like an interesting question.
Could you share your source code so we can investigate?

Thank you for your answer. It makes sense.

Sorry. I am not allowed to share my code.