External Custom Loss function

Hello,

I want to take the output of the forward pass and detach it to generate a matrix (that is similar to the input) using an external function (not a PyTorch function), then create a loss function based on this generated matrix and the input matrix.
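Roughly what I mean, as a minimal sketch (external_generate is just a placeholder for my external, non-PyTorch function):

import torch
import torch.nn as nn
import torch.nn.functional as F

def external_generate(arr):
    # hypothetical stand-in for the external function; works on plain numpy arrays
    return arr * 2.0

model = nn.Linear(4, 4)
x = torch.randn(2, 4)
out = model(x)

generated = external_generate(out.detach().numpy())  # leaves the autograd graph here
loss = F.mse_loss(torch.from_numpy(generated), x)
loss.backward()  # raises a RuntimeError: loss has no grad_fn, nothing reaches the parameters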

Does this cause a problem for the gradient?

Thank you

Yes, using other libraries or explicitly detaching the tensors won't allow Autograd to track these operations and will thus detach the tensor from the computation graph (parameters used in previous operations also won't receive gradients).
To use other libraries you could write a custom autograd.Function as described here.
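For example, here is a minimal sketch using np.exp as a stand-in for the external operation (its gradient is known analytically, so backward can be defined):

import numpy as np
import torch
from torch.autograd import Function

class NumpyExp(Function):
    @staticmethod
    def forward(ctx, input):
        # run the external (non-PyTorch) computation on a detached copy
        result = torch.from_numpy(np.exp(input.detach().cpu().numpy())).to(input.device)
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        # d/dx exp(x) = exp(x), so reuse the saved forward result
        result, = ctx.saved_tensors
        return grad_output * result

x = torch.randn(3, requires_grad=True)
y = NumpyExp.apply(x)
y.sum().backward()  # x.grad is now populated, since backward is defined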


Thank you very much @ptrblck

Hi again @ptrblck
I am trying to use some opencv functions like the following:

import cv2
import torch
from torch.autograd import Function

class opencv_torch(Function):
    @staticmethod
    def forward(ctx, im1, im2, **params):
        # move to CPU and detach before leaving PyTorch for OpenCV
        numpy_im1 = im1.detach().cpu().numpy()
        result = cv2.FUNCTION(numpy_im1, **params)  # placeholder for the actual OpenCV call
        return torch.from_numpy(result).float()

    @staticmethod
    def backward(ctx, grad_output):
        # one gradient per forward input (im1, im2)
        return None, None

I want this Function to be parameter-less, without any learnable parameters, so that it only produces the output. Does this break the gradient flow?

Thank you

Since you are returning None gradients, the previous layers won't get any valid gradients, so this implementation could be seen as detaching the computation graph.
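If you still want gradients to flow, one common workaround is a straight-through estimator, i.e. treating the op as the identity in the backward pass. A sketch assuming a single, shape-preserving input (cv2.GaussianBlur is just an example op):

import cv2
import torch
from torch.autograd import Function

class OpenCVStraightThrough(Function):
    @staticmethod
    def forward(ctx, im1):
        numpy_im1 = im1.detach().cpu().numpy()
        result = cv2.GaussianBlur(numpy_im1, (3, 3), 0)  # example shape-preserving op
        return torch.from_numpy(result).to(im1.device)

    @staticmethod
    def backward(ctx, grad_output):
        # straight-through: pretend the op was the identity in the backward pass
        return grad_output

Whether this approximation is acceptable of course depends on the actual OpenCV operation you are using.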