Do I need some decorator for cpp operators when I use pytorch1.6?

Hi,

I am trying to implement an operator that will change the labels for computing the loss function. The loss function is like this:

class OhemCELoss(nn.Module):

    def __init__(self, thresh, ignore_lb=255):
        super(OhemCELoss, self).__init__()
        self.score_thresh = thresh
        self.ignore_lb = ignore_lb
        self.criteria = nn.CrossEntropyLoss(ignore_index=ignore_lb, reduction='mean')

    def forward(self, logits, labels):
        import ohem_cpp
        # count the non-ignored labels and keep at least 1/16 of them
        n_min = (labels != self.ignore_lb).sum().item() // 16
        labels = ohem_cpp.score_ohem_label(logits, labels,
                self.ignore_lb, self.score_thresh, n_min).detach()
        loss = self.criteria(logits, labels)
        return loss

Here ohem_cpp is implemented following the PyTorch extension tutorial, and ohem_cpp.score_ohem_label computes the softmax scores of the input logits, chooses the scores relative to the thresh, and sets the corresponding elements of the labels to the assigned ignore_lb. I noticed that in PyTorch 1.6, users should add the @custom_fwd decorator if they want to implement their own operators that are derived from autograd.Function. I have no idea whether I can call my ohem_cpp module directly, as in the code above, without potential errors. Did I miss anything here?
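For reference, here is a pure-PyTorch sketch of one common OHEM label-selection scheme that matches the shape of the call above (the name score_ohem_label_sketch and the exact selection rule are my assumptions; the real C++ extension may differ in details). Easy pixels, whose correct-class probability exceeds the threshold, are set to ignore_lb, while at least n_min hard pixels are always kept:

```python
import torch
import torch.nn.functional as F

def score_ohem_label_sketch(logits, labels, ignore_lb, score_thresh, n_min):
    # Hypothetical pure-PyTorch equivalent of ohem_cpp.score_ohem_label.
    # logits: (N, C, H, W), labels: (N, H, W)
    with torch.no_grad():
        probs = F.softmax(logits, dim=1)
        new_labels = labels.clone()
        valid = labels != ignore_lb
        # gather the probability of the correct class for each pixel;
        # invalid pixels are temporarily mapped to class 0 to allow indexing
        idx = labels.clone()
        idx[~valid] = 0
        picked = probs.gather(1, idx.unsqueeze(1)).squeeze(1)  # (N, H, W)
        picked[~valid] = 1.0  # already-ignored pixels look "easy"
        sorted_scores, _ = picked.view(-1).sort()
        # loosen the threshold if fewer than n_min pixels fall below it,
        # so that at least n_min hard pixels survive
        thresh = max(score_thresh, sorted_scores[n_min].item())
        new_labels[picked > thresh] = ignore_lb
    return new_labels
```

A Python version like this would of course be slower than the C++ extension, but it is handy for checking what the extension is supposed to return.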

I am asking because I sometimes (though not often) hit the error terminate called after throwing an instance of 'thrust::system::system_error', and I do not know whether it is associated with incorrect use of my self-implemented operator.

Where did you see that? There were no changes that I’m aware of to autograd.Function in 1.6.

I have no idea whether I can call my ohem_cpp module directly as in the code above without potential errors.

Yes you can. And as long as you only use differentiable ops on the C++ side, you can backprop through it.
If you don’t, then you’ll need a custom Function, and you can see here how to do it.
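In the non-differentiable case, the linked tutorial boils down to something like the sketch below. The clamp in forward is just a stand-in for a call into a C++ extension; the point is that backward must then be supplied by hand:

```python
import torch

class ClampOp(torch.autograd.Function):
    # Sketch: wrapping a non-differentiable op in a custom Function.
    # Imagine forward calling into the C++ extension instead of clamp.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # hand-written gradient of the forward op
        return grad_output * (x > 0).to(grad_output.dtype)

x = torch.tensor([-1.0, 2.0, -3.0, 4.0], requires_grad=True)
y = ClampOp.apply(x)
y.sum().backward()
```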

Thanks, I got the @custom_fwd specification from the amp tutorial.

Apply custom_fwd and custom_bwd (with no arguments) to forward and backward respectively.
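Concretely, the decorators from that tutorial attach to a custom Function like this (the scaling op itself is a hypothetical stand-in). They only matter when the Function runs under torch.cuda.amp.autocast, where they ensure backward executes with the same autocast state that forward saw:

```python
import torch
from torch.cuda.amp import custom_fwd, custom_bwd

class ScaleOp(torch.autograd.Function):
    # custom_fwd / custom_bwd make the Function cooperate with autocast
    @staticmethod
    @custom_fwd
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x * scale  # stand-in for an extension call

    @staticmethod
    @custom_bwd
    def backward(ctx, grad_output):
        # gradient w.r.t. x; None for the non-tensor scale argument
        return grad_output * ctx.scale, None
```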

Thanks for the link. I wasn’t aware of this one :smiley:

It looks orthogonal to the choice of using a custom Function or not.
As mentioned above, if your C++ code is differentiable, you’re all good. If not, you’ll have to define the backward, and you can do so with a custom Function in Python, as instructed above, that just happens to call your C++ code.