Can autograd detect the computation on the input if it was converted to numpy?

I am writing my own loss function in which I want to use numpy's histogram2d function, but I am wondering whether the operations will be tracked by autograd?

import numpy as np
import torch
import torch.nn as nn

class normalized_mutual_info_loss(nn.Module):
    def __init__(self):
        super(normalized_mutual_info_loss, self).__init__()

    def __call__(self, pred, ref):
        # joint histogram computed in numpy on detached CPU copies
        hist_2d, _, _ = np.histogram2d(pred.cpu().detach().numpy().ravel(),
                                       ref.cpu().detach().numpy().ravel(),
                                       bins=20)
        nmi = self.normalized_mutual_information(
            torch.from_numpy(hist_2d).float().requires_grad_().cuda())
        return 1 - nmi

    def normalized_mutual_information(self, hgram):
        return …

Hi,

  • You should not override the __call__ method of nn.Module; it is used internally (especially for hooks). You should only define the forward() method.
  • Only pytorch's own functions are tracked by the autograd, so you cannot use numpy functions. If you need a function that is not in pytorch, you will need to create a new autograd.Function and implement the backward yourself (see the sketch below).
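
A minimal, generic sketch of such a custom Function (the forward and backward bodies are placeholders, not the actual histogram or NMI math): you implement forward() however you like and hand-write the matching backward(), so autograd never needs to trace the internals.

import torch

class MyOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # The computation here is not traced; even numpy would be fine,
        # because the gradient is supplied by hand in backward().
        ctx.save_for_backward(input)
        return input.clamp(min=0)  # placeholder op

    @staticmethod
    def backward(ctx, grad_output):
        # Return one gradient per input of forward().
        (input,) = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input

out = MyOp.apply(torch.randn(5, requires_grad=True))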

@albanD
Thank you for your quick response!

I do not really understand your first point. Does it mean the function in the loss class has to be named forward()? I use __call__ so it is called automatically, since there is more than one function in the loss class.

Second, if I implement histogram2d myself on the tensor, I do not have to write the backward myself, right?

From the code you sent, your loss class is a subclass of nn.Module. As you can see in the doc, you should only implement __init__ and forward, and use the module by calling it on inputs:

mod = YourMod(init_args)
out = mod(input) # This will call .forward() + do other stuff needed by pytorch

The __call__ method is implemented on the base class and should not be overridden.
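
For example, a sketch of the structural change only (joint_histogram is a hypothetical helper name, and normalized_mutual_information stays a placeholder as in the original post):

import torch
import torch.nn as nn

class normalized_mutual_info_loss(nn.Module):
    def __init__(self, bins=20):
        super().__init__()
        self.bins = bins

    def forward(self, pred, ref):
        # Invoked by nn.Module.__call__, i.e. loss = normalized_mutual_info_loss()(pred, ref)
        hist_2d = self.joint_histogram(pred, ref)
        nmi = self.normalized_mutual_information(hist_2d)
        return 1 - nmi

    def joint_histogram(self, pred, ref):
        ...  # some differentiable histogram, see below

    def normalized_mutual_information(self, hgram):
        ...  # as in the original post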

If you implement this method using only pytorch's own functions, the autograd will give you the gradients, yes.
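
One caveat worth noting: a hard bin count (like np.histogram2d) has zero gradient with respect to its inputs almost everywhere, so a common workaround is a "soft" histogram built from differentiable kernel weights. Below is a minimal sketch in pure PyTorch, assuming the inputs are scaled to [0, 1]; the bin range and kernel width sigma are illustrative choices, not from the original post.

import torch

def soft_histogram2d(pred, ref, bins=20, sigma=0.05):
    # Soft-assign every value to each bin center with a Gaussian kernel,
    # then accumulate the joint histogram; all ops are torch ops, so
    # autograd can backpropagate through the result.
    pred = pred.reshape(-1)
    ref = ref.reshape(-1)
    centers = torch.linspace(0.0, 1.0, bins, device=pred.device, dtype=pred.dtype)
    w_pred = torch.exp(-0.5 * ((pred[:, None] - centers[None, :]) / sigma) ** 2)
    w_ref = torch.exp(-0.5 * ((ref[:, None] - centers[None, :]) / sigma) ** 2)
    return w_pred.t() @ w_ref  # (bins, bins) joint histogram

hist_2d = soft_histogram2d(pred, ref) then keeps pred and ref in the graph, so an NMI computed from it stays differentiable.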

@albanD

Ok, got it. Thank you!