I have a problem which is probably not a problem… I'm trying to implement a custom nn.Module loss function.
```python
import torch

# Compute repeatability between two sets of keypoints
def repeatability(kp1, kp2, threshold=3):
    if 0 in (len(kp1), len(kp2)):
        return 0
    dist = torch.cdist(kp1, kp2)
    # counting the matches within the threshold breaks the computation graph
    r_0 = torch.sum(dist.min(dim=0).values <= threshold)
    r_1 = torch.sum(dist.min(dim=1).values <= threshold)
    rep = torch.div(r_0 + r_1, len(kp1) + len(kp2))
    return rep

k0 = torch.randn(50, 3)
k1 = torch.randn(50, 3)
repeatability(k0, k1)
```
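To show what I mean by "breaks the computation graph": as far as I understand, the comparison against the threshold produces a boolean tensor, so from that point on nothing has a grad_fn, even when the inputs require grad. A quick check:

```python
import torch

kp1 = torch.randn(50, 3, requires_grad=True)
kp2 = torch.randn(50, 3, requires_grad=True)

dist = torch.cdist(kp1, kp2)
print(dist.min(dim=0).values.grad_fn)  # a MinBackward node: still differentiable here
mask = dist.min(dim=0).values <= 3
print(mask.dtype)                      # torch.bool: the comparison is non-differentiable
print(torch.sum(mask).grad_fn)         # None: the graph is broken from here on
```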
Is it a problem for the training step if I break the computation graph during the loss computation?
Can I just return something like
```python
return torch.tensor(rep, requires_grad=True)
```
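For context, my understanding (please correct me if I'm wrong) is that torch.tensor(rep, requires_grad=True) creates a brand-new leaf tensor that is disconnected from kp1 and kp2, so backward() would never reach the network parameters. A minimal check, reusing the repeatability function above:

```python
import torch

kp1 = torch.randn(50, 3, requires_grad=True)
kp2 = torch.randn(50, 3, requires_grad=True)

rep = repeatability(kp1, kp2)                 # value with no grad history (see above)
loss = torch.tensor(rep, requires_grad=True)  # new leaf, detached from kp1/kp2
loss.backward()
print(loss.grad)  # tensor(1.): the gradient stops at this new leaf
print(kp1.grad)   # None: nothing flows back to the keypoints
```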
Thank you for your response.