# Breaking Computation Graph

Hello everyone,
I have what is probably not a real problem… I'm trying to implement a custom nn.Module loss function.

```python
import torch

# Here I want to compute repeatability between two sets of keypoints
def repeatability(kp1, kp2, tau=3):
    # No matches are possible if either set is empty
    if 0 in (len(kp1), len(kp2)):
        return 0
    dist = torch.cdist(kp1, kp2)
    # The thresholded comparison breaks the computation graph here
    r_0 = torch.sum(dist.min(dim=0).values <= tau)
    r_1 = torch.sum(dist.min(dim=1).values <= tau)
    rep = torch.div(r_0 + r_1, len(kp1) + len(kp2))
    return rep

k0 = torch.randn(50, 3)
k1 = torch.randn(50, 3)
repeatability(k0, k1)
```
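
For reference, here is a small sketch of my own (not part of the loss code) that makes the break visible: `torch.cdist` keeps the graph alive, but the boolean comparison against the threshold returns a `bool` tensor with no `grad_fn`, so everything downstream is detached.

```python
import torch

# Sketch: confirm where the graph stops (assumes keypoints that require grad)
kp1 = torch.randn(50, 3, requires_grad=True)
kp2 = torch.randn(50, 3)

dist = torch.cdist(kp1, kp2)
print(dist.grad_fn)                 # something like <CdistBackward0> -> still in the graph

mask = dist.min(dim=0).values <= 3  # hard threshold, as in the loss above
print(mask.dtype, mask.grad_fn)     # torch.bool None -> graph is broken from here on
```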

Is it a problem for the training step if I break the computation graph during the loss computation?
Can I just return something like

```python
return torch.tensor(rep, requires_grad=True)
```

Based on your code snippet, I don't think it would be a problem to re-wrap `rep` into a new tensor, as it shouldn't be attached to a computation graph in the first place (so you are not detaching anything).
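
A small sketch along those lines, reusing the `repeatability` function from above: `rep` already comes back without a `grad_fn`, and wrapping it with `torch.tensor(..., requires_grad=True)` just creates a fresh leaf tensor.

```python
import torch

rep = repeatability(torch.randn(50, 3, requires_grad=True), torch.randn(50, 3))
print(rep.grad_fn)                        # None -> rep was never attached to a graph

loss = torch.tensor(rep, requires_grad=True)
print(loss.is_leaf, loss.requires_grad)   # True True -> a brand-new leaf tensor
```

In my experience PyTorch also emits a warning here suggesting `rep.clone().detach().requires_grad_(True)` as the preferred way to copy-construct from an existing tensor.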