Module has no graph nodes

Trying to understand modules/variables/autograd a bit better, so I made a loss function that penalizes a point by 1 if it is more than 0.5 away from the target and 0 otherwise. I used the following forward:

def forward(self, input_var, target_var):
    D = torch.norm(input_var - target_var)   # distance from the target
    return sum(D > 0.5)                      # penalize by 1 when more than 0.5 away

But I get the error RuntimeError: there are no graph nodes that require computing gradients when I run backward() on this loss. I don’t think I’m pulling the data out of any Variables or converting anything to numpy arrays, so I suspect I’m misunderstanding something fundamental about autograd. Any help would be appreciated. Thanks

The Variable you want to train needs to be created with requires_grad=True. (Equivalently, you can make it an nn.Parameter instead of a plain Variable.)
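As a minimal sketch (using current PyTorch, where requires_grad is set directly on the tensor rather than through Variable):

import torch

# A leaf tensor created with requires_grad=True is part of the autograd
# graph, so backward() has something to compute gradients for.
x = torch.randn(5, requires_grad=True)   # the quantity being optimized
target = torch.randn(5)                  # fixed target, no gradient needed

loss = torch.abs(x - target).sum()       # every op here is differentiable
loss.backward()                          # populates x.grad
print(x.grad)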


Thanks for the recommendation. I think I understand the need for requires_grad=True, but I’m still not sure I understand when it applies. For example, the following runs:

def forward(self, x, y):
    return sum(torch.abs(x - y))

But this gives the runtime error:

def forward(self, x, y):
    return sum(torch.abs(x - y) > 0.5)

What is it about the ‘>’ that causes the error?

@scoinea The problem in the second snippet is that the comparison is not a differentiable operation, so autograd can’t compute a gradient through it.
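If you want something that behaves like that hard threshold but still lets gradients flow, one common trick (purely an illustrative sketch, not something from this thread; the name soft_count and the threshold/sharpness parameters are made up here) is to replace the step with a steep sigmoid:

import torch

def soft_count(x, y, threshold=0.5, sharpness=50.0):
    d = torch.abs(x - y)
    # sigmoid(sharpness * (d - threshold)) is close to 0 below the
    # threshold and close to 1 above it, but remains differentiable
    return torch.sigmoid(sharpness * (d - threshold)).sum()

x = torch.randn(10, requires_grad=True)
y = torch.randn(10)
loss = soft_count(x, y)
loss.backward()   # works: every op in soft_count is differentiable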


That is indeed very correct. Thanks for the help.