Backpropagation with a mask on the output

I want to backpropagate using only some elements of the model's output:

output = model(image) # tensor([[0.6000], [0.2000], [0.1500], [0.0500]])
mask = output >= 0.2 # tensor([[ True], [ True], [False], [False]])
predict = output[mask] # tensor([0.6000, 0.2000])
loss = criterion(predict, ground_truth[mask]) # some number
optim.zero_grad()
loss.backward()

In the example above, assume I am doing binary classification with a batch size of 4, and I only want to compute the gradient from the outputs that are greater than or equal to 0.2. So my question is: is this the correct way to do it? If not, please let me know how. Thank you.
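
To convince myself, here is a minimal, self-contained check of the boolean-indexing approach. The concrete ground_truth values and the use of binary_cross_entropy are assumptions made just for this sketch; the output values are the ones from the snippet above.

import torch
import torch.nn.functional as F

output = torch.tensor([[0.6000], [0.2000], [0.1500], [0.0500]], requires_grad=True) # values from the snippet above
ground_truth = torch.tensor([[1.0], [1.0], [0.0], [0.0]]) # assumed labels for illustration
mask = output >= 0.2 # boolean tensor, not differentiable itself
predict = output[mask] # boolean indexing is differentiable w.r.t. output
loss = F.binary_cross_entropy(predict, ground_truth[mask])
loss.backward()
print(output.grad) # tensor([[-0.8333], [-2.5000], [ 0.0000], [ 0.0000]])

Only the entries that passed the mask receive a gradient; the masked-out entries stay at exactly zero.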

Normally one uses the torch.where() function.
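
For example, one common pattern (sketched here with the same assumed values and loss as above, so treat the details as illustrative rather than definitive) is to compute the per-element loss with reduction='none' and then use torch.where() to zero out the contributions you want to ignore:

import torch
import torch.nn.functional as F

output = torch.tensor([[0.6000], [0.2000], [0.1500], [0.0500]], requires_grad=True)
ground_truth = torch.tensor([[1.0], [1.0], [0.0], [0.0]]) # assumed labels for illustration
per_elem = F.binary_cross_entropy(output, ground_truth, reduction='none') # one loss value per element
keep = output >= 0.2 # comparison does not track gradients
masked = torch.where(keep, per_elem, torch.zeros_like(per_elem)) # drop the ignored elements
loss = masked.sum() / keep.sum().clamp(min=1) # mean over the kept elements only
loss.backward()
print(output.grad) # non-zero only for the kept elements

torch.where() passes a zero gradient to the branch that is not selected, so the ignored elements do not contribute to the update. One caveat: if the expression in the unselected branch produces NaN or inf (for example log(0) inside a per-element loss), that can still turn into NaN further back in the graph, since 0 * inf is NaN.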

@CedricLy I just want to confirm that autograd does what I want it to do. Can you guarantee that if I use torch.where(), the gradient will be computed only for those elements?
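
As a sanity check you can run yourself (same assumed values and loss as in the sketches above), both formulations give identical gradients, and the gradient of every masked-out output is exactly zero:

import torch
import torch.nn.functional as F

def grads_boolean_index(out, target, threshold=0.2):
    out = out.clone().requires_grad_(True)
    mask = out >= threshold
    F.binary_cross_entropy(out[mask], target[mask]).backward()
    return out.grad

def grads_where(out, target, threshold=0.2):
    out = out.clone().requires_grad_(True)
    keep = out >= threshold
    per_elem = F.binary_cross_entropy(out, target, reduction='none')
    (torch.where(keep, per_elem, torch.zeros_like(per_elem)).sum() / keep.sum()).backward()
    return out.grad

out = torch.tensor([[0.6000], [0.2000], [0.1500], [0.0500]])
gt = torch.tensor([[1.0], [1.0], [0.0], [0.0]]) # assumed labels for illustration
g1 = grads_boolean_index(out, gt)
g2 = grads_where(out, gt)
print(g1) # rows below the threshold have gradient 0
print(torch.allclose(g1, g2)) # True: the two approaches agree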