Using torch.logical_and turns the requires_grad flag to False

As a toy example:

import torch

tseq = torch.linspace(0, 1, 100).requires_grad_(True)
print(tseq.requires_grad)  # True

result = torch.logical_and(0.2 < tseq, tseq < 0.7)
print(result.requires_grad)  # False

It seems that torch.logical_and() automatically sets requires_grad to False. Is this function not supported by autograd?

Hi Hajin!

This is correct. Bools are discrete, so it doesn’t make sense to try to
differentiate them, and autograd doesn’t try.

(More specifically, the tensor 0.2 < tseq itself has dtype bool, and
pytorch has already set its requires_grad property to False, even
though it was computed from a tensor that had requires_grad = True.
So the requires_grad flag is already False before logical_and() is
ever called.)
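
To make the chain concrete, here is a minimal sketch (the final mask-indexing step is an extra illustration of one way to keep gradients flowing, not part of the original snippet):

import torch

tseq = torch.linspace(0, 1, 100).requires_grad_(True)

# The comparison itself already produces a bool tensor
# with requires_grad = False.
mask = 0.2 < tseq
print(mask.dtype)          # torch.bool
print(mask.requires_grad)  # False

# logical_and() therefore just combines two tensors that are
# already detached from the autograd graph.
result = torch.logical_and(mask, tseq < 0.7)
print(result.requires_grad)  # False

# The bool mask can still be used inside differentiable code:
# boolean indexing is differentiable with respect to tseq,
# just not with respect to the mask itself.
selected = tseq[result]
selected.sum().backward()
print(tseq.grad.sum())  # 1.0 for each selected element

Note the last step: selecting with the mask keeps the graph intact through tseq itself; only the mask is non-differentiable.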

Best.

K. Frank
