Awesome. That means the Python bitwise operators work like elementwise logical operators. For uint8 tensors a and b and NumPy bool-array equivalents A and B:

a | b ~= np.logical_or(A, B)
a & b ~= np.logical_and(A, B)
a ^ b ~= np.logical_xor(A, B)
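The correspondence above can be checked directly; a minimal sketch with hypothetical 0/1 inputs (note that for uint8 values other than 0 and 1, bitwise and logical results diverge, hence the "~="):

```python
import numpy as np
import torch

# Hypothetical small inputs restricted to 0/1, where bitwise == logical.
a = torch.tensor([0, 1, 0, 1], dtype=torch.uint8)
b = torch.tensor([0, 0, 1, 1], dtype=torch.uint8)
A = a.numpy().astype(bool)
B = b.numpy().astype(bool)

# The Python bitwise operators act elementwise on the tensors.
assert np.array_equal((a | b).numpy().astype(bool), np.logical_or(A, B))
assert np.array_equal((a & b).numpy().astype(bool), np.logical_and(A, B))
assert np.array_equal((a ^ b).numpy().astype(bool), np.logical_xor(A, B))
print("ok")
```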

Kindly help me with two concerns regarding the logical XOR operation between two tensors:

Would Python syntax like ^ (the XOR operator) be consistent with tensor.backward() gradient flow?

The PyTorch version in my Docker image supports the torch.bool type but not the torch.logical_xor() operation, so I used bool_tensor_1 ^ bool_tensor_2. How does the efficiency of ^ compare to torch.logical_xor?

No: ^ is a non-differentiable operation, so it will not be consistent with tensor.backward() gradient flow. If you want to be able to use automatic differentiation, you need a differentiable surrogate. Note that in classical ML, this is also why, when your original loss takes non-continuous values, you introduce a surrogate loss to have something you can optimize.
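One common surrogate (a sketch, not the only option): treat the inputs as probabilities in [0, 1] and use soft_xor(p, q) = p + q - 2pq, which equals exact XOR at the 0/1 corners but is smooth in between, so gradients can flow:

```python
import torch

def soft_xor(p, q):
    # Differentiable surrogate for logical XOR on values in [0, 1].
    # At 0/1 inputs it equals exact XOR; in between it is smooth.
    return p + q - 2 * p * q

p = torch.tensor([0.9, 0.1], requires_grad=True)
q = torch.tensor([0.2, 0.8], requires_grad=True)

loss = soft_xor(p, q).sum()
loss.backward()  # gradients flow, unlike with the `^` operator
print(p.grad)    # analytically, d(loss)/dp = 1 - 2q
```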

Same as 1 regarding gradients. As for efficiency, ^ on bool tensors performs the same elementwise operation as torch.logical_xor, so the performance should be equivalent.
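On PyTorch versions that do provide torch.logical_xor, you can sanity-check that the two spellings agree on bool tensors (hypothetical inputs):

```python
import torch

x = torch.tensor([True, False, True, False])
y = torch.tensor([True, True, False, False])

# On bool tensors, `^` dispatches to bitwise XOR, which computes the
# same elementwise result as torch.logical_xor.
assert torch.equal(x ^ y, torch.logical_xor(x, y))
print("equal")
```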

If you try to run this code with float tensors that require gradients, you will get the following error: "arguments don't support automatic differentiation". So if your code ran fine, it was because the inputs did not require gradients.
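A quick way to see this failure mode (the exact error message varies across PyTorch versions, since bitwise XOR is not defined for float dtypes in the first place, let alone differentiable):

```python
import torch

a = torch.rand(3, requires_grad=True)
b = torch.rand(3, requires_grad=True)

err = None
try:
    c = a ^ b  # raises: XOR is unsupported/non-differentiable for floats
except RuntimeError as e:
    err = e

print(type(err).__name__)  # a RuntimeError was raised
```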