Logical "or/and/not" functions in PyTorch

In NumPy, the truth value of x1 OR/AND/NOT x2 is computed element-wise with numpy.logical_or, numpy.logical_and, and numpy.logical_not.

The TensorFlow equivalents are tf.logical_or, tf.logical_and, and tf.logical_not,

but I could not find anything similar in the PyTorch docs. Is there an equivalent in PyTorch, and if not, would it be possible to add these functions?

On a byte tensor you can use the Python operator syntax for them, or call __or__, __and__, etc. directly.
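
For example, a minimal sketch with 0/1-valued uint8 tensors:

import torch

a = torch.tensor([1, 0, 1, 0], dtype=torch.uint8)
b = torch.tensor([1, 1, 0, 0], dtype=torch.uint8)

print(a | b)        # element-wise OR  -> tensor([1, 1, 1, 0], dtype=torch.uint8)
print(a & b)        # element-wise AND -> tensor([1, 0, 0, 0], dtype=torch.uint8)
print(a.__or__(b))  # same as a | b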


Awesome. That means the Python bitwise operators work like element-wise logical operators. For uint8 tensors a and b and their NumPy bool array equivalents A and B:

a | b ~= np.logical_or(A, B)
a & b ~= np.logical_and(A, B)
a ^ b ~= np.logical_xor(A, B)
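
As a quick sanity check (converting back to NumPy bool arrays for the comparison; the example values are just illustrative):

import torch
import numpy as np

a = torch.tensor([1, 0, 1, 0], dtype=torch.uint8)
b = torch.tensor([1, 1, 0, 0], dtype=torch.uint8)
A, B = a.numpy().astype(bool), b.numpy().astype(bool)

assert np.array_equal((a | b).numpy().astype(bool), np.logical_or(A, B))
assert np.array_equal((a & b).numpy().astype(bool), np.logical_and(A, B))
assert np.array_equal((a ^ b).numpy().astype(bool), np.logical_xor(A, B))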

Hi @albanD,

Kindly help me with two concerns regarding the logical xor operation between two tensors:

  1. Would the Python syntax like ^ (xor operation) be consistent with tensor.backward() gradient flow?
  2. The PyTorch version in my Docker image supports the torch.bool type but not the torch.logical_xor() operation, so I used bool_tensor_1 ^ bool_tensor_1. How efficient is ^ compared to torch.logical_xor?

Hi,

  1. The inputs to xor are binary, so gradients don’t really exist. We can only provide gradients for continuous number spaces (float/double).
  2. I am not sure about the difference, but they should route to the same implementation under the hood (see the quick check below).
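
As a rough check, on a PyTorch version that does have torch.logical_xor, the two forms give the same result on bool tensors (a minimal sketch):

import torch

a = torch.tensor([True, False, True, False])
b = torch.tensor([True, True, False, False])

print(a ^ b)                    # tensor([False,  True,  True, False])
print(torch.logical_xor(a, b))  # tensor([False,  True,  True, False])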

Does that mean that:

  1. In the loss function calculation, there should be no boolean data/numbers from start to end?
  2. Also, there should be no boolean operators from start to end (in the loss function calculation)?
  3. If I do float_tensor ^ float_tensor, its gradient would not be a valid thing?

Why didn’t my program throw any error? How did it interpret differentiation across the xor function, and differentiation of boolean numbers?

  1. If you want to be able to use automatic differentiation, yes. Note that in classical ML this is also why, when your original loss has non-continuous values, you introduce a surrogate loss so that you have something to optimize.
  2. Same as 1.
  3. If you try to run this code with float tensors that require gradients, you will get the following error: “arguments don’t support automatic differentiation”. So if your code ran fine, it was because the inputs did not require gradients (see the small check below).
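
As an illustration with bool tensors (a minimal sketch; the exact error text may differ between versions):

import torch

x = torch.tensor([True, False])
y = torch.tensor([False, False])

print((x ^ y).requires_grad)  # False: the result is not connected to autograd
# x.requires_grad_(True)      # would raise a RuntimeError: only floating point
#                             # (and complex) dtypes can require gradients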

These also work for torch.bool tensors (introduced in PyTorch 1.2).

There’s also an element-wise not operation:

~a == np.logical_not(A)

In PyTorch 1.4+, this works for both ByteTensors and BoolTensors; however, in PyTorch 1.2 (1.3 also?) it only worked for ByteTensors (i.e., uint8).
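
For example, with a BoolTensor on a recent PyTorch:

import torch
import numpy as np

a = torch.tensor([True, False, True])
A = a.numpy()

print(~a)                 # tensor([False,  True, False])
print(np.logical_not(A))  # [False  True False]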