What's the use of `torch.bool` tensors?

What’s the use of torch.bool tensors, if most layers use floats?

I see that the smallest-bitwidth dtype besides torch.bool is torch.quint4x2 (unsigned 4-bit integers packed two per byte), but only one kind of layer (the sparse nn.EmbeddingBag) supports it.

PyTorch’s docs on “Quantization” define it as:

> techniques for performing computations and storing tensors at lower bitwidths than floating point precision.

But as low as 1-bit?

See my related AI.StackExchange question.

You can use it e.g. as a masking tensor:

x = torch.randn(10)
print(x)
# tensor([ 0.0662,  0.6553,  0.7496,  0.2320, -0.0108, -0.2568,  0.7380,  0.3812,
#          0.2378,  1.4405])

mask = x > 0.
print(mask)
# tensor([ True,  True,  True,  True, False, False,  True,  True,  True,  True])
print(mask.dtype)
# torch.bool

y = x[mask]
print(y)
# tensor([0.0662, 0.6553, 0.7496, 0.2320, 0.7380, 0.3812, 0.2378, 1.4405])
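Beyond fancy indexing, bool tensors also work with `masked_fill` and `torch.where`, and support elementwise logic (`&`, `|`, `~`). One caveat: despite the name, a torch.bool tensor is stored with one byte per element, not one bit. A small sketch (variable names are my own, for illustration):

```python
import torch

torch.manual_seed(0)
x = torch.randn(10)

mask = x > 0.
# Storage caveat: one byte per bool element, not one bit
assert mask.element_size() == 1

# Combine masks with elementwise logical operators
in_band = (x > -0.5) & (x < 0.5)

# masked_fill: overwrite entries where the mask is True
zeroed = x.masked_fill(~mask, 0.0)

# torch.where: elementwise select between two tensors
clamped = torch.where(mask, x, torch.zeros_like(x))
print(torch.equal(zeroed, clamped))
# True
```

Bool masks are also what e.g. attention-mask arguments in `nn.Transformer` expect, so they show up well outside of indexing.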