Bitwise Operation on Float Tensor

Hello, I’d like to ask how I could do bitwise operations on float tensors. Currently I have to use struct to convert them, do the operation on the CPU, and move the result back, which is very inefficient… Is there a PyTorch solution for this?

I found previous questions from about 6 years ago, but there were no solutions there: Bitwise Operations on Cuda Float Tensor

On the other hand, is it possible to interpret a tensor as another type (not a type cast)? For example, could I interpret a DoubleTensor as a LongTensor? If I could do this, I could then use the bitwise operations that already exist for integer types…

view should work as seen here:

import torch

a = torch.tensor(1., device="cuda")
b = a.view(torch.int)  # reinterpret the float32 bits as int32 without copying
print(b)
# tensor(1065353216, device='cuda:0', dtype=torch.int32)

# both represent this bit pattern
# 00111111 10000000 00000000 00000000

and allows you to interpret the data in another format.
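
To tie this back to the original question, here is a minimal sketch of a bitwise operation done entirely on the GPU via this trick; the sign-bit mask and the abs-via-masking example are just illustrative choices, and a CUDA device is assumed:

import torch

x = torch.tensor([1.0, -2.5, 3.25], device="cuda")

# reinterpret the float32 bits as int32, clear the sign bit, and view the result back as float32
bits = x.view(torch.int32)
masked = bits & 0x7FFFFFFF  # clearing the sign bit yields abs(x)
y = masked.view(torch.float32)
print(y)
# tensor([1.0000, 2.5000, 3.2500], device='cuda:0')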


Why doesn’t this work with doubles? When I try torch.float64, I don’t get matching bit patterns.

Could you post an example showing the issue in the same way I’ve shown it works?

Never mind, I think I resolved the issue. I was manually comparing against the results of two IEEE 754 double-to-binary converters, which differ slightly from each other (e.g. for the closest representable value to 3.1415):

The former seems to match what .view(torch.long) gives.
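
For what it’s worth, a quick sanity check (a sketch that uses Python’s struct module, i.e. the CPU round trip the original post wanted to avoid, purely as a reference) confirms that .view(torch.long) reproduces the raw IEEE 754 bits of a double:

import struct
import torch

x = torch.tensor(3.1415, dtype=torch.float64)
bits = x.view(torch.long)  # reinterpret the float64 bits as int64

# reference interpretation of the same double via struct (native byte order)
ref = struct.unpack("=q", struct.pack("=d", 3.1415))[0]
print(bits.item() == ref)
# True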

Is there a way to preserve the grad_fn with view?

Sorry, I don’t understand the question. Could you explain how the bit representation discussed in this topic, which can be changed via the view operation, is related to a grad_fn?

For example, doing view(torch.int) on a tensor(6234., grad_fn=<DotBackward0>) makes it lose the grad_fn: tensor(1170395136, dtype=torch.int32)

Never mind, it isn’t possible, as “only Tensors of floating point dtype can require gradients”.

Thanks for the explanation. Interpreting the tensor in another dtype is not differentiable (integer dtypes are not differentiable by design), so you would need to implement your backward function manually, based on your definition of the backward operation.


Like this?

Almost. I don’t think directly assigning tensors to ctx and unpacking them later is a valid approach, since it could cause memory leaks, if I’m not mistaken.
Use ctx.save_for_backward and ctx.saved_tensors instead, as shown in the tutorial.
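
For completeness, here is a rough sketch of such a custom autograd.Function using ctx.save_for_backward / ctx.saved_tensors; the forward (abs via clearing the sign bit) and the chosen backward are illustrative assumptions, not code from this thread:

import torch

class BitwiseAbs(torch.autograd.Function):
    # hypothetical example: abs(x) implemented via a bitwise op on the float32 bits

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # store tensors for backward via save_for_backward
        bits = x.view(torch.int32)
        return (bits & 0x7FFFFFFF).view(torch.float32)  # clear the sign bit

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors  # retrieve them via saved_tensors
        return grad_output * torch.sign(x)  # manually defined gradient of abs

x = torch.randn(4, requires_grad=True)
y = BitwiseAbs.apply(x)
y.sum().backward()
print(x.grad)  # +/-1 depending on the sign of each element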
