Hello, I’d like to ask how I can perform bitwise operations on float tensors. Currently I have to use struct to convert them, do the operation on the CPU, and move the result back, which is very inefficient… Is there a PyTorch solution for this?
On the other hand, is it possible to reinterpret a tensor as another type (not a type cast)? For example, interpret a DoubleTensor as a LongTensor? If I could do this, I could then use the bitwise operations that already exist for integer types…
Tensor.view(dtype) allows you to interpret the data in another format without copying:

a = torch.tensor(1., device="cuda")
b = a.view(torch.int)
print(b)
# tensor(1065353216, device='cuda:0', dtype=torch.int32)
# both represent this bit pattern
# 00111111 10000000 00000000 00000000
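To sketch how this enables bitwise operations on float data without any CPU round trip, here is a minimal example that flips the sign bit of a float32 tensor via XOR; the sign-flip operation and the tensor names are illustrative, not from the thread:

```python
import torch

# Reinterpret float32 bits as int32, XOR the sign bit, reinterpret back.
x = torch.tensor([1.0, -2.5, 3.0])

bits = x.view(torch.int32)  # same memory, viewed as int32
sign_mask = torch.tensor(-2**31, dtype=torch.int32)  # bit pattern 0x80000000
flipped = (bits ^ sign_mask).view(torch.float32)  # XOR flips the sign bit

print(flipped)  # tensor([-1.0000,  2.5000, -3.0000])
```

The same pattern works on CUDA tensors, since view(dtype) and the integer bitwise ops both run on the GPU.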
Never mind, I think I resolved the issue. I was manually comparing against the results from two IEEE 754 double-to-binary converters, and there are slight differences (e.g. for the closest float to 3.1415).
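As a sanity check, the struct-based reinterpretation and the view-based one should produce identical bit patterns for the same input; the float32 case for 3.1415 below is only an illustration (the converters mentioned handled double precision, for which "<q"/torch.int64 would be the analogue):

```python
import struct
import torch

value = 3.1415

# Manual route: pack as float32, unpack the same bytes as int32.
manual_bits = struct.unpack("<i", struct.pack("<f", value))[0]

# PyTorch route: reinterpret the float32 tensor's storage as int32.
torch_bits = torch.tensor(value, dtype=torch.float32).view(torch.int32).item()

print(manual_bits == torch_bits)  # True: both see the same bit pattern
```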
Sorry, I don’t understand the question. Could you explain how the bit representation discussed in this topic, which can be changed via the view operation, is related to a grad_fn?
Thanks for the explanation. Interpreting the tensor in another dtype is not differentiable (integer dtypes are not differentiable by design), so you would need to implement the backward function manually, based on your definition of the backward operation.
Almost, although I don’t think directly assigning tensors to ctx and unpacking them later is a valid approach, since it could cause memory leaks, if I’m not mistaken.
Use ctx.save_for_backward and ctx.saved_tensors instead as shown in the tutorial.
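Putting the advice above together, here is a minimal sketch of a custom autograd.Function that wraps a bit-level operation and defines its backward manually; the sign-flip forward and its gradient of -1 are illustrative choices, not something prescribed by the thread:

```python
import torch

class FlipSignBits(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Use save_for_backward, not ctx.x = x, to avoid keeping
        # tensors alive incorrectly.
        ctx.save_for_backward(x)
        mask = torch.tensor(-2**31, dtype=torch.int32)  # 0x80000000
        # Bitwise XOR on the reinterpreted bits flips the sign bit.
        return (x.view(torch.int32) ^ mask).view(torch.float32)

    @staticmethod
    def backward(ctx, grad_output):
        # Saved only to demonstrate the pattern; not needed here.
        (x,) = ctx.saved_tensors
        # Flipping the sign bit negates the value, so d(out)/dx = -1.
        return -grad_output

x = torch.tensor([1.0, -2.0], requires_grad=True)
y = FlipSignBits.apply(x)
y.sum().backward()
print(x.grad)  # tensor([-1., -1.])
```

Note that inside forward the inputs are detached from the graph, so the integer view is legal there even though x itself requires grad.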