Bitwise Operations on CUDA Float Tensor

I would like to access the bit representation of a float tensor on a GPU and perform manipulations on it such as shifting, ANDing, etc. I am wondering what the best way to do this is, and whether it's even possible in PyTorch. Thanks for the help!


I also need this feature!

We only support bitwise operations on integer tensor types at the moment.
The supported operations are listed here:
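As a workaround, you can reinterpret a float's bits as an integer and do the bit manipulation on that. A minimal pure-Python sketch of the idea using the standard `struct` module (names like `float_to_bits` are illustrative, not a PyTorch API):

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a 32-bit float's IEEE-754 representation as an unsigned int."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    """Reinterpret a 32-bit unsigned int as a float."""
    return struct.unpack("<f", struct.pack("<I", b))[0]

bits = float_to_bits(1.0)                    # 1.0f is 0x3F800000
exponent = (bits >> 23) & 0xFF               # biased exponent field (127 for 1.0)
negated = bits_to_float(bits ^ 0x80000000)   # flipping the sign bit gives -1.0
```

This only handles one scalar at a time; for whole tensors you would want one of the approaches below.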

You can write your own C/C++ function that takes in the CUDA float* pointer and manipulates the bits yourself. Here's an example of writing a CUDA extension:

Alternatively, you can use the cupy package and write your whole CUDA kernel / manipulation in Python, for example:
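Since cupy mirrors the NumPy API, you can prototype the whole-array bit manipulation on CPU with NumPy's zero-copy `view` and then switch the import to cupy to run it on the GPU. A sketch (assumes a working cupy/GPU setup for the GPU variant):

```python
import numpy as np  # swap for `import cupy as np` to run the same code on GPU

floats = np.array([1.0, -2.0], dtype=np.float32)

# Zero-copy reinterpretation of the float32 buffer as uint32.
bits = floats.view(np.uint32)

# Bitwise ops now work elementwise on the integer view.
sign = bits >> 31                  # sign bits
exponent = (bits >> 23) & 0xFF     # biased exponent fields

# View back as float32 to read results: clearing the sign bit gives abs().
absolute = (bits & 0x7FFFFFFF).view(np.float32)
```

For logic that doesn't map onto elementwise array ops, cupy also lets you write a raw CUDA kernel as a Python string.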

What exactly is your use-case?