# Bitcast PyTorch vs TensorFlow (different shapes?)

Which implementation is “correct”?

# going from tiny to big

in PyTorch

```python
torch.tensor([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=torch.int16).view(torch.float64)
# outputs
tensor([[-0.9390],
        [-0.5190],
        [-0.4966],
        [ 1.1543]], dtype=torch.float64)
```

in TensorFlow

```python
tensorflow.bitcast(tensorflow.constant([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=tensorflow.int16), tensorflow.float64)
# outputs
[-0.938965,
 -0.519043,
 -0.496582,
  1.154297]
```

The data is the same; the difference is in the dimensions:

• one dimension less for TensorFlow, or one more for PyTorch
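Independently of either framework, the underlying bit reinterpretation can be checked with Python's stdlib `struct` (a sketch; little-endian byte order is assumed, which is what both frameworks use on typical x86/ARM hosts):

```python
import struct

# Pack the first row's four int16 values into 8 raw bytes,
# then reinterpret those bytes as a single float64.
row = [0, 0, 3072, -16402]
raw = struct.pack("<4h", *row)        # 4 x int16 -> 8 bytes
(value,) = struct.unpack("<d", raw)   # 8 bytes -> 1 float64
print(value)  # -0.93896484375
```

So every group of four int16 values collapses into one float64; the frameworks only disagree on whether the collapsed axis is kept as size 1 (PyTorch) or dropped (TensorFlow).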

# going from big to tiny

in PyTorch

```python
torch.tensor([100000], dtype=torch.float64).view(torch.uint8)
tensor([0, 0, 0, 0, 0, 106, 248, 64], dtype=torch.uint8)
```

in TensorFlow

```python
tensorflow.bitcast(tensorflow.constant([100000], dtype=tensorflow.float64), tensorflow.uint8)
# outputs
[[0, 0, 0, 0, 0, 106, 248, 64]]
```

The data is the same; the difference is in the dimensions:

• one dimension less for PyTorch, or one more for TensorFlow

The shape behavior is inverted between going from big to tiny and going from tiny to big.
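The raw bytes in the big-to-tiny direction can likewise be verified framework-independently with the stdlib (a sketch, assuming little-endian byte order):

```python
import struct

# The 8 raw bytes of the float64 value 100000.0, little-endian:
raw = struct.pack("<d", 100000.0)
print(list(raw))  # [0, 0, 0, 0, 0, 106, 248, 64]
```

Both frameworks agree on these bytes; only the shape of the result differs, `(8,)` vs `(1, 8)`.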

I added these snippets to show the problem:

# float64 to uint8

• `tensorflow.bitcast(tensorflow.constant([100000], dtype=tensorflow.float64), tensorflow.uint8)`
• `torch.tensor([100000], dtype=torch.float64).view(torch.uint8)`

# Worse, this one also differs in data, not only shape

but NumPy and PyTorch match:

• `torch.tensor([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=torch.float64).view(torch.int8)`
• `tensorflow.bitcast(tensorflow.constant([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=tensorflow.float64), tensorflow.int8)`
• `np.array([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=np.float64).view(dtype=np.int8)`
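For the case where the contents themselves seem to differ, a framework-independent reference can again be computed with the stdlib; for example, the bytes of the first row's third element, the float64 value 3072.0 (a sketch, little-endian assumed):

```python
import struct

# The float64 value 3072.0 reinterpreted as 8 signed int8 values:
raw = struct.pack("<d", 3072.0)
print(list(struct.unpack("<8b", raw)))  # [0, 0, 0, 0, 0, 0, -88, 64]
```

Any correct float64-to-int8 bitcast should reproduce these eight values for that element, whatever shape it wraps them in.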

Why are you not using the same dtypes in both frameworks? E.g. I see `int8` and `int16`?

Ah yeah, that's just a typo among all the tests I did and copied here (fixed now). As I said, there is no difference in contents, only in shapes; although in some cases the contents differ too.

Look at the extra `[]`: they are symmetric between the PyTorch and TensorFlow implementations.

I'm more confused now… by the example that shows not only a different shape but different values. (To be clear, by values I mean the whole row in the inner dimension: in the first capture only the outer shape is different, but in the second, where the shape differs, the inner values differ too.)

Should `bitcast` in TensorFlow and `.view(dtype)` in PyTorch be exactly equivalent? Or are they implemented differently by design? Or am I doing something wrong that I can't see (apart from copying the wrong example results)?

I don’t know how TF handles the shape, but it seems PyTorch follows NumPy’s approach as a reference.
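For reference, NumPy's shape rule can be seen directly (a sketch, assuming NumPy is installed; `view` reinterprets the existing buffer in place):

```python
import numpy as np

# Tiny -> big: the last axis shrinks, but a size-1 axis remains.
a = np.zeros((4, 4), dtype=np.int16)
print(a.view(np.float64).shape)  # (4, 1)

# Big -> tiny: the last axis grows in place; no new axis is added.
b = np.zeros(1, dtype=np.float64)
print(b.view(np.uint8).shape)    # (8,)
```

PyTorch's outputs above are consistent with this rule, while TensorFlow's `bitcast` appears to instead drop or add an innermost dimension.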

In case you are referring to the first example, increase the precision during printing:

```python
torch.set_printoptions(precision=6)
torch.tensor([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=torch.int16).view(torch.float64)
# tensor([[-0.938965],
#         [-0.519043],
#         [-0.496582],
#         [ 1.154297]], dtype=torch.float64)
```

which matches the TF output.

Let me know if anything in PyTorch looks wrong (I’m not deeply familiar with TF, but would assume a bitcast should be equal).