Which implementation is “correct”?

# Going from tiny to big

In PyTorch:

```
torch.tensor([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=torch.int16).view(torch.float64)
# outputs
tensor([[-0.9390],
        [-0.5190],
        [-0.4966],
        [ 1.1543]], dtype=torch.float64)
```

In TensorFlow:

```
tensorflow.bitcast(tensorflow.constant([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=tensorflow.int16), tensorflow.float64)
# outputs
[-0.938965,
 -0.519043,
 -0.496582,
  1.154297]
```

The data is the same; the difference is in the number of dimensions:

- one fewer for TensorFlow (shape `(4,)`) than for PyTorch (shape `(4, 1)`)
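As a cross-check (my own addition, not part of the original comparison), NumPy's `view` follows the PyTorch convention here: reinterpreting four `int16` values as one `float64` shrinks the last axis to 1 instead of dropping it.

```python
import numpy as np

# Same int16 payload as above, reinterpreted as float64 with NumPy.
# Four int16 values (8 bytes) become one float64, so the last axis
# shrinks from 4 to 1 and the shape goes (4, 4) -> (4, 1), matching
# PyTorch's view() rather than TensorFlow's bitcast().
a = np.array([[0, 0, 3072, -16402],
              [0, 0, -25600, -16416],
              [0, 0, -14336, -16417],
              [0, 0, 30720, 16370]], dtype=np.int16)
b = a.view(np.float64)
print(b.shape)    # (4, 1)
print(b.ravel())  # values match the PyTorch output above
```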

# Going from big to tiny

In PyTorch:

```
torch.tensor([100000], dtype=torch.float64).view(torch.uint8)
# outputs
tensor([0, 0, 0, 0, 0, 106, 248, 64], dtype=torch.uint8)
```

In TensorFlow:

```
tensorflow.bitcast(tensorflow.constant([100000], dtype=tensorflow.float64), tensorflow.uint8)
# outputs
[[0, 0, 0, 0, 0, 106, 248, 64]]
```

The data is the same; the difference is in the number of dimensions:

- one fewer for PyTorch (shape `(8,)`) than for TensorFlow (shape `(1, 8)`)
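To confirm the two frameworks agree on the raw bytes here, the standard-library `struct` module gives the little-endian IEEE 754 encoding of `100000.0` directly (a sanity check of my own, not from the original report):

```python
import struct

# Little-endian byte layout of 100000.0 as an IEEE 754 float64.
# These are exactly the eight values both frameworks print; they
# disagree only on the shape wrapped around them.
raw = struct.pack('<d', 100000.0)
print(list(raw))  # [0, 0, 0, 0, 0, 106, 248, 64]
```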

The two libraries invert their shape conventions: going from tiny to big, TensorFlow ends up with one dimension fewer than PyTorch; going from big to tiny, it ends up with one more.
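If the mismatch needs to be papered over in user code, a reshape is enough. Below is a small sketch (the helper name is mine, not a framework API): for a big-to-tiny cast with itemsize ratio r, TensorFlow appends a trailing axis of length r, while PyTorch and NumPy multiply the last axis by r, so merging TensorFlow's two trailing axes recovers the PyTorch/NumPy shape.

```python
# Sketch of reconciling the two shape conventions (helper name is
# hypothetical). For a big -> tiny cast, TensorFlow appends an axis
# of length r (the itemsize ratio), while PyTorch/NumPy multiply the
# last axis by r; merging TensorFlow's two trailing axes therefore
# recovers the PyTorch/NumPy shape.
def merge_trailing_axes(shape):
    *lead, a, b = shape
    return tuple(lead) + (a * b,)

print(merge_trailing_axes((1, 8)))     # (8,)  -- the float64 -> uint8 case above
print(merge_trailing_axes((4, 4, 8)))  # (4, 32)
```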

I've added these screenshots to show the problem:

# float64 to uint8

`tensorflow.bitcast(tensorflow.constant([100000], dtype=tensorflow.float64), tensorflow.uint8)`

`torch.tensor([100000], dtype=torch.float64).view(torch.uint8)`

# Worse, this one also differs in the data, not only the shape

But NumPy and PyTorch match:

`torch.tensor([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=torch.float64).view(torch.int8)`

`tensorflow.bitcast(tensorflow.constant([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=tensorflow.float64), tensorflow.int8)`

`np.array([[0, 0, 3072, -16402], [0, 0, -25600, -16416], [0, 0, -14336, -16417], [0, 0, 30720, 16370]], dtype=np.float64).view(dtype=np.int8)`
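Running the NumPy line shows the shape PyTorch agrees with; assuming TensorFlow's documented rule of appending a `sizeof(input)/sizeof(output)` axis on a narrowing bitcast, its result would instead be rank 3:

```python
import numpy as np

# float64 -> int8: each 8-byte float becomes 8 int8 values, so the
# last axis grows 4 -> 32 and the shape is (4, 32), as in PyTorch.
# TensorFlow's bitcast appends a new axis instead, giving (4, 4, 8).
c = np.array([[0, 0, 3072, -16402],
              [0, 0, -25600, -16416],
              [0, 0, -14336, -16417],
              [0, 0, 30720, 16370]], dtype=np.float64).view(np.int8)
print(c.shape)   # (4, 32)
print(c[0, :8])  # first value is 0.0, so eight zero bytes
```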