Numerical change from transpose operation?

I ran into a numerical quirk where the result of a matrix product changes depending on whether one operand is constructed directly or via a transpose. The difference is tiny, but it surprised me.

Start with:

import torch

x = torch.tensor([0.0014, -0.0306,  0.0005,  0.0011,  0.0012,  0.0022,  0.0017,  0.0011,
                  0.0017,  0.0011,  0.0012,  0.0017,  0.0014,  0.0015,  0.0010,  0.0006,
                  0.0006,  0.0004,  0.0009,  0.0007,  0.0008,  0.0007,  0.0013,  0.0013,
                  0.0015,  0.0023,  0.0007], dtype=torch.float64)

Now compute:

y1 = (x.unsqueeze(0) @ torch.ones(x.shape[0], 1, dtype=torch.float64)).item()
> 3.2526065174565133e-19

y2 = (x.unsqueeze(0) @ (torch.ones(1, x.shape[0], dtype=torch.float64).T)).item()
> 2.4936649967166602e-18
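
Incidentally, the operands have identical shapes in both cases; only their construction differs. As an extra probe (not part of the comparison above), the same reduction written as a 1-D dot product may land on yet another code path:

# may match y1, y2, or neither, depending on which kernel torch.dot dispatches to
y3 = torch.dot(x, torch.ones(x.shape[0], dtype=torch.float64)).item()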

I would not expect the transpose operation to have this effect. Does anyone know the underlying reason?

Different kernels can be dispatched for what is mathematically the same operation, and they do not produce bitwise-identical outputs. The two ones-tensors here have identical shapes and values, but the transposed one carries different stride metadata, so torch.matmul (and the BLAS routine underneath) can take a different code path that accumulates the products in a different order. Floating-point addition is not associative, so the last bits of the result differ. The gap only looks dramatic because the entries of x cancel almost exactly (their decimal sum is 0), leaving both y1 and y2 as pure rounding residue on the order of 1e-18.
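
A minimal sketch of both points (stride metadata and summation order); the printed values assume IEEE-754 double precision and a recent PyTorch:

import torch

a = torch.ones(27, 1, dtype=torch.float64)
b = torch.ones(1, 27, dtype=torch.float64).T

# Identical shapes and identical values...
print(a.shape, b.shape)        # torch.Size([27, 1]) torch.Size([27, 1])
print(torch.equal(a, b))       # True
# ...but different stride metadata, which can steer matmul to a different kernel:
print(a.stride(), b.stride())  # (1, 1) (1, 27)

# Floating-point addition is not associative: summing the same numbers
# in a different order can change the last bits of the result.
vals = [0.1] * 10
print(sum(vals))                          # 0.9999999999999999
print(sum(vals[0::2]) + sum(vals[1::2]))  # 1.0

So neither value is "wrong"; they are two equally valid roundings of a sum whose exact value is vanishingly small.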