Confusing result in torch.matmul()

I'm seeing a confusing result from a basic torch.matmul() call: multiplying the full matrix and multiplying only its first row give slightly different values.

    import torch

    all_sequence_output = torch.rand(12, 512)
    all_visual_output = torch.rand(1, 512)
    # Full matrix product vs. the product of a single row with the same matrix.
    retrieve_logits3 = torch.matmul(all_sequence_output, all_visual_output.t())
    retrieve_logits4 = torch.matmul(all_sequence_output[0], all_visual_output.t())
    print(torch.equal(retrieve_logits3[0], retrieve_logits4))
    print(retrieve_logits3[0], retrieve_logits4)

result:

    False
    tensor([127.57218933105468750000000000000000]) tensor([127.57218170166015625000000000000000])

This is expected behavior caused by the limited precision of floating point arithmetic combined with a potentially different order of operations: the batched matmul and the single-row matmul may accumulate the 512-element dot product in different orders, and because floating-point addition is not associative, the rounded results can differ in the last few bits.
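
As a minimal sketch of why the two computations can disagree (the tensors below are illustrative, not taken from the report): float32 addition is order dependent, so accumulating the same values in a different order can change the last bits of the result, and comparisons of floating-point outputs should use torch.allclose() rather than torch.equal():

    import torch

    # Summing the same three float32 numbers in a different order
    # gives a different rounded result (addition is not associative).
    x = torch.tensor([1e8, 1.0, -1e8])             # float32 by default
    left_to_right = (x[0] + x[1]) + x[2]           # 1e8 + 1 rounds back to 1e8, then -1e8 -> 0.0
    reordered     = (x[0] + x[2]) + x[1]           # 1e8 - 1e8 = 0, then + 1 -> 1.0
    print(left_to_right.item(), reordered.item())  # 0.0 1.0

    # For the matmul comparison, allow a tolerance instead of exact equality.
    a = torch.rand(12, 512)
    b = torch.rand(1, 512)
    full = torch.matmul(a, b.t())
    row  = torch.matmul(a[0], b.t())
    print(torch.allclose(full[0], row))            # True within default tolerances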