This looks like a floating-point precision issue: s = n @ n.T is symmetric in exact arithmetic, but round-off can make s and s.T differ slightly. If this is a problem at some point in your code, you can either

increase the numerical precision from the default float32 to float64, i.e., in this case n = torch.randn(200, 385, dtype=torch.float64),

or do the equality check with a higher tolerance, e.g., torch.allclose(s, s.T, atol=1e-4). Note that the default is atol=1e-8, and the "a" stands for absolute tolerance (there is also a relative tolerance, rtol, which defaults to 1e-5).
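As a sketch of both options (assuming the symmetric matrix comes from a product like s = n @ n.T, which is a guess about your setup):

```python
import torch

torch.manual_seed(0)

# Default dtype (float32): s is symmetric in exact arithmetic,
# but torch.equal(s, s.T) may be False due to round-off in the matmul.
n32 = torch.randn(200, 385)
s32 = n32 @ n32.T

# Option 1: carry the computation out in float64 to shrink the round-off error.
n64 = torch.randn(200, 385, dtype=torch.float64)
s64 = n64 @ n64.T

# Option 2: keep float32 but compare with a looser absolute tolerance
# (the default is atol=1e-8, rtol=1e-5).
print(torch.allclose(s32, s32.T, atol=1e-4))
print(torch.allclose(s64, s64.T))
```

Even in float64 the two triangles are not guaranteed to be bitwise equal, so the allclose-style comparison is the more robust check of the two.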