PyTorch problem with normalization

import torch

t1 = torch.tensor([0.0906, 0.1655, 0.0000, 0.3719, 0.0000, 0.0000, 0.4430, 0.0000, 0.3730,
0.1266, 0.0000, 0.6395, 0.0000, 0.1207, 0.4568, 0.0000, 0.6271, 0.0000,
0.0000, 0.7378, 0.0000, 0.3717, 0.2919, 0.0000, 0.8226, 0.0000, 0.0000,
0.6840, 0.0000, 0.6421, 0.0147, 0.0000, 0.9095, 0.0000, 0.2307, 0.4923,
0.0000, 0.8629, 0.0000, 0.0000, 0.8621, 0.0000, 0.5355, 0.1971, 0.0000,
0.9826, 0.0000, 0.0211, 0.6814, 0.0000, 0.7961, 0.0000, 0.0000, 0.9715,
0.0000, 0.3510, 0.3933, 0.0000, 0.9612, 0.0000, 0.0000, 0.8254, 0.0000,
0.6456, 0.0433, 0.0000, 0.9984, 0.0000, 0.1246, 0.5653, 0.0000, 0.8528,
0.0000, 0.0000, 0.8991, 0.0000, 0.4378, 0.2331, 0.0000, 0.9367, 0.0000,
0.0000, 0.6800, 0.0000, 0.6727, 0.0000, 0.0000, 0.8842, 0.0000, 0.2043,
0.3807, 0.0000, 0.7892, 0.0000, 0.0000, 0.7080, 0.0000, 0.4442, 0.0571,
0.0000, 0.7686, 0.0000, 0.0000, 0.4457, 0.0000, 0.5654, 0.0000, 0.0000,
0.6181, 0.0000, 0.1990, 0.1562, 0.0000, 0.5377, 0.0000, 0.0000, 0.3707,
0.0000, 0.2729, 0.0000, 0.0000, 0.3420, 0.0000, 0.0000, 0.0755, 0.0000,
0.0689, 0.0000])
t2 = torch.tensor([0.0000e+00, 0.0000e+00, 1.0001e-07, 1.4898e-34, 0.0000e+00, 1.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 8.2454e-35, 5.5626e-36, 3.1411e-37,
0.0000e+00, 5.0282e-37, 0.0000e+00, 0.0000e+00, 2.9625e-36, 4.6642e-33,
5.1396e-26, 0.0000e+00, 1.2186e-08, 6.8570e-37, 1.8719e-35, 0.0000e+00,
0.0000e+00, 8.0672e-16, 9.9961e-01, 5.7946e-19, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 1.1239e-29, 1.1849e-23, 8.2719e-32, 1.9147e-23,
0.0000e+00, 3.9956e-16, 0.0000e+00, 5.9800e-19, 0.0000e+00, 0.0000e+00,
0.0000e+00, 6.7426e-16, 0.0000e+00, 2.4604e-32, 6.8941e-32, 6.1563e-34,
0.0000e+00, 9.1442e-25, 0.0000e+00, 1.0000e+00, 6.3760e-21, 0.0000e+00,
0.0000e+00, 7.6713e-31, 0.0000e+00, 0.0000e+00, 0.0000e+00, 4.8445e-12,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.4843e-12, 0.0000e+00,
2.3431e-32, 9.9998e-01, 1.3083e-15, 0.0000e+00, 0.0000e+00, 0.0000e+00,
9.9988e-01, 0.0000e+00, 0.0000e+00, 4.5408e-27, 4.0411e-31, 0.0000e+00,
5.0567e-39, 0.0000e+00, 0.0000e+00, 5.6249e-09, 0.0000e+00, 4.4536e-16,
0.0000e+00, 9.9994e-01, 0.0000e+00, 0.0000e+00, 0.0000e+00, 6.2092e-19,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.6951e-15, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 9.9993e-01,
9.9991e-01, 0.0000e+00, 0.0000e+00, 0.0000e+00, 6.4316e-14, 0.0000e+00,
0.0000e+00, 0.0000e+00, 4.3031e-26, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 3.6186e-08, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
2.1497e-17, 0.0000e+00, 2.2815e-09, 2.2894e-35, 8.5116e-30, 1.0000e+00,
4.4553e-29, 1.0000e+00])

norm = torch.nn.functional.normalize((t1*t2).unsqueeze(0), p=1.0, dim=1).unsqueeze(-1)

print(norm.sum())

The result does not look reasonable to me. I am wondering whether the wrong result is caused by the float precision of the tensors?

I can see your frustration with that, but most of t2 is 0, and thus the product t1 * t2 is mostly 0 as well. Its L1 norm ends up below the eps that normalize uses to avoid division by zero (1e-12 by default), so the output is the product divided by eps rather than by its own norm, and its sum stays far below 1. If you need unit vectors from this (in any norm), you are likely in trouble with something beyond normalize not doing what you want for your input.
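To see where the numbers end up, here is a minimal sketch, assuming the tensors above and normalize's default eps of 1e-12 (F.normalize computes v / max(||v||_p, eps)); the fallback at the end is purely illustrative:

prod = t1 * t2
print(prod.abs().sum())   # L1 norm of the product; well below eps for these tensors

# Since the norm is smaller than eps, normalize divides by eps instead,
# so the result no longer sums to 1.
norm = torch.nn.functional.normalize(prod.unsqueeze(0), p=1.0, dim=1)
print(norm.sum())         # far below 1 for this input

# One illustrative guard: only divide by the norm when it is meaningful,
# and fall back to e.g. a uniform vector otherwise.
l1 = prod.abs().sum()
safe = prod / l1 if l1 > 1e-12 else torch.full_like(prod, 1.0 / prod.numel())
print(safe.sum())         # sums to 1 either way (up to float rounding)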

Best regards

Thomas

Thanks, I get it. I changed some parts of my code and got a different output. :grinning: