# About the difference between `none` and `sum` in `reduction` for BCELoss

According to my understanding, `nloss.sum()` and `sloss` in the code below should be equal, but the actual results differ. Why is this?

```python
import torch
import torch.nn as nn

mean_loss = nn.BCELoss(reduction='mean')
sum_loss = nn.BCELoss(reduction='sum')
none_loss = nn.BCELoss(reduction='none')

prediction = torch.rand((4, 1, 256, 256))
target = torch.randint(0, 2, (4, 1, 256, 256), dtype=torch.float32)

mloss = mean_loss(prediction, target)
sloss = sum_loss(prediction, target)
nloss = none_loss(prediction, target)

print(f"none_loss: {nloss}")
print(f"none_loss size: {nloss.size()}")

print(f"none_loss sum: {nloss.sum()}")
print(f"sum_loss: {sloss}")

print(f"none_loss mean: {nloss.mean()}")
print(f"sum_loss/(4*1*256*256): {sloss / (4 * 1 * 256 * 256)}")
print(f"mean_loss: {mloss}")
```

output:

```
none_loss: tensor([[[[1.3495, 0.2165, 0.1645,  ..., 0.9105, 1.3246, 2.3588],
[0.2977, 1.2160, 0.2346,  ..., 0.2156, 0.0112, 0.0647],
[2.3740, 0.0570, 1.7437,  ..., 1.9400, 0.1285, 2.8168],
...,
[0.6525, 0.3042, 0.9111,  ..., 1.0126, 1.0627, 6.1224],
[4.6391, 0.6456, 0.9346,  ..., 0.9919, 0.0441, 0.0186],
[1.5025, 0.8117, 0.3026,  ..., 1.2144, 0.8634, 0.0161]]],

[[[1.8923, 1.4190, 1.1883,  ..., 0.4101, 1.8231, 0.7425],
[0.0231, 0.7847, 2.2190,  ..., 0.5121, 0.1161, 1.3471],
[0.9289, 3.2961, 0.2482,  ..., 2.1655, 0.3986, 0.3489],
...,
[0.0197, 2.0553, 0.0988,  ..., 0.0907, 0.8908, 0.6585],
[0.7845, 4.9424, 1.9141,  ..., 2.5432, 2.2620, 1.1243],
[0.6280, 0.2520, 1.0300,  ..., 0.4113, 1.2322, 0.4129]]],

[[[3.9760, 1.3871, 0.5002,  ..., 0.0077, 0.3474, 0.0095],
[3.5548, 2.1170, 1.0698,  ..., 0.3680, 2.2655, 0.0573],
[1.6277, 2.4666, 0.0724,  ..., 0.6252, 0.1662, 0.3559],
...,
[1.9241, 0.0977, 1.0205,  ..., 1.4400, 0.4261, 1.4168],
[0.3839, 1.5772, 0.2207,  ..., 0.4982, 1.3771, 0.0717],
[0.6420, 0.4036, 0.5927,  ..., 1.3667, 2.6873, 0.0586]]],

[[[1.5006, 0.1055, 0.5871,  ..., 0.2185, 0.3801, 0.0206],
[0.0636, 0.2080, 0.5735,  ..., 0.3822, 0.7734, 0.7849],
[0.8799, 1.9971, 1.4169,  ..., 0.0417, 0.4123, 1.9111],
...,
[0.9491, 0.8655, 0.8146,  ..., 0.0889, 1.1045, 0.0404],
[2.3005, 0.9741, 1.6709,  ..., 0.3271, 0.4325, 1.3791],
[1.7653, 6.5351, 1.3232,  ..., 0.4984, 0.6124, 1.4612]]]])
none_loss size: torch.Size([4, 1, 256, 256])
none_loss sum: 261970.9375
sum_loss: 261969.8125
none_loss mean: 0.9993398189544678
sum_loss/(4*1*256*256): 0.999335527420044
mean_loss: 0.999335527420044
```

They are equal in principle; the difference comes from floating-point precision. Changing the `dtype` of `prediction` and `target` to `torch.float64` will give you a closer answer, but floating-point arithmetic is inexact, so the two values will never be exactly the same.
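The root cause is that every floating-point addition rounds, so accumulating the same values in a different order (or with different intermediate precision) gives slightly different totals. A minimal stdlib-only sketch of the effect, using Python's `math.fsum` as an exactly rounded reference:

```python
import math

# 0.1 has no exact binary representation, so every addition rounds a little.
xs = [0.1] * 10

print(sum(xs))        # 0.9999999999999999 -- error accumulates step by step
print(math.fsum(xs))  # 1.0                -- exactly rounded summation
```

The `reduction='sum'` kernel and `nloss.sum()` accumulate the same per-element losses, but not necessarily in the same order, so their rounding errors differ.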

It does seem that way.

```python
import torch
import torch.nn as nn

mean_loss = nn.BCELoss(reduction='mean')
sum_loss = nn.BCELoss(reduction='sum')
none_loss = nn.BCELoss(reduction='none')

torch.set_default_tensor_type(torch.DoubleTensor)  # deprecated in newer PyTorch; torch.set_default_dtype(torch.float64) is the replacement
prediction = torch.rand((4, 1, 256, 256))
target = torch.randint(0, 2, (4, 1, 256, 256)).to(prediction.dtype)

mloss = mean_loss(prediction, target)
sloss = sum_loss(prediction, target)
nloss = none_loss(prediction, target)

print(f"none_loss: {nloss}")
print(f"none_loss size: {nloss.size()}")

print(f"none_loss sum - sum_loss: {nloss.sum() - sloss}")
print(f"sum_loss: {sloss}")

print(f"none_loss mean - mean_loss: {nloss.mean() - mloss}")
print(f"none_loss mean - sum_loss/(4*1*256*256): {nloss.mean() - sloss / (4 * 1 * 256 * 256)}")

print(f"mean_loss - sum_loss/(4*1*256*256): {mloss - sloss / (4 * 1 * 256 * 256)}")
```

output:

```
none_loss: tensor([[[[1.6909e-01, 2.2602e-01, 2.9009e-01,  ..., 6.7879e-01,
2.9715e-01, 2.2295e-01],
[1.6360e-01, 6.4142e-01, 4.2995e+00,  ..., 5.5690e+00,
3.1017e+00, 1.5648e+00],
[2.1270e-02, 1.3671e+00, 1.0986e+00,  ..., 6.4392e-01,
4.7506e-01, 3.2062e+00],
...,
[3.1378e-01, 5.4515e-01, 1.8998e+00,  ..., 2.4271e+00,
4.0148e-02, 8.0373e-01],
[2.1827e+00, 2.1392e+00, 4.0147e-02,  ..., 1.5841e+00,
3.2365e-01, 4.0541e-01],
[4.8378e-03, 2.2808e+00, 1.0209e+00,  ..., 1.0025e-01,
8.9763e-01, 6.0305e-01]]],

[[[8.6865e-01, 2.3439e-01, 9.7892e-01,  ..., 2.8537e-01,
8.0262e-01, 1.4477e+00],
[3.6908e-01, 1.3302e-01, 3.6755e-01,  ..., 4.4746e-01,
1.1312e+00, 1.5401e+00],
[5.9827e-01, 1.2599e-01, 4.1681e-01,  ..., 1.4241e+00,
1.2873e-01, 3.6063e-02],
...,
[2.1580e-02, 1.2570e+00, 3.5218e-01,  ..., 5.6630e-04,
1.1745e+00, 8.3446e-01],
[6.5295e-01, 2.8552e-01, 6.6924e-01,  ..., 3.3659e-01,
5.2286e-01, 4.0466e+00],
[3.0521e-02, 4.1409e-01, 6.3107e-02,  ..., 5.7095e-01,
9.7246e-01, 1.6637e-02]]],

[[[1.2195e+00, 1.6163e+00, 6.2884e-01,  ..., 9.3902e-01,
8.8280e-01, 9.0248e-01],
[6.1688e-01, 1.1943e+00, 6.5388e-01,  ..., 2.2729e-02,
1.8613e+00, 4.6382e-01],
[2.0489e+00, 4.9627e-01, 1.0688e+00,  ..., 1.2870e+00,
4.2555e+00, 6.5962e-01],
...,
[4.7444e-02, 1.7521e-01, 9.8799e-01,  ..., 1.7306e-01,
4.7791e-01, 9.8417e-01],
[5.8264e+00, 2.9936e-01, 6.4415e-01,  ..., 1.5320e+00,
8.1047e-02, 1.1640e+00],
[2.4262e-02, 3.5514e-01, 3.5209e-01,  ..., 8.9354e-01,
2.3711e+00, 1.9904e-01]]],

[[[8.9773e-01, 1.8652e-01, 2.3964e+00,  ..., 1.5578e-01,
4.3828e-01, 1.4039e-01],
[9.0407e-01, 3.1064e-01, 5.5755e-01,  ..., 2.1385e+00,
3.2113e-01, 1.1566e+00],
[4.6947e-01, 8.2999e-01, 3.8598e-01,  ..., 6.5852e-01,
3.8495e-01, 2.8922e+00],
...,
[1.0772e-01, 4.0390e-01, 2.4569e+00,  ..., 1.2215e+00,
1.4233e-01, 1.3202e+00],
[1.8880e-02, 3.5397e-01, 4.5789e-01,  ..., 3.6323e-01,
1.5102e+00, 1.8710e-01],
[3.0939e-01, 1.5822e-01, 8.6607e-01,  ..., 8.2385e-01,
1.7487e+00, 1.0783e+00]]]])
none_loss size: torch.Size([4, 1, 256, 256])
none_loss sum - sum_loss: 2.153683453798294e-09
sum_loss: 261975.25710102005
none_loss mean - mean_loss: 8.215650382226158e-15
none_loss mean - sum_loss/(4*1*256*256): 8.215650382226158e-15
mean_loss - sum_loss/(4*1*256*256): 0.0
```

When I comment out `torch.set_default_tensor_type(torch.DoubleTensor)`, the output is:

```
none_loss: tensor([[[[3.5456e+00, 3.7387e+00, 8.4511e-01,  ..., 8.3419e-01,
3.1007e-01, 6.0784e-01],
[3.2497e-01, 7.1582e-02, 1.0087e+00,  ..., 1.0328e+00,
1.1397e+00, 4.6922e-01],
[9.5055e-01, 1.1585e+00, 5.0625e-01,  ..., 9.2214e-02,
1.2421e+00, 1.0988e+00],
...,
[8.3196e-01, 1.9343e+00, 3.8697e+00,  ..., 5.8421e-01,
3.5900e-01, 1.4599e+00],
[1.7304e+00, 3.9824e+00, 6.6815e-01,  ..., 8.6631e-01,
7.6646e-02, 7.7781e-01],
[3.0506e+00, 1.0827e+00, 3.8251e+00,  ..., 1.8362e+00,
4.5225e-01, 2.1761e-01]]],

[[[6.5931e-01, 2.4224e+00, 2.9061e-01,  ..., 1.5686e+00,
4.9234e+00, 7.5245e-01],
[4.5296e-01, 7.7876e-01, 3.3517e-01,  ..., 4.0715e-01,
1.8543e-01, 5.9114e-02],
[1.2131e+00, 1.4136e+00, 5.6655e-01,  ..., 1.6677e+00,
1.1074e-01, 3.2634e-01],
...,
[2.0635e+00, 6.5898e-02, 5.1794e-01,  ..., 2.5420e-01,
1.9394e-01, 1.3224e-01],
[7.3909e-01, 8.0544e-01, 1.9719e+00,  ..., 1.1651e+00,
1.4917e+00, 1.2840e-01],
[1.1194e+00, 3.3584e-01, 6.4722e-01,  ..., 1.1440e+00,
2.4911e-01, 4.8057e-01]]],

[[[3.3183e-01, 2.2492e-01, 1.5291e+00,  ..., 9.8453e-01,
4.4725e-01, 1.3618e-01],
[7.1685e-01, 1.9812e+00, 3.0295e-01,  ..., 4.9616e-01,
4.2325e-01, 1.2133e+00],
[1.1641e+00, 5.6044e-01, 3.5851e-01,  ..., 8.3811e-01,
2.6152e-01, 1.2948e+00],
...,
[4.4844e-01, 1.6640e+00, 1.4553e+00,  ..., 3.4472e+00,
2.0390e-03, 5.9021e-01],
[2.0639e-03, 7.7447e-01, 7.5002e-01,  ..., 5.5059e-01,
1.1729e+00, 4.1135e-01],
[3.1777e-01, 1.4453e-01, 1.4527e+00,  ..., 1.9885e-01,
1.9115e+00, 2.5113e+00]]],

[[[5.8665e-01, 7.0809e-01, 2.9890e-01,  ..., 2.6212e-01,
2.9796e-01, 2.8047e-01],
[1.4107e+00, 9.7047e-03, 2.5597e-01,  ..., 4.2398e-01,
1.3963e+00, 1.2573e-01],
[3.5281e-01, 8.8076e-01, 1.1918e-01,  ..., 1.3587e+00,
2.2086e+00, 9.7953e-01],
...,
[1.7415e-01, 2.1369e-01, 1.1759e+00,  ..., 3.1513e-01,
2.6444e+00, 6.0460e-01],
[4.3002e-01, 8.0646e-02, 5.3878e-01,  ..., 3.2743e-01,
3.4625e+00, 8.5545e-01],
[1.4224e-01, 1.2358e+00, 1.2238e+00,  ..., 7.6897e-02,
1.5435e+00, 2.6061e-01]]]])
none_loss size: torch.Size([4, 1, 256, 256])
none_loss sum - sum_loss: 2.1875
sum_loss: 262595.4375
none_loss mean - mean_loss: 8.344650268554688e-06
none_loss mean - sum_loss/(4*1*256*256): 8.344650268554688e-06
mean_loss - sum_loss/(4*1*256*256): 0.0
```

The order of magnitude of the difference changed from about `1e-15` to about `1e-6`.
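That jump is consistent with the machine epsilon of the two formats: float32 carries about 7 significant decimal digits (eps ≈ 1.2e-7), float64 about 16 (eps ≈ 2.2e-16). A rough, stdlib-only estimate of the resolution near the sums above (the `262595.4375` figure is taken from the float32 run):

```python
import sys

eps32 = 2.0 ** -23               # float32 machine epsilon (~1.19e-07)
eps64 = sys.float_info.epsilon   # float64 machine epsilon (~2.22e-16)

total = 262595.4375              # magnitude of sum_loss in the run above

print(total * eps32)  # ~0.031: spacing of representable float32 values there
print(total * eps64)  # ~5.8e-11: spacing of representable float64 values
```

On that scale, the observed `none_loss sum - sum_loss` gaps (`2.1875` in float32, `~2.2e-9` in float64) are each only a few dozen units in the last place, which is plausible after roughly 260k additions.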