# Conv1d results depend on length of input data

Hi there,

I get different results from Conv1d when using sub-length of data.
A simple example:

```python
import torch

conv = torch.nn.Conv1d(40, 30, 3, stride=1, padding=1, dilation=1)
test_data = torch.randn((1, 40, 705), dtype=torch.float32)
```

I expect to get the same results when passing all of the data vs. a sub-length of it (of course, only comparing the overlapping part of the results).
But when I compute the following results

```python
r1 = conv(test_data)
r2 = conv(test_data[:, :, :405])
r3 = conv(test_data[:, :, :603])
```

they differ (I only compare the first 30 positions of the results):

```python
(r1[:, :, :30] == r2[:, :, :30]).all()
>>> tensor(False)
(r1[:, :, :30] == r3[:, :, :30]).all()
>>> tensor(True)
```

What am I missing? I expect all results to be the same, but they differ for a sub-length of 405, yet not for the larger sub-length of 603.

You are most likely seeing small absolute errors due to the limited floating-point precision, and you can’t expect bitwise-equal values if the order of operations is not fixed (or unless deterministic algorithms are enabled).
You could see the same limitation by comparing the results of:

```python
x = torch.randn(100, 100, 100)
s1 = x.sum()
s2 = x.sum(0).sum(0).sum()
print((s1 - s2).abs().max())
> tensor(0.0002)
```
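Instead of checking for bitwise equality, a tolerance-based comparison such as `torch.allclose` is usually the right tool. A minimal sketch, reusing the setup from the question:

```python
import torch

conv = torch.nn.Conv1d(40, 30, 3, stride=1, padding=1, dilation=1)
test_data = torch.randn((1, 40, 705), dtype=torch.float32)

r1 = conv(test_data)
r2 = conv(test_data[:, :, :405])

# Bitwise equality may fail, since the backend can pick a different
# algorithm (and thus a different order of operations) per input size...
print((r1[:, :, :30] == r2[:, :, :30]).all())

# ...but the values agree within floating-point tolerance.
print(torch.allclose(r1[:, :, :30], r2[:, :, :30], atol=1e-4))
```

The `atol` value is a judgment call; something around the expected rounding error of the accumulation is reasonable.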

You can check the root-mean-square difference between the results via:

```python
print(torch.mean((r1[:, :, :30] - r2[:, :, :30]) ** 2) ** 0.5)
```

It’s fairly small, on the order of 1e-6 to 1e-8.

The actual algorithms used to compute convolutions are not implemented in PyTorch itself but come from third-party libraries (e.g. oneDNN on the CPU or cuDNN on the GPU), which may select different algorithms depending on the input size.

Keep in mind that torch.float32 only carries about 7 significant decimal digits of precision, so a mean difference of ~1e-6 is at the level of rounding error.
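You can inspect the precision limits of each dtype directly via `torch.finfo`:

```python
import torch

# Machine epsilon: the gap between 1.0 and the next representable value.
print(torch.finfo(torch.float32).eps)  # ~1.2e-07
print(torch.finfo(torch.float16).eps)  # ~9.8e-04

# float32 cannot represent more than ~7 significant decimal digits:
x = torch.tensor(0.123456789, dtype=torch.float32)
print(x.item())  # the trailing digits are already rounded away
```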

Additionally, in machine learning applications you generally do not need such high precision; even half precision is often sufficient.
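To illustrate how coarse half precision actually is, a quick sketch:

```python
import torch

a = torch.tensor(2048.0, dtype=torch.float16)
b = torch.tensor(1.0, dtype=torch.float16)

# float16 has a 10-bit mantissa, so at magnitude 2048 the spacing
# between representable values is already 2: adding 1 rounds away.
print((a + b).item())  # 2048.0
```

Despite this, training with float16 (usually via mixed precision) works well in practice because the model does not depend on the low-order digits.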