Finite differences along a specific dimension

I need to compute the approximate derivative of a tensor x using finite differences, i.e. x[1:] - x[:-1]. My problem is that my input tensor is multidimensional, and I need to compute these finite differences along different dimensions. I would thus need a generic function f(x, dim) that basically does the following:

def f(x, dim):
    if dim == 0:
        return x[1:] - x[:-1]
    if dim == 1:
        return x[:, 1:] - x[:, :-1]
    if dim == 2:
        return x[:, :, 1:] - x[:, :, :-1]
    if dim == -1:
        return x[..., 1:] - x[..., :-1]
    # and so on for every other dimension

The only workaround I found is index_select, but building an index tensor and selecting with it is much slower than plain slicing, as you can see below:

$ python3 -m timeit -s 'import torch; a = torch.randn(5, 2048, 5)' 'a[:,1:,:] - a[:,:-1,:]'
10000 loops, best of 3: 75.4 usec per loop
$ python3 -m timeit -s 'import torch; a = torch.randn(5, 2048, 5)' 'indices = torch.arange(a.size(1)); torch.index_select(a, 1, indices[1:]) - torch.index_select(a, 1, indices[:-1])'
10 loops, best of 3: 26.8 msec per loop

Is there a clean and fast way to solve this problem?

You can use narrow to your advantage:

import torch

a = torch.randn(5, 2048, 5)
# check that narrow produces exactly the same result as slicing
print(((a[:, 1:, :] - a[:, :-1, :]) - (a.narrow(1, 1, a.size(1) - 1) - a.narrow(1, 0, a.size(1) - 1))).abs().max().item())
%timeit a[:, 1:, :] - a[:, :-1, :]
%timeit a.narrow(1, 1, a.size(1) - 1) - a.narrow(1, 0, a.size(1) - 1)

gets me

0.0
14.3 µs ± 190 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
11.9 µs ± 15.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

so by calling narrow directly I even shave off a couple of µs.
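
For completeness, a generic version of the f(x, dim) you asked for can be built on narrow. This is only a sketch: the name finite_diff and the dim % x.dim() normalization for negative dimensions are my own additions.

import torch

def finite_diff(x, dim):
    # Difference x along an arbitrary dimension without writing
    # a separate slice expression per dimension.
    dim = dim % x.dim()  # normalize negative dims such as -1
    n = x.size(dim)
    return x.narrow(dim, 1, n - 1) - x.narrow(dim, 0, n - 1)

Recent PyTorch releases also provide torch.diff(x, dim=dim), which computes this difference directly, if upgrading is an option.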

Best regards

Thomas

It works perfectly! Thanks a lot for your answer!

What about a[1:,:,:] - a[:-1,:,:]?
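
(For reference, that is just the dim = 0 case of the same pattern; a sketch, assuming the same a as above:)

import torch

a = torch.randn(5, 2048, 5)
# equivalent of a[1:, :, :] - a[:-1, :, :] via narrow along dim 0
a.narrow(0, 1, a.size(0) - 1) - a.narrow(0, 0, a.size(0) - 1)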