Hello,

I have a tensor with shape Nx2 that stores coordinates (x, y). I want to compute the x-distance and the y-distance between all pairs of coordinates, so with N coordinates I will end up with `N*N` distances. I could easily do this with two loops or with Python's itertools, but it would be much faster to do it with PyTorch on the GPU.

Is there an easy possibility to combine each element in a tensor with every other element in this tensor?

The number of coordinates will be < 100, so in the worst case there will be 10,000 distances to compute. GPU memory should therefore not be a problem.

Best,

Simon

I think broadcasting might work:

```
import torch

# 10 "coordinates": each value repeated in both columns
x = torch.arange(10).view(-1, 1).expand(10, 2)
# broadcast a row vector against a column vector -> 10 x 10 pairwise differences
x[:, 0] - x[:, 1].view(-1, 1)
```
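Applied to the N x 2 layout from the question, the same broadcasting trick can produce the x- and y-differences for all pairs in one expression (a minimal sketch; `coords` and `diff` are stand-in names):

```
import torch

coords = torch.randn(5, 2)  # N x 2 tensor of (x, y) coordinates
# broadcast shapes 1 x N x 2 and N x 1 x 2 against each other:
# diff[i, j] == coords[j] - coords[i], so diff has shape N x N x 2
diff = coords.unsqueeze(0) - coords.unsqueeze(1)
```

`diff[..., 0]` then holds all pairwise x-distances and `diff[..., 1]` all pairwise y-distances.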


Awesome, thank you!

The `expand` is not necessary (unless I'm missing something). Your idea extended to a batch of coordinates looks like this:

```
# y shape:       bs x n_lm x 2  (batch of coordinates)
# dist_gt shape: bs x n_lm x n_lm x 2  (pairwise x- and y-distances)
dist_gt = torch.zeros(bs, n_lm, n_lm, 2)
dist_gt[:, :, :, 0] = y[:, :, 0].view(bs, 1, -1) - y[:, :, 0].view(bs, -1, 1)
dist_gt[:, :, :, 1] = y[:, :, 1].view(bs, 1, -1) - y[:, :, 1].view(bs, -1, 1)
```

It is much faster than looping over the tensor manually (~5 s for the whole dataset compared to 12 min).
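For completeness, the two per-channel assignments can also be written as a single broadcasted subtraction over both channels at once. A runnable sketch with made-up sizes for `bs` and `n_lm`:

```
import torch

bs, n_lm = 4, 7
y = torch.randn(bs, n_lm, 2)  # batch of (x, y) coordinates

# per-channel broadcasting, as in the snippet above
dist_gt = torch.zeros(bs, n_lm, n_lm, 2)
dist_gt[:, :, :, 0] = y[:, :, 0].view(bs, 1, -1) - y[:, :, 0].view(bs, -1, 1)
dist_gt[:, :, :, 1] = y[:, :, 1].view(bs, 1, -1) - y[:, :, 1].view(bs, -1, 1)

# equivalent one-liner: broadcast bs x 1 x n_lm x 2 against bs x n_lm x 1 x 2
dist = y.unsqueeze(1) - y.unsqueeze(2)  # bs x n_lm x n_lm x 2
assert torch.allclose(dist, dist_gt)
```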


Good to hear it’s working fine!

I’ve just used `expand` to be on par with your input shapes, but you are right: it should also work without `expand`.
