Manually compute Euclidean distance using 'one for loop'

Please see the code in the pastebin here:
https://pastebin.com/BFnLXveJ

The relevant code is at lines 31 and 32.

Since each x_train[train] is (3, 16, 16) and x_test is (10, 3, 16, 16),

I need to reshape the training sample to (1, 3, 16, 16) so that broadcasting is performed along the first dimension, which is the number of observations.

Did I get this correct? Thank you.

for train in range(num_train):
    # reshape each (C, H, W) training sample to (1, C, H, W) so the
    # subtraction broadcasts over all of x_test: (num_test, C, H, W)
    dists[train] = ((x_train[train].reshape(-1, C, H, W) - x_test) ** 2).sum().sqrt()

Hi,

I am not sure why you use a for loop here, or why you compare each training sample with every test one.

Why not just replace the whole for loop with (x_train - x_test).norm()? Note that if you want to keep the value for each sample, you can specify the dim over which to compute the norm in torch.norm.
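For example, something like this sketch (assuming both tensors share the same (N, C, H, W) shape, which is what a sample-by-sample comparison would need):

    # sketch: per-sample distances without any loop,
    # assuming matching (N, C, H, W) shapes
    diff = x_train - x_test                        # (N, C, H, W)
    dists = diff.flatten(start_dim=1).norm(dim=1)  # one distance per sample: (N,)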

Thanks for the prompt reply.

The tutorial is trying to demonstrate the speed difference between .norm() and the for loop, and in a way 'teach' us how to vectorize it. The solution you provided is the next step, but I was wondering how to do it with the for loop.

Thanks

Ok,
In that case, I think the problem is that you should be indexing x_test as well:

for train in range(num_train):
    dists[train] = ((x_train[train] - x_test[train])**2).sum().sqrt()

Yeah, I think that's the 'two for loops' version, which is the slowest of the bunch, without any vectorization.

for train in range(num_train):
    for test in range(num_test):
        dists[train][test] = ((x_train[train] - x_test[test]) ** 2).sum().sqrt()

Hi,

Do you actually want to compute the distance between every pair of train and test? Or just the first train with first test, second train with second test, etc?

With every pair, as we have initialized dists accordingly:

# Initialize dists to be a tensor of shape (num_train, num_test) with the
# same datatype and device as x_train
num_train = x_train.shape[0]
num_test = x_test.shape[0]
dists = x_train.new_zeros(num_train, num_test)

Then your code sample will work as you want :)

Sorry, do you mean the one in post #1 (single for loop) will work, and also the double for loop version?

Thanks.

The one with the double for loop.
The one in the first post won't work, because sum() will sum the results over all targets for a single prediction. You could change that to sum(dim=foo) to keep that dimension, but that would be a half-vectorized solution, which might not be what you want.
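For instance, a sketch of what that half-vectorized one-loop version could look like (assuming dists has shape (num_train, num_test) as initialized above):

    for train in range(num_train):
        # (C, H, W) - (num_test, C, H, W) broadcasts to (num_test, C, H, W);
        # summing over dims 1, 2 and 3 keeps one value per test sample
        dists[train] = ((x_train[train] - x_test) ** 2).sum(dim=(1, 2, 3)).sqrt()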

Hi,

Thanks for your reply.

This works OK:

dists[train] = ((x_train[train].reshape(-1, C, H, W) - x_test) ** 2).sum().sqrt()

but I get an error with

dists[train] = ((x_train[train].reshape(-1, C, H, W) - x_test) ** 2).sum(axis=1).sqrt()

RuntimeError: expand(torch.DoubleTensor{[10, 16, 16]}, size=[10]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (3)

For axis=1, we're collapsing along the rows rather than the columns, right?

Thank you once again.

Hi,

You can check the doc for torch.sum, but the dim you specify is the one you sum over. In your case, you want to sum over the C, H, and W dimensions. So either use 3 different sums, or use a view to collapse these three into a single dimension and then sum over this new dimension.
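For example, the "3 different sums" option could look something like this sketch:

    for train in range(num_train):
        sq = (x_train[train] - x_test) ** 2               # (num_test, C, H, W)
        # sum out W, then H, then C, one dimension at a time -> (num_test,)
        dists[train] = sq.sum(-1).sum(-1).sum(-1).sqrt()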

Thanks @albanD,

Managed to dial it in with this code:

C = x_train[0].shape[0]
H = x_train[0].shape[1]
W = x_train[0].shape[2]
flat = C * H * W
for train in range(num_train):
    dists[train] = ((x_train[train].view(-1, flat) - x_test.view(-1, flat)) ** 2).sum(axis=1).sqrt()

Question:
So when I reshape it with view(-1, flat), I need to use axis=1 because I want to collapse along the rows.
If I were to use (flat, -1), I would need to use axis=0, as I want to collapse along the columns, right?

Also, I compared both methods, the double loop with the single loop, and they match now.

Thanks once again.

Hi,

You cannot do (flat, -1)! view is not a transpose. It is just a different way to look at the same data.
So if you do view with (flat, -1), you will see very unexpected data.

I would recommend you play with this in a Python shell: make a 2x3 tensor and try to view it as 3x2. You'll see that the result is fairly unintuitive (though expected).
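For example (using arange instead of random values so the element order is easy to follow):

    import torch

    t = torch.arange(6).view(2, 3)
    print(t)             # tensor([[0, 1, 2],
                         #         [3, 4, 5]])

    print(t.view(3, 2))  # same storage re-chunked, NOT a transpose:
                         # tensor([[0, 1],
                         #         [2, 3],
                         #         [4, 5]])

    print(t.t())         # an actual transpose, for comparison:
                         # tensor([[0, 3],
                         #         [1, 4],
                         #         [2, 5]])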

Apologies. Yeah, I understand that .view is effectively the equivalent of .reshape.
So, in this case, is the correct way to use np.newaxis instead?

Hi,

You never have to use numpy features in PyTorch.
If you want to add a new axis at position 0, for example, you can use t.unsqueeze(0), t.view(1, "your other dims"), or t[None, :] (I am not sure about the last one as I don't use it personally).
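A quick sketch showing that all three produce the same shape:

    import torch

    t = torch.randn(3, 16, 16)

    a = t.unsqueeze(0)        # (1, 3, 16, 16)
    b = t.view(1, 3, 16, 16)  # same result via view
    c = t[None, :]            # indexing with None also adds a leading dim

    print(a.shape, b.shape, c.shape)  # all torch.Size([1, 3, 16, 16])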