I am not sure why you use a for loop here. And why do you compare each training sample with every test one?

Why not just replace the whole for loop by (x_train - x_test).norm()? Note that if you want to keep the value for each sample, you can specify the dim over which to compute the norm in the torch.norm function.
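As a minimal sketch of that suggestion (the shapes here are assumptions, not from the tutorial): calling .norm() with no dim reduces everything to one scalar, while passing dim=1 keeps one distance per sample.

```python
import torch

# Toy shapes (assumption): 5 train and 5 test vectors of dimension 3.
x_train = torch.randn(5, 3)
x_test = torch.randn(5, 3)

# A single scalar: the norm of the whole difference tensor.
total = (x_train - x_test).norm()

# One value per sample (i-th train vs i-th test): reduce only over dim 1.
per_sample = (x_train - x_test).norm(dim=1)
```

Note this pairs the i-th train sample with the i-th test sample; it does not give all train/test pairs.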

The tutorial is trying to demonstrate the speed difference between .norm() and the for loop, and in a way "teach" us how to vectorize it. The solution you provided is the next step, but I was wondering how to do it with the for loop.

Do you actually want to compute the distance between every pair of train and test? Or just the first train with first test, second train with second test, etc?

With every pair, as we have initialized it accordingly:

# Initialize dists to be a tensor of shape (num_train, num_test) with the
# same datatype and device as x_train
num_train = x_train.shape[0]
num_test = x_test.shape[0]
dists = x_train.new_zeros(num_train, num_test)
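The double-for-loop version being discussed might look like the following sketch. The toy shapes are assumptions standing in for the tutorial's data; only the dists initialization above is from the thread.

```python
import torch

# Toy stand-ins for the tutorial's tensors; the shapes here are assumptions.
num_train, num_test, C, H, W = 4, 3, 2, 5, 5
x_train = torch.randn(num_train, C, H, W)
x_test = torch.randn(num_test, C, H, W)

dists = x_train.new_zeros(num_train, num_test)
for i in range(num_train):
    for j in range(num_test):
        # Euclidean distance between the i-th train and j-th test image.
        dists[i, j] = ((x_train[i] - x_test[j]) ** 2).sum().sqrt()
```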

The one with the double for loop.
The one in the first post won't work because the sum() will sum the results over all targets for a single prediction. You could change that to sum(dim=foo) to keep that dimension, but that would be a half-vectorized solution, which might not be what you want.

This works OK:

dists[train] = ((x_train[train].reshape(-1,C,H,W) - x_test)**2).sum().sqrt()

but I have errors with

dists[train] = ((x_train[train].reshape(-1,C,H,W) - x_test)**2).sum(axis=1).sqrt()
RuntimeError: expand(torch.DoubleTensor{[10, 16, 16]}, size=[10]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (3)

For axis=1, we're collapsing along the rows rather than the columns, right?

You can check the doc for sum here, but the dim you specify is the one you sum over. In your case, you want to sum over the C, H and W dimensions. So either use three different sums or use a view to collapse these three into a single dimension, then sum over this new dimension.

C = x_train[0].shape[0]
H = x_train[0].shape[1]
W = x_train[0].shape[2]
flat = C * H * W
for train in range(num_train):
    dists[train] = ((x_train[train].view(-1, flat) - x_test.view(-1, flat))**2).sum(axis=1).sqrt()

Question:
So when I reshape it with view(-1, flat), I need to use axis=1 because I want to collapse along the rows.
If I were to use (flat, -1), I would need to use axis=0, as I want to collapse along the columns, right?

Also, I compared both methods, the double loop with the single loop, and they match now.
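For reference, the fully vectorized "next step" mentioned earlier can be sketched with broadcasting; the shapes here are assumptions. Each image is flattened, then a size-1 dimension is inserted on each side so the subtraction broadcasts over all train/test pairs at once.

```python
import torch

# Toy shapes (assumption).
num_train, num_test, C, H, W = 4, 3, 2, 5, 5
x_train = torch.randn(num_train, C, H, W)
x_test = torch.randn(num_test, C, H, W)
flat = C * H * W

# (num_train, 1, flat) - (1, num_test, flat) broadcasts to
# (num_train, num_test, flat); summing over the last dim covers all pairs.
diff = x_train.view(num_train, 1, flat) - x_test.view(1, num_test, flat)
dists = (diff ** 2).sum(dim=2).sqrt()
```

This trades memory (the full (num_train, num_test, flat) difference tensor is materialized) for the removal of both Python loops.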

You cannot do (flat, -1)! View is not a transpose; it is just a different way to look at the same data.
So if you do view with (flat, -1), you will see very unexpected data.

I would recommend you play with this in a Python shell: make a 2x3 random tensor and try to view it as 3x2. You'll see that the result is fairly unintuitive (though expected).

Apologies. Yeah, I understood regarding .view, which is actually the equivalent of .reshape.
So, in this case, is the correct way to use np.newaxis instead?

You never have to use numpy features in PyTorch.
If you want to add a new axis at position 0, for example, you can use t.unsqueeze(0), t.view(1, "your other dims"), or t[None, :] (not sure about the last one as I don't use it personally).