# Manually computing the Euclidean norm of the difference between two MNIST images with PyTorch

I’ve been playing around with the MNIST dataset and PyTorch and have loaded my dataset like so:

```python
# Loading the MNIST training and test data:
train_data = datasets.MNIST(
    root='data',
    train=True,
    transform=ToTensor(),
)
test_data = datasets.MNIST(
    root='data',
    train=False,
    transform=ToTensor(),
)
```

What’s interesting to me is that I think of these images as 28 × 28 matrices, with each entry representing the shade of a pixel. So I wanted to compute the Euclidean distance between two images after flattening them. In particular, I defined the following function:

```python
def wtrain(i, j):
    s = train_data.data[i]
    t = train_data.data[j]
    s = torch.flatten(s)
    t = torch.flatten(t)
    d = (s - t) / 1000
    d = (torch.norm(d)) ** 2
    return d
```

What’s weird is that `wtrain(0, 1)` does not equal `wtrain(1, 0)`?

I was wondering if anyone could see why or offer an alternative way to compute this distance.

Both output tensors keep their raw `uint8` `dtype`, because you are indexing the internal `.data` attribute via `train_data.data[index]`, so the `ToTensor` transformation is skipped.

The subtraction in `(s - t)` will therefore under-/overflow (wrapping around modulo 256) and will produce different outputs depending on the order.
You could either cast `s` and `t` to `float32` before applying the subtraction:

```python
s = torch.flatten(s).float()
t = torch.flatten(t).float()
```
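To see the wraparound concretely, here is a minimal sketch with plain `uint8` tensors (standalone values, not MNIST data):

```python
import torch

# uint8 arithmetic wraps around modulo 256, so subtraction is order-dependent:
a = torch.tensor([10], dtype=torch.uint8)
b = torch.tensor([20], dtype=torch.uint8)
print(a - b)  # tensor([246], dtype=torch.uint8): 10 - 20 wraps to 246
print(b - a)  # tensor([10], dtype=torch.uint8)

# After casting to float32, the difference is symmetric up to sign:
print(torch.norm(a.float() - b.float()))  # tensor(10.)
print(torch.norm(b.float() - a.float()))  # tensor(10.)
```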

or you could index the `Dataset`, which will normalize and cast the image tensors to `float32` by applying `ToTensor` to them:

```python
s = train_data[i][0]
t = train_data[j][0]
```