How does one compute the normalized Euclidean distance/similarity in a numerically stable, vectorized way in PyTorch?

How does one compute the normalized Euclidean distance (or normalized Euclidean similarity) in a numerically stable, vectorized way in PyTorch?
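For reference, the definition I'm working from (the one in the Stats.SE answer linked at the bottom of this post) is:

NED^2(x1, x2) = 0.5 * Var(x1 - x2) / (Var(x1) + Var(x2))

with the similarity defined as NES = 1 - NED.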

I think this is correct:

import torch


def ned(x1, x2, dim=1, eps=1e-8):
    # NED^2 = 0.5 * Var(x1 - x2) / (Var(x1) + Var(x2)); eps guards against division by zero
    ned_2 = 0.5 * ((x1 - x2).var(dim=dim) / (x1.var(dim=dim) + x2.var(dim=dim) + eps))
    return ned_2 ** 0.5

def nes(x1, x2, dim=1, eps=1e-8):
    # normalized Euclidean similarity = 1 - distance
    return 1 - ned(x1, x2, dim, eps)

dim = 1  # apply the distance across the second dimension, i.e. the feature dimension

k = 4  # number of examples
d = 8  # dimension of feature space
x1 = torch.randn(k, d)
x2 = x1 * 3
print(f'x1 = {x1.size()}')
ned_tensor = ned(x1, x2, dim=dim)
print(ned_tensor)
print(ned_tensor.size())
print(nes(x1, x2, dim=dim))

output:

x1 = torch.Size([4, 8])
tensor([0.4472, 0.4472, 0.4472, 0.4472])
torch.Size([4])
tensor([0.5528, 0.5528, 0.5528, 0.5528])
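Sanity check on the numbers: since x2 = 3 * x1, we have Var(x1 - x2) = Var(-2 * x1) = 4 * Var(x1), and Var(x1) + Var(x2) = (1 + 9) * Var(x1) = 10 * Var(x1). So NED^2 = 0.5 * 4/10 = 0.2 and NED = sqrt(0.2) ≈ 0.4472, which matches the output above (and NES = 1 - 0.4472 = 0.5528).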

Can anyone confirm this?

Note that dim=1 assumes dim=0 is the batch dimension. I haven't tested this on anything except the 2D tensor/matrix example above.


Related, I do know how to compute cosine distance: CosineSimilarity — PyTorch 1.7.1 documentation
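For completeness, a minimal sketch using the built-in module (this is the standard torch.nn.CosineSimilarity API, nothing custom):

import torch
import torch.nn as nn

cos = nn.CosineSimilarity(dim=1, eps=1e-8)  # compare along the feature dimension
x1 = torch.randn(4, 8)
x2 = x1 * 3
print(cos(x1, x2))  # all ~1.0 since x2 is a positive scaling of x1; cosine distance is 1 - cos(x1, x2)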


Updated:

Fixed implementation:

def ned_torch(x1: torch.Tensor, x2: torch.Tensor, dim=1, eps=1e-8) -> torch.Tensor:
    """
    Normalized Euclidean distance in PyTorch.

    Cases:
        1. To compare two vectors directly, make sure they are of shape [B], e.g. when using nes as a loss
            function. In this case each number is not a representation on its own; the whole vector of B
            numbers is what is compared between x1 and x2.
        2. To compare two batches of 1D representations (e.g. scores), make sure the shape is [B, 1].
            In this case each number *is* the representation of an example, and the collection of reps
            [B, 1] is mapped to one distance per example, i.e. [B]. (Usually reps have D > 1; see case 3.)
        3. For everything else, specify the dimension. The common use case is [B, D] -> [B], comparing two
            batches of activations of size D. When D = 1 this reduces to case 2. If you meant x1, x2 of
            shape [D, 1] to be two vectors of size D to be compared, feed them with shape [D] instead.

    https://discuss.pytorch.org/t/how-does-one-compute-the-normalized-euclidean-distance-similarity-in-a-numerically-stable-way-in-a-vectorized-way-in-pytorch/110829
    https://stats.stackexchange.com/questions/136232/definition-of-normalized-euclidean-distance/498753?noredirect=1#comment937825_498753
    """
    # to compute ned for two individual vectors, e.g. to compute a loss (NOT batches/collections of vectors)
    if x1.dim() == 1:
        # [B] -> scalar
        ned_2 = 0.5 * ((x1 - x2).var() / (x1.var() + x2.var() + eps))
    # if the input is a batch of 1D reps, e.g. comparing two batches of activations with D=1 such as scores right before the softmax
    elif x1.size() == torch.Size([x1.size(0), 1]):
        # [B, 1] -> [B]. This special case is needed because .var over dim=1 is NaN (a single value has no
        # variance), so second moments are used instead. The squeeze keeps the output consistent with the
        # .var branches; otherwise tensors of unexpected shape come out without the user noticing.
        ned_2 = 0.5 * ((x1 - x2)**2 / (x1**2 + x2**2 + eps)).squeeze()
    # common case: the input is a batch of D-dimensional reps
    else:
        # e.g. [B, D] -> [B]
        ned_2 = 0.5 * ((x1 - x2).var(dim=dim) / (x1.var(dim=dim) + x2.var(dim=dim) + eps))
    return ned_2 ** 0.5

def nes_torch(x1: torch.Tensor, x2: torch.Tensor, dim=1, eps=1e-8) -> torch.Tensor:
    return 1 - ned_torch(x1, x2, dim, eps)
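A quick smoke test of the three cases (a minimal sketch; the shapes are what matter here, the values are random):

import torch

# case 1: two plain 1D vectors -> scalar
a, b = torch.randn(8), torch.randn(8)
print(ned_torch(a, b).size())         # torch.Size([])

# case 2: batch of 1D reps [B, 1] -> [B]
a, b = torch.randn(4, 1), torch.randn(4, 1)
print(ned_torch(a, b).size())         # torch.Size([4])

# case 3: batch of D-dimensional reps [B, D] -> [B]
a, b = torch.randn(4, 8), torch.randn(4, 8)
print(ned_torch(a, b, dim=1).size())  # torch.Size([4])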