Hey there, I am trying to implement the Euclidean loss (from the VGG paper). It is not described there in much detail, but my assumption is that it just measures the Euclidean distance between two coordinates and returns that as a sort of loss:

So, my deep CNN predicts two coordinates (output tensor of size `(batch_size, 1, 2)`). Let's say `out = [[[0.3, 0.6]]]` with `batch_size = 1`, and my `ground_truth = [[[0.5, 0.7]]]`.

The Euclidean loss should then be `loss = math.sqrt((0.3 - 0.5)**2) + math.sqrt((0.6 - 0.7)**2)`.
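Evaluated on the example values above, that works out to roughly `0.3`:

```python
import math

# Worked example with the values from above (out vs. ground_truth)
loss = math.sqrt((0.3 - 0.5)**2) + math.sqrt((0.6 - 0.7)**2)
print(loss)  # roughly 0.3 (0.2 + 0.1, up to floating-point error)
```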

1st question: Do I understand the concept of Euclidean loss correctly?

2nd question: Does the implementation below look right to you? (It's the first time I've written my own loss function.)

```
import math

import torch
import torch.nn as nn


class EuclidianError(nn.Module):
    """Implements euclidian distance as an error"""

    def forward(self, x_pred, x_ground_truth):
        err = torch.zeros(1)[0]
        for i in range(len(x_ground_truth)):  # i is the batch index
            for j in range(len(x_ground_truth[i])):  # j indexes the predicted coordinates within one batch item
                for k in range(len(x_ground_truth[i][j])):  # k indexes the x, y position within a coordinate
                    err += math.sqrt((x_ground_truth[i][j][k] - x_pred[i][j][k]) ** 2)
        return err
```
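In case it helps with checking: since `math.sqrt((a - b)**2)` is just `|a - b|`, the triple loop should collapse to a vectorized one-liner. A sketch, assuming both inputs are float tensors of shape `(batch_size, 1, 2)`:

```python
import torch

pred = torch.tensor([[[0.3, 0.6]]])          # shape (1, 1, 2), batch_size = 1
ground_truth = torch.tensor([[[0.5, 0.7]]])  # same shape

# Per element, sqrt((a - b)**2) is the absolute difference,
# so summing over all elements gives the same value as the loop above.
loss = (ground_truth - pred).abs().sum()
print(loss.item())  # roughly 0.3
```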

Thank you for your help!