How do I compute the variance or MSE between two losses?

I want to check whether my implementations for computing the variance and the MSE between two losses are correct:

import torch
import numpy as np
from sklearn.metrics import mean_squared_error

loss1 = [0.3, 0.2, 0.8]
loss2 = [0.25, 0.3, 0.72]

### MSE ###
loss = torch.nn.MSELoss()

sk_mse = mean_squared_error(loss1, loss2)
n_mse = np.square(np.subtract(loss1, loss2)).mean()
t_mse = loss(torch.tensor(loss1), torch.tensor(loss2))   # only holds under the assumption that one loss is the target

# output: 0.0063 for all three of the above
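As a sanity check, the three MSE variants can be verified by hand: the elementwise differences are 0.05, -0.1, and 0.08, and the mean of their squares is 0.0063. A minimal sketch without any library calls:

```python
loss1 = [0.3, 0.2, 0.8]
loss2 = [0.25, 0.3, 0.72]

# MSE = mean of squared elementwise differences
diffs = [a - b for a, b in zip(loss1, loss2)]
mse = sum(d * d for d in diffs) / len(diffs)
print(round(mse, 4))  # 0.0063
```

All three library versions compute exactly this quantity, which is why they agree; the only caveat is the one noted above for `torch.nn.MSELoss`, which formally treats its second argument as a target.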

### Var ###
t_var = torch.var(torch.tensor([loss1, loss2]), unbiased=False)  # variance over all six values pooled together
n_var = np.var([loss1, loss2], dtype=np.float64)                 # np.var defaults to ddof=0 (population variance)

# output: 0.056681 for both of the above