How to evaluate MarginRankingLoss and CosineEmbeddingLoss during testing

I am working with a Siamese network on vectorised data and want to apply a contrastive loss through PyTorch's MarginRankingLoss or CosineEmbeddingLoss functions.
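
For context, this is roughly how I construct the loss; the margin values are placeholders I picked, and the labels follow the +1 (similar pair) / -1 (dissimilar pair) convention from the PyTorch docs:

import torch.nn as nn

# labels are expected to be +1 for a similar pair and -1 for a dissimilar pair
loss_func = nn.CosineEmbeddingLoss(margin=0.5)    # margin value is a placeholder
# or, when ranking two 1-D score tensors rather than embeddings:
# loss_func = nn.MarginRankingLoss(margin=1.0)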

This is my training loop; however, I do not know how to properly evaluate the model's outputs during the test phase.

Training

model.train()
for batched_graph_1, batched_graph_2, labels in train_dataloader:
    pred1, pred2 = model(batched_graph_1, batched_graph_2)
    loss = loss_func(pred1, pred2, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # step the scheduler after the optimizer update
    scheduler.step(loss)

This is what I have so far:

Testing

model.eval()
y_pred, y_true = [], []
with torch.no_grad():
    for batched_graph_1, batched_graph_2, labels in test_dataloader:
        pred1, pred2 = model(batched_graph_1, batched_graph_2)
        # per-pair Euclidean distance between the two embeddings
        # (torch.cdist would return a full batch-by-batch distance matrix)
        pred = torch.nn.functional.pairwise_distance(pred1, pred2, p=2)
        y_pred += pred.tolist()
        y_true += labels.tolist()
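
With the distances and labels collected, I was thinking of summarising them with something like ROC-AUC (the use of sklearn and the 0/1 relabelling are just my assumption; with CosineEmbeddingLoss I would presumably use cosine similarity instead of the Euclidean distance):

from sklearn.metrics import roc_auc_score

# a smaller distance should indicate a similar pair (label +1), so negate the
# distances to get "higher score = more similar"; map -1/+1 labels to 0/1
y_true_bin = [1 if y == 1 else 0 for y in y_true]
auc = roc_auc_score(y_true_bin, [-d for d in y_pred])
print(f"ROC-AUC on the test set: {auc:.4f}")

Is this a reasonable way to evaluate with these loss functions, or is there a more standard approach?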

Thank you for your help.