How can I calculate validation loss for Faster R-CNN?

Hi,

I followed this tutorial for object detection:
https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

and use the train_one_epoch and evaluate helpers from its accompanying GitHub repository.

However, I also want to compute losses on the validation set. Since the torchvision detection models only return the loss dict when model.train() is set, I implemented the following:

import torch

import utils  # helper module from the torchvision detection reference scripts


@torch.no_grad()
def evaluate_loss(model, data_loader, device):
    # The detection models only return the loss dict in train mode,
    # so keep train() but disable gradient tracking via @torch.no_grad().
    model.train()
    val_loss = 0.0
    for images, targets in data_loader:
        images = [image.to(device) for image in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

        loss_dict = model(images, targets)

        # reduce losses over all GPUs for logging purposes
        loss_dict_reduced = utils.reduce_dict(loss_dict)
        losses_reduced = sum(loss for loss in loss_dict_reduced.values())
        val_loss += losses_reduced.item()  # .item() avoids retaining tensors

    return val_loss / len(data_loader)
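As a sanity check on the @torch.no_grad() + model.train() combination, here is a minimal sketch (using a plain nn.Linear stand-in, not the detection model) showing that no_grad prevents any graph from being built in train mode, so the weights cannot be touched by a later backward/step:

```python
import torch
import torch.nn as nn

# Stand-in module in train() mode, as in evaluate_loss above.
model = nn.Linear(4, 2)
model.train()

weight_before = model.weight.clone()

with torch.no_grad():
    out = model(torch.randn(8, 4))
    loss = out.sum()

# No autograd graph was built: the loss is detached, no gradients
# exist, and the parameters are untouched.
assert not loss.requires_grad
assert model.weight.grad is None
assert torch.equal(model.weight, weight_before)
```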

I then place it after the learning rate scheduler step:

for epoch in range(args.num_epochs):
    # train for one epoch, printing every 10 iterations
    train_one_epoch(model, optimizer, train_data_loader, device, epoch, print_freq=10)

    # update the learning rate
    lr_scheduler.step()

    # compute the validation loss
    validation_loss = evaluate_loss(model, valid_data_loader, device=device)
    print("validation loss:", validation_loss)

    # evaluate on the validation dataset (COCO metrics)
    evaluate(model, valid_data_loader, device=device)
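One caveat I'm aware of: plain BatchNorm layers update their running statistics in train() mode even under torch.no_grad(), so a validation pass like this could leak into the model state. My understanding is that torchvision's detection backbones use FrozenBatchNorm2d, which avoids this, but here is a small sketch of the effect with an ordinary nn.BatchNorm2d:

```python
import torch
import torch.nn as nn

# In train() mode, BatchNorm updates running stats even without gradients.
bn = nn.BatchNorm2d(3)
bn.train()

mean_before = bn.running_mean.clone()
with torch.no_grad():
    bn(torch.randn(4, 3, 8, 8) + 5.0)  # shifted input moves the stats

# The running mean changed despite torch.no_grad().
assert not torch.equal(bn.running_mean, mean_before)

# In eval() mode the statistics stay fixed.
bn.eval()
mean_frozen = bn.running_mean.clone()
with torch.no_grad():
    bn(torch.randn(4, 3, 8, 8) + 5.0)
assert torch.equal(bn.running_mean, mean_frozen)
```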

Does this look correct, or could it interfere with training or produce inaccurate losses?