Mixed precision in evaluation

Hi, I have a large evaluation dataset (the same size as the training dataset), and I run a validation phase during training so I can monitor how the training process behaves.

I’ve added automatic mixed precision to the training phase, but is it safe to also wrap the validation step inside the training loop with amp.autocast() to speed up its forward pass?
In general, is it safe/recommended to use mixed precision for model evaluation during the tuning process, and if so, what is the right way to implement it?

for epoch in range(epochs):
    # Training phase
    train_loss, train_score = self.train_model(trainset)

    # Validation phase
    valid_loss, valid_score = self.valid_model(validset)

def valid_model(self, dataloader):
    self.model.eval()
    # No gradients are needed during validation
    with torch.no_grad():
        for batch in tqdm(dataloader):
            # Evaluate with mixed precision if enabled
            if self.setting.mixed_precision:
                # Runs the forward pass with autocasting, including loss and score calculation
                with amp.autocast():
                    loss, score = self.validation_step(batch)
            else:
                loss, score = self.validation_step(batch)

Yes, you can also use autocast during the validation step, and you wouldn’t need to apply gradient scaling, since no gradients are calculated in this phase.
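
A minimal self-contained sketch of that pattern (the model, loss, and batch here are toy placeholders, not your actual setup; `device_type="cpu"` with `bfloat16` is used so it runs anywhere, whereas on a GPU you would pass `device_type="cuda"`):

```python
import torch
from torch import nn

# Hypothetical toy model standing in for your network
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
criterion = nn.CrossEntropyLoss()

@torch.no_grad()  # no gradients in validation, hence no GradScaler needed
def validation_step(batch):
    inputs, targets = batch
    model.eval()
    # Autocast speeds up the forward pass; on GPU use device_type="cuda"
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        outputs = model(inputs)
        loss = criterion(outputs, targets)
    return loss, outputs

batch = (torch.randn(4, 8), torch.randint(0, 2, (4,)))
loss, outputs = validation_step(batch)
print(outputs.dtype)  # activations inside autocast are lower precision
```

Note that `GradScaler` only exists to prevent underflow of small gradient values during the backward pass, so it has no role here; the autocast context alone is sufficient.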