NaN values popping up during loss.backward()

Did you make sure that none of your inputs contain invalid values (NaN/Inf), e.g. by checking them with torch.isfinite(input)?
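For example, a minimal sketch of such a check (the tensor name is just a placeholder for your actual batch): since torch.isfinite returns an elementwise boolean mask, reduce it with .all() to flag any NaN/Inf in the whole tensor.

```python
import torch

# Placeholder batch; substitute your real input tensor(s) here.
inputs = torch.randn(32, 10)

# torch.isfinite gives an elementwise boolean mask, so reduce with .all()
# to assert the entire batch is free of NaN/Inf values.
assert torch.isfinite(inputs).all(), "inputs contain NaN or Inf values"
```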
Are you already seeing invalid values (in the model output or the loss) before anomaly detection raises the error?
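If you haven't checked yet, something along these lines could narrow it down (model, criterion, and data are stand-ins for your setup): enable anomaly detection and inspect the forward pass before calling backward().

```python
import torch
import torch.nn as nn

# Stand-in model and data; replace with your own training setup.
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
inputs = torch.randn(32, 10)
targets = torch.randn(32, 1)

# Anomaly detection adds extra checks to each backward op and reports the
# forward op that produced the first non-finite gradient.
with torch.autograd.detect_anomaly():
    output = model(inputs)
    loss = criterion(output, targets)

    # If non-finite values already show up here, the problem is in the
    # forward pass (or the data) rather than in backward() itself.
    if not torch.isfinite(output).all():
        print("non-finite values in the model output")
    if not torch.isfinite(loss).all():
        print("non-finite values in the loss")

    loss.backward()
```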