When using F1Score, I got "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!"

I got this error during the validation step of my net.

Traceback (most recent call last):
  File "test_singal.py", line 575, in <module>
  File "/usr/local/miniconda3/envs/deep_packet/lib/python3.8/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/miniconda3/envs/deep_packet/lib/python3.8/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/usr/local/miniconda3/envs/deep_packet/lib/python3.8/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/miniconda3/envs/deep_packet/lib/python3.8/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "test_singal.py", line 570, in main
    train_mct(config=config, train_data_path=train_path, val_data_path=val_path)
  File "test_singal.py", line 531, in train_mct
    F1_app = app_f1score(y_hat_app, y_app)
  File "/usr/local/miniconda3/envs/deep_packet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/miniconda3/envs/deep_packet/lib/python3.8/site-packages/torchmetrics/metric.py", line 206, in forward
    self.update(*args, **kwargs)
  File "/usr/local/miniconda3/envs/deep_packet/lib/python3.8/site-packages/torchmetrics/metric.py", line 267, in wrapped_func
    return update(*args, **kwargs)
  File "/usr/local/miniconda3/envs/deep_packet/lib/python3.8/site-packages/torchmetrics/classification/stat_scores.py", line 217, in update
    self.tp += tp
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Here is my code.

import torch
from torchmetrics import F1Score

app_f1score = F1Score(num_classes=17)
tra_f1score = F1Score(num_classes=12)
all_f1score = F1Score(num_classes=6)
val_loss = 0.0
val_steps = 0
for i, batch in enumerate(valloader, 0):
    with torch.no_grad():
        x_app = batch['feature'].float()
        y_app = batch['app_label'].long()
        y_tra, y_all, index = drop_na(batch)
        x_app, y_app, y_tra, y_all = x_app.to(device), y_app.to(device), y_tra.to(device), y_all.to(device)
        y_hat_app, y_hat_tra, y_hat_all = net(x_app, index)
        y_hat_app, y_hat_tra, y_hat_all = y_hat_app.to(device), y_hat_tra.to(device), y_hat_all.to(device)
        F1_app = app_f1score(y_hat_app, y_app)
        F1_tra = tra_f1score(y_hat_tra, y_tra)
        F1_all = all_f1score(y_hat_all, y_all)
        F1_score = (F1_app + F1_tra + F1_all) / 3.0

Could you give me some advice on what happened?
Thank you very much!

The device mismatch is raised in:

self.tp += tp

which is internal torchmetrics state, not one of the tensors you pass in. Assuming you have made sure that all input tensors are moved to the right device, add print statements inside the torchmetrics functions to check whether the device of the metric's state tensors differs from that of your inputs.
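To make the mechanism concrete, here is a minimal self-contained sketch (my own toy module, not the actual torchmetrics implementation): torchmetrics metrics are `nn.Module`s whose counts (`tp`, `fp`, ...) live as module state, so they stay on the CPU unless you move the metric object itself with `.to(device)`, even when all inputs are on the GPU.

```python
import torch
import torch.nn as nn

# Toy stand-in for a torchmetrics metric: the running count `tp` is a
# registered buffer, so it only follows the module when .to(device) is
# called on the metric object itself.
class TinyStatMetric(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("tp", torch.tensor(0))  # analogous to torchmetrics' self.tp

    def update(self, preds, target):
        # This line raises the same device-mismatch error if `preds`/`target`
        # are on the GPU while self.tp was left on the CPU.
        self.tp += (preds == target).sum()

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

metric = TinyStatMetric().to(device)  # moves the internal state to `device` too
metric.update(torch.tensor([1, 2, 3], device=device),
              torch.tensor([1, 2, 0], device=device))
print(int(metric.tp))  # 2 matches out of 3
```

If this is indeed the cause, the analogous fix in your code would be to construct the metrics as e.g. `app_f1score = F1Score(num_classes=17).to(device)` before the validation loop.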