An error occurred during backward()

I have moved the model and the data to the same device, but I still get the following error.

Traceback (most recent call last):
  File "train.py", line 418, in <module>
    main(args)
  File "train.py", line 298, in main
    train_loss, train_acc1, train_acc5 = train_one_epoch(model, criterion, optimizer, data_loader, device, epoch,
  File "train.py", line 69, in train_one_epoch
    scaler.scale(loss).backward()
  File "/home/lyd/.conda/envs/lyd/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/lyd/.conda/envs/lyd/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:1! (when checking argument for argument weight in method wrapper__native_batch_norm_backward)
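
The error message says the `weight` argument of the batch-norm backward kernel is on `cpu` while the rest of the graph is on `cuda:1`, so at least one `BatchNorm` parameter never made it onto the GPU (a common cause is creating or re-assigning a submodule after `model.to(device)`). A small audit sketch like the following, assuming a model similar to the one in `train.py` (the model and the helper name `find_misplaced_tensors` here are hypothetical), can locate the stray tensor before calling `backward()`:

```python
import torch
import torch.nn as nn

def find_misplaced_tensors(model: nn.Module, expected: torch.device):
    """Return (name, device) for every parameter or buffer not on `expected`."""
    misplaced = []
    for name, p in model.named_parameters():
        if p.device != expected:
            misplaced.append((name, str(p.device)))
    for name, b in model.named_buffers():  # running_mean/running_var live here
        if b.device != expected:
            misplaced.append((name, str(b.device)))
    return misplaced

# Stand-in model with a BatchNorm layer, mirroring the failing setup.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
device = torch.device("cpu")  # replace with torch.device("cuda:1") in train.py
model.to(device)

print(find_misplaced_tensors(model, device))  # empty list when all devices match
```

Running this right before the training loop (with `expected=torch.device("cuda:1")`) should print the name of the BatchNorm weight that stayed on the CPU.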

Could you post a minimal, executable code snippet showing this error, please?