How to use PyTorch Lightning / Fabric for DDP when the batch contains extra tensors?

for epoch in range(config['total_epochs']):
    model.train()
    train_loss = 0.0
    train_total = 0.0
    train_correct = 0.0

    for inputs, label_info in dataloader:
        # The batch is (inputs, label_info), where label_info is a dict that
        # holds the target tensor and a per-sample mask.
        inputs = inputs.to(DEVICE)
        labels = label_info['label'].to(DEVICE)
        merge_mask = label_info['Merge_transformation'].to(DEVICE)

        optimizer.zero_grad()
        outputs_reg, outputs_class = model(inputs)

        # Both criteria use reduction='none', so the per-sample losses can be
        # masked: regression loss where merge_mask == 0, classification loss
        # where merge_mask == 1.
        reg_loss = criterion_reg(outputs_reg, labels) * (1 - merge_mask)
        class_loss = criterion_class(outputs_class, labels) * merge_mask

        # Normalise each loss by the number of samples that contribute to it.
        reg_loss = reg_loss.sum() / (1 - merge_mask).sum() if (1 - merge_mask).sum() > 0 else reg_loss.sum()
        class_loss = class_loss.sum() / merge_mask.sum() if merge_mask.sum() > 0 else class_loss.sum()

        combined_loss = reg_loss + class_loss
        combined_loss.backward()
        optimizer.step()
        train_loss += combined_loss.item()

I want to use Fabric / PyTorch Lightning to implement DDP, but how do I send both labels and merge_mask to the GPU?
The Fabric sample code:

  model.train()
  for epoch in range(20):
      for batch in dataloader:
          input, target = batch
-         input, target = input.to(device), target.to(device)
          optimizer.zero_grad()
          output = model(input, target)
          loss = torch.nn.functional.nll_loss(output, target.view(-1))
-         loss.backward()
+         fabric.backward(loss)
          optimizer.step()

is not helping me enough :frowning_face:
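
Concretely, here is roughly how I am trying to adapt my loop to Fabric (a minimal sketch of my attempt, not working code: the setup lines are my reading of the docs, devices/strategy values are just examples, I dropped my manual .to(DEVICE) calls on the assumption that fabric.setup_dataloaders() moves each batch to the GPU, and I left out the loss normalisation for brevity):

from lightning.fabric import Fabric

fabric = Fabric(accelerator="cuda", devices=2, strategy="ddp")  # example values
fabric.launch()

model, optimizer = fabric.setup(model, optimizer)
dataloader = fabric.setup_dataloaders(dataloader)  # assumed to move batches to the right device

for epoch in range(config['total_epochs']):
    model.train()
    for inputs, label_info in dataloader:
        labels = label_info['label']                      # no explicit .to(DEVICE) any more
        merge_mask = label_info['Merge_transformation']   # no explicit .to(DEVICE) any more

        optimizer.zero_grad()
        outputs_reg, outputs_class = model(inputs)

        reg_loss = (criterion_reg(outputs_reg, labels) * (1 - merge_mask)).sum()
        class_loss = (criterion_class(outputs_class, labels) * merge_mask).sum()
        combined_loss = reg_loss + class_loss

        fabric.backward(combined_loss)  # instead of combined_loss.backward()
        optimizer.step()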

That attempt throws the error below:

Exception has occurred: RuntimeError
Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
  File "/home/neelamlab/ninad/MAE/basic_fabric.py", line 636, in <module>
    reg_loss = criterion_reg(outputs_reg, labels)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
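
The only workaround I can think of is to move the tensors from the label_info dict to the device manually inside the loop, e.g. with fabric.to_device() (again just a sketch of what I am considering; I don't know if this is the intended way):

# Move each tensor from the batch dict explicitly instead of relying on
# fabric.setup_dataloaders(); fabric.to_device() should place an object on
# the process-local device.
for inputs, label_info in dataloader:
    inputs = fabric.to_device(inputs)
    labels = fabric.to_device(label_info['label'])
    merge_mask = fabric.to_device(label_info['Merge_transformation'])
    ...

Is that the right approach, or is fabric.setup_dataloaders() supposed to move every tensor inside the label_info dict automatically?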