Problem with multitask backpropagation?

I have a multitask loss that combines a CTC loss and a seq2seq loss. When I call cost.backward(), it raises the error below. Does anyone know how to debug this?

Traceback (most recent call last):
  File "mtl_train.py", line 351, in <module>
    train(opt)
  File "mtl_train.py", line 196, in train
    cost.backward()
  File "/DataStorage2/cheesiang_leow/artibrains/single_line/crnn_seq2seq_ocr_pytorch/venv/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/DataStorage2/cheesiang_leow/artibrains/single_line/crnn_seq2seq_ocr_pytorch/venv/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: The size of tensor a (0) must match the size of tensor b (5990) at non-singleton dimension 2
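For reference, here is a minimal sketch of the kind of setup I mean (shapes and module names here are made up, not my exact code; C = 5990 mirrors the size reported in the error, which is probably my class/vocabulary dimension). Enabling torch.autograd.set_detect_anomaly(True) is the standard way to debug this kind of backward failure: PyTorch then prints the traceback of the forward op that created the failing backward node.

import torch
import torch.nn as nn

# Anomaly mode records forward-pass stack traces, so when backward()
# fails it also reports which forward op created the failing node.
torch.autograd.set_detect_anomaly(True)

# Hypothetical sizes: T time steps, N batch, C classes, L target length.
T, N, C, L = 50, 4, 5990, 10

ctc_criterion = nn.CTCLoss(blank=0, zero_infinity=True)
ce_criterion = nn.CrossEntropyLoss()

# CTC branch expects (T, N, C) log-probabilities.
ctc_logits = torch.randn(T, N, C, requires_grad=True)
log_probs = ctc_logits.log_softmax(2)
targets = torch.randint(1, C, (N, L), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), L, dtype=torch.long)

# seq2seq branch: flatten decoder logits to (N*L, C) for cross-entropy.
dec_logits = torch.randn(N * L, C, requires_grad=True)
dec_targets = targets.reshape(-1)

cost = ctc_criterion(log_probs, targets, input_lengths, target_lengths) \
     + ce_criterion(dec_logits, dec_targets)
cost.backward()  # with anomaly mode on, a failure here names the forward op

With anomaly detection enabled, the re-raised RuntimeError should say which of the two branches produced the zero-sized tensor at dimension 2.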