Problem with multitask backpropagation?

I have a multitask loss combining a CTC loss and a seq2seq loss. Calling loss.backward() raises the error below. Does anyone know how to debug this?

Traceback (most recent call last):
  File "", line 351, in <module>
  File "", line 196, in train
  File "/DataStorage2/cheesiang_leow/artibrains/single_line/crnn_seq2seq_ocr_pytorch/venv/lib/python3.7/site-packages/torch/", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/DataStorage2/cheesiang_leow/artibrains/single_line/crnn_seq2seq_ocr_pytorch/venv/lib/python3.7/site-packages/torch/autograd/", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: The size of tensor a (0) must match the size of tensor b (5990) at non-singleton dimension 2
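The message suggests that during the backward pass one branch produced an empty tensor (size 0) where the other had 5990 entries along dimension 2 (5990 is presumably the class/vocabulary size). A minimal sketch of one way to debug this, assuming the two branches are a CTCLoss and a CrossEntropyLoss over the same vocabulary (all shapes and names below are hypothetical, not from the original code): enable torch.autograd.set_detect_anomaly(True) so backward() points at the forward op that produced the bad gradient, and assert the shapes of both branches before summing the losses.

```python
import torch
import torch.nn as nn

# Hypothetical shapes for illustration: T = input length, N = batch size,
# C = number of classes (5990 in the traceback), S = target length.
T, N, C, S = 50, 4, 5990, 10

# Anomaly detection makes backward() report the forward operation that
# produced the failing gradient, instead of a bare size-mismatch error.
torch.autograd.set_detect_anomaly(True)

# CTC branch: log-probabilities of shape (T, N, C).
ctc_logits = torch.randn(T, N, C, requires_grad=True)
log_probs = ctc_logits.log_softmax(2)
targets = torch.randint(1, C, (N, S), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)
ctc_loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)

# seq2seq branch: decoder logits of shape (N, S, C) against the same targets.
dec_logits = torch.randn(N, S, C, requires_grad=True)
ce_loss = nn.CrossEntropyLoss()(dec_logits.reshape(-1, C), targets.reshape(-1))

# Sanity-check shapes before combining: an empty tensor in either branch is
# exactly what yields "size of tensor a (0) ... at non-singleton dimension 2".
assert log_probs.size(2) == dec_logits.size(2) == C

loss = ctc_loss + ce_loss
loss.backward()
```

If the assertion (or anomaly detection) fires, inspect how the empty tensor was built upstream, e.g. a decoder that emitted zero steps or a slice like x[:, :, 0:0].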