RuntimeError: The output of backward path has to be either tuple or tensor

Dear All:

My code runs perfectly on CPU, but when I switched to an AWS P2 instance with GPU support, I ran into the error below. It looks like something is wrong with my backward pass.

/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/autograd/anomaly_mode.py:70: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging.
  warnings.warn('Anomaly Detection has been enabled. '
Warning: Error detected in CudnnRnnBackward. Traceback of forward call that caused the error:
  File "issue_recommender.py", line 327, in <module>
    train(FLAGS)
  File "issue_recommender.py", line 216, in train
    existing_model=None) # Setting existing model to None will force training
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/dl_base_model.py", line 505, in train_load_model
    self._train_model(corpus_txt=corpus_txt, jira_db=jira_db)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/dl_base_model.py", line 518, in _train_model
    train_loss = self._train_epoch(epoch)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/vae_model.py", line 462, in _train_epoch
    recon_x, mu, logvar = self.model(indice)
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/vae_model.py", line 71, in forward
    recon_x = self.decode(z)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/vae_model.py", line 246, in decode
    outputs, _ = self.decoder_rnn(self.input_embedding, hidden) # (B,U,H)
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 727, in forward
    self.dropout, self.training, self.bidirectional, self.batch_first)
 (print_stack at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:60)
Traceback (most recent call last):
  File "issue_recommender.py", line 327, in <module>
    train(FLAGS)
  File "issue_recommender.py", line 216, in train
    existing_model=None) # Setting existing model to None will force training
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/dl_base_model.py", line 505, in train_load_model
    self._train_model(corpus_txt=corpus_txt, jira_db=jira_db)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/dl_base_model.py", line 518, in _train_model
    train_loss = self._train_epoch(epoch)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/vae_model.py", line 464, in _train_epoch
    loss.backward()
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: The output of backward path has to be either tuple or tensor

I've copied part of my code here:

Thank you so much!

Hi,

Are you running a nightly build? If so, can you set the TORCH_SHOW_CPP_STACKTRACES=1 env variable and rerun your script?

Do you have any custom autograd Function by any chance?
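(For reference, a custom autograd Function means anything subclassing torch.autograd.Function along the lines of this minimal, made-up sketch; its backward() must return one gradient, tensor or None, per input of forward():)

import torch

# Hypothetical example, not from the code in this thread.
class ScaleBy(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x * scale

    @staticmethod
    def backward(ctx, grad_output):
        # One entry per forward() input: a gradient for x,
        # and None for the non-tensor scale argument.
        return grad_output * ctx.scale, None

y = ScaleBy.apply(torch.randn(3, requires_grad=True), 2.0)
y.sum().backward()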

Hi, albanD

I’m not using a nightly build:

import torch
print(torch.__version__)
1.5.1

I set the env variable:

(python3) [ec2-user@ip-172-31-125-157 CN_JIRA_Analyzer]$ echo $TORCH_SHOW_CPP_STACKTRACES
1

But it looks like no C++ stack trace was printed; I get the same output as above.
Also, I never customized any autograd Functions. The closest thing might be that I use a backward hook to log gradients during training; I’m not sure if that matters.
Also, my CUDA version is 10.0, which looks a bit old, FYI.

Sorry, typo: my CUDA version is 10.1. I reinstalled the matching version and still saw the same problem.

The closest thing might be that I use a backward hook to log gradients during training; I’m not sure if that matters.

Can you share the code of the hook? If the hook returns something bad, it could cause the error above.
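(For context, here is a minimal sketch of what a well-behaved module backward hook looks like in this version of PyTorch; the layer below is made up:)

import torch
import torch.nn as nn

def log_grad_norms(module, grad_input, grad_output):
    # grad_input and grad_output are always tuples; entries may be None.
    for g in grad_output:
        if g is not None:
            print('grad norm:', g.norm().item())
    # Return None to leave the gradients unchanged; returning a tuple
    # replaces grad_input. Raising an exception here aborts loss.backward().
    return None

layer = nn.Linear(4, 2)  # hypothetical module
layer.register_backward_hook(log_grad_norms)
layer(torch.randn(3, 4)).sum().backward()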

Hi, albanD

I just embarrassingly realized that this error comes from an assertion I added in my own backward hook.

def get_activation_name_backward(self, name):
    def _hook_fn_backward(module, grad_input, grad_output):
        #print("In the backward path, global step is {}".format(self.global_step))
        #print("Backward hook of {}".format(name))
        if isinstance(grad_output, tuple):
            for grad_output_item in grad_output:
                _hook_fn_backward(module, grad_input, grad_output_item)
        elif torch.is_tensor(grad_output):
            # print("type of grad output is {}".format(type(grad_output)))
            grad_module_name = name + '_grad'
            #print(grad_module_name)
            if self.writer is not None:
                for grad in grad_output:
                    self.writer.add_histogram(tag=grad_module_name, values=grad, global_step=self.global_step)

            exception_flag = False
            if (~torch.isfinite(grad_output[0])).any():
                # print(torch.isfinite(output))
                exception_flag = True
                print("Detected Inf in Layer {}, backward path".format(name))
            if (grad_output[0] != grad_output[0]).any():
                exception_flag = True
                print("Detected NaN in Layer {}, backward path".format(name))

            if exception_flag:
                # print("NaN Detected! for {}".format(module.name))
                # print('Size of input is {}'.format(grad_input[0].shape))
                # print('Size of output is {}'.format(grad_output[0].shape))
                print('The parameters are as below:')
                for param in module.parameters():
                    print(param.data)
                # move to CPU before converting to numpy (required on GPU)
                np.savetxt('./recommender/log/grad_error.csv', grad_output[0].detach().cpu().numpy(), fmt="%.8f",
                           delimiter='\n')
                raise Exception("Inf or Nan detected in output, Can't proceed")
        else:
            # this is the assertion that raised the error above
            raise TypeError("The output of backward path has to be either tuple or tensor")
    return _hook_fn_backward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/autograd/anomaly_mode.py:70: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging.
  warnings.warn('Anomaly Detection has been enabled. '
None
Warning: Error detected in CudnnRnnBackward. Traceback of forward call that caused the error:
  File "issue_recommender.py", line 327, in <module>
    train(FLAGS)
  File "issue_recommender.py", line 216, in train
    existing_model=None) # Setting existing model to None will force training
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/dl_base_model.py", line 507, in train_load_model
    self._train_model(corpus_txt=corpus_txt, jira_db=jira_db)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/dl_base_model.py", line 520, in _train_model
    train_loss = self._train_epoch(epoch)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/vae_model.py", line 462, in _train_epoch
    recon_x, mu, logvar = self.model(indice)
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/vae_model.py", line 71, in forward
    recon_x = self.decode(z)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/vae_model.py", line 246, in decode
    outputs, _ = self.decoder_rnn(self.input_embedding, hidden) # (B,U,H)
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 727, in forward
    self.dropout, self.training, self.bidirectional, self.batch_first)
 (print_stack at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:60)
Traceback (most recent call last):
  File "issue_recommender.py", line 327, in <module>
    train(FLAGS)
  File "issue_recommender.py", line 216, in train
    existing_model=None) # Setting existing model to None will force training
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/dl_base_model.py", line 507, in train_load_model
    self._train_model(corpus_txt=corpus_txt, jira_db=jira_db)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/dl_base_model.py", line 520, in _train_model
    train_loss = self._train_epoch(epoch)
  File "/home/ec2-user/Projects/CN_JIRA_Analyzer/CN_JIRA_Analyzer/recommender/vae_model.py", line 464, in _train_epoch
    loss.backward()
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: The output of backward path has to be either tuple or tensor

However, when I printed out grad_output, it turned out to be None. Do you know what could cause grad_output to be None?
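(A likely explanation, based on the traceback above, and hedged since I cannot run the full model: the hook is registered on the decoder RNN, and the forward pass discards its second output, outputs, _ = self.decoder_rnn(...) in vae_model.py. An output that never feeds into the loss has no gradient, so its entry in grad_output is None. On the GPU, cuDNN runs the whole RNN as a single CudnnRnnBackward node, which is the node the legacy register_backward_hook attaches to, so the hook sees the RNN's full grad_output tuple including that None entry; on CPU the RNN is likely decomposed into smaller ops, which would explain why the problem only shows up on the GPU. Below is a minimal, made-up sketch of the situation, with the usual guard of skipping None entries instead of raising:)

import torch
import torch.nn as nn

# Hypothetical sizes; the real model's decoder is a multi-layer RNN.
rnn = nn.GRU(input_size=4, hidden_size=4, batch_first=True)

def hook(module, grad_input, grad_output):
    for g in grad_output:
        if g is None:
            # The discarded hidden state never contributes to the loss,
            # so autograd has no gradient for it: skip instead of raising.
            print('None entry in grad_output, skipping')
            continue
        print('grad_output entry with shape', tuple(g.shape))

rnn.register_backward_hook(hook)

outputs, _ = rnn(torch.randn(2, 3, 4))  # hidden state thrown away
outputs.sum().backward()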