RuntimeError: grad_output must be contiguous during backward

While modifying a model, I added a new module (m2) that produces its own loss value, which is then merged with the losses of the previous modules (e.g. L = L_m1 + L_m2 + …).

When I run it, it throws the exception below (RuntimeError: grad_output must be contiguous) during backpropagation. When I remove the newly added module (m2), it works fine. Where is the problem?
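For context, here is a minimal sketch of the setup (m1/m2 and the shapes are hypothetical stand-ins, not my actual modules). This error typically appears when a non-contiguous tensor (e.g. the result of `transpose`/`permute`) feeds an op whose backward kernel requires contiguous gradients; calling `.contiguous()` on the offending tensor is the usual workaround:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the original module (m1) and the new one (m2)
m1 = nn.Linear(8, 8)
m2 = nn.Linear(8, 8)

x = torch.randn(4, 8, requires_grad=True)

loss_m1 = m1(x).sum()

# A transpose makes the tensor non-contiguous; some backward kernels
# then raise "grad_output must be contiguous" when gradients flow through.
out = m2(x).t()

# Calling .contiguous() forces a compact copy and usually avoids the error.
loss_m2 = out.contiguous().sum()

loss = loss_m1 + loss_m2  # L = L_m1 + L_m2
loss.backward()
```

This is only a guess at the shape of the problem; the real trigger depends on which op inside m2 (or in how its loss is merged) receives the non-contiguous tensor.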

Thank you for your time and consideration!
```
Traceback (most recent call last):
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/", line 1101, in del
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/", line 1075, in _shutdown_workers
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/multiprocessing/", line 140, in join
    res = self._popen.wait(timeout)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/multiprocessing/", line 45, in wait
    if not wait([self.sentinel], timeout):
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/multiprocessing/", line 921, in wait
    ready =
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/", line 415, in select
    fd_event_list = self._selector.poll(timeout)
Traceback (most recent call last):
  File "/home/xjc/pycharm/helpers/pydev/", line 1741, in
  File "/home/xjc/pycharm/helpers/pydev/", line 1735, in main
    globals =['file'], None, None, is_module)
  File "/home/xjc/pycharm/helpers/pydev/", line 1135, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/xjc/pycharm/helpers/pydev/_pydev_imps/", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/xjc/PycharmProjects/mmv2/mmdetection/tools/", line 175, in
  File "/home/xjc/PycharmProjects/mmv2/mmdetection/tools/", line 171, in main
  File "/home/xjc/PycharmProjects/mmv2/mmdetection/mmdet/apis/", line 143, in train_detector, cfg.workflow, cfg.total_epochs)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/mmcv/runner/", line 122, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/mmcv/runner/", line 43, in train
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/mmcv/runner/", line 298, in call_hook
    getattr(hook, fn_name)(self)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/mmcv/runner/hooks/", line 24, in after_train_iter
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/torch/", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/torch/autograd/", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: grad_output must be contiguous
```

Could you post a minimal code snippet to reproduce this issue?

You can add code snippets by wrapping them in three backticks (```) :wink:

Have you solved the problem? I'm running into the same error.