RuntimeError: grad_output must be contiguous when calling backward()

Hi,
While modifying a model, I added a new module (m2) that produces its own loss, which is then merged with the losses of the existing modules (e.g. L = L_m1 + L_m2 + …).

When I run training, backpropagation throws the exception below (RuntimeError: grad_output must be contiguous). If I remove the newly added module (m2), everything works fine. Where is the problem?
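Roughly, the setup looks like the sketch below. The real modules are part of an mmdetection detector, so the classes here are only placeholders that show how the two losses are combined before backward() is called:

```python
import torch
import torch.nn as nn


class M1(nn.Module):
    # Placeholder for the existing module; its forward returns a scalar loss.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 16)

    def forward(self, x, target):
        return ((self.fc(x) - target) ** 2).mean()


class M2(nn.Module):
    # Placeholder for the newly added module; it also returns a scalar loss.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 16)

    def forward(self, x, target):
        return (self.fc(x) - target).abs().mean()


m1, m2 = M1(), M2()
x = torch.randn(4, 16)
target = torch.randn(4, 16)

loss = m1(x, target) + m2(x, target)  # L = L_m1 + L_m2
loss.backward()  # in the real model this is the call that raises the RuntimeError
```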

Thank you for your time and consideration!
```
Traceback (most recent call last):
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1101, in __del__
    self._shutdown_workers()
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1075, in _shutdown_workers
    w.join(timeout=_utils.MP_STATUS_CHECK_INTERVAL)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/multiprocessing/process.py", line 140, in join
    res = self._popen.wait(timeout)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/multiprocessing/popen_fork.py", line 45, in wait
    if not wait([self.sentinel], timeout):
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/multiprocessing/connection.py", line 921, in wait
    ready = selector.select(timeout)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/selectors.py", line 415, in select
    fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt:
Traceback (most recent call last):
  File "/home/xjc/pycharm/helpers/pydev/pydevd.py", line 1741, in <module>
    main()
  File "/home/xjc/pycharm/helpers/pydev/pydevd.py", line 1735, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/xjc/pycharm/helpers/pydev/pydevd.py", line 1135, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/xjc/pycharm/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "/home/xjc/PycharmProjects/mmv2/mmdetection/tools/train1.py", line 175, in <module>
    main()
  File "/home/xjc/PycharmProjects/mmv2/mmdetection/tools/train1.py", line 171, in main
    meta=meta)
  File "/home/xjc/PycharmProjects/mmv2/mmdetection/mmdet/apis/train.py", line 143, in train_detector
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 43, in train
    self.call_hook('after_train_iter')
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 298, in call_hook
    getattr(hook, fn_name)(self)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/mmcv/runner/hooks/optimizer.py", line 24, in after_train_iter
    runner.outputs['loss'].backward()
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/xjc/anaconda3/envs/py37/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: grad_output must be contiguous
```

Could you post a minimal code snippet to reproduce this issue?

You can add code snippets by wrapping them in three backticks ``` :wink:
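In the meantime, here is one thing you could check (just a guess, since I haven't seen your code): this error usually comes from an op whose backward expects a contiguous incoming gradient. If the new m2 branch works on a view of a feature map (transpose / permute / expand / slicing) that is also produced by such an op, the gradient reaching that op during backward can be non-contiguous. As an experiment, you could force the gradient of the shared tensor to be contiguous with a hook. A minimal, untested sketch, where `shared_feat` stands for whatever tensor feeds both the original branch and the new m2 branch:

```python
import torch

# Hypothetical stand-in for the tensor shared by the original branch and m2.
shared_feat = torch.randn(2, 3, 4, requires_grad=True) * 2.0

# register_hook runs the function on the gradient w.r.t. `shared_feat`;
# returning a tensor replaces that gradient for the rest of the backward
# pass, so the op that produced `shared_feat` sees a contiguous grad_output.
shared_feat.register_hook(lambda grad: grad.contiguous())

loss = shared_feat.transpose(1, 2).sum()  # downstream use through a view
loss.backward()
```

If that makes the error disappear, it narrows the problem down to that tensor, and you can then look for the operation in m2 that produces the non-contiguous gradient.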

Have you solved the problem? I'm running into the same error.