Error in backward

I encountered a strange problem. When I use torch.cat to concatenate two tensors, I get the following error, but adding the two tensors directly does not raise any error. Why is that?

for i in range(num_levels):
    inds = target_lvls == i
    if inds.any():
        rois_ = rois[inds, :]
        roi_feats_t = self.roi_layers[i](feats[i], rois_)
        if self.use_expand_roi:
            # expand rois
            rois_expand_ = rois_expand[inds, :]
            roi_feats_expand_t = self.roi_layers[i](feats[i], rois_expand_)
            roi_feats[inds] = torch.cat([roi_feats_t, roi_feats_expand_t], 1)
            # roi_feats[inds] = roi_feats_t + roi_feats_expand_t
        else:
            roi_feats[inds] = roi_feats_t
    else:
        roi_feats += sum(
            x.view(-1)[0]
            for x in self.parameters()) * 0. + feats[i].sum() * 0.

Traceback (most recent call last):
  File "./tools/train.py", line 178, in <module>
    main()
  File "./tools/train.py", line 174, in main
    meta=meta)
  File "/home/zhaoxin/workspace/mmdetection/mmdet/apis/train.py", line 150, in train_detector
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/zhaoxin/workspace/mmcv/mmcv/runner/epoch_based_runner.py", line 125, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/zhaoxin/workspace/mmcv/mmcv/runner/epoch_based_runner.py", line 51, in train
    self.call_hook('after_train_iter')
  File "/home/zhaoxin/workspace/mmcv/mmcv/runner/base_runner.py", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/home/zhaoxin/workspace/mmcv/mmcv/runner/hooks/optimizer.py", line 27, in after_train_iter
    runner.outputs['loss'].backward()
  File "/home/zhaoxin/anaconda3/envs/mmdetection/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/zhaoxin/anaconda3/envs/mmdetection/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: grad_output must be contiguous

Try adding .contiguous() right after the place where you slice the tensor.

rois_ = rois[inds, :].contiguous()

...

rois_expand_ = rois_expand[inds, :].contiguous()
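
Applied to the loop from the question, the change would look roughly like this (only the sliced-ROI lines differ; just a sketch, not tested):

rois_ = rois[inds, :].contiguous()
roi_feats_t = self.roi_layers[i](feats[i], rois_)
...
rois_expand_ = rois_expand[inds, :].contiguous()
roi_feats_expand_t = self.roi_layers[i](feats[i], rois_expand_)
roi_feats[inds] = torch.cat([roi_feats_t, roi_feats_expand_t], 1)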

It seems that if your PyTorch version is high enough, non-contiguous tensors can also be back-propagated through for some operations.
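
If it helps, here is a small sketch of why torch.cat can behave differently from addition during backward: the backward of cat hands each input a narrowed slice of the incoming gradient, which is typically non-contiguous, while addition passes the gradient through unchanged. A custom CUDA backward that requires a contiguous grad_output (such as an RoI align backward) would then only trip on the cat path. The CheckGradContig function below is made up purely for illustration; it just reports whether the gradient it receives is contiguous:

import torch


class CheckGradContig(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # A custom op that requires a contiguous grad_output would raise here.
        print('grad_output contiguous:', grad_output.is_contiguous())
        return grad_output


a = CheckGradContig.apply(torch.randn(4, 8, requires_grad=True))
b = CheckGradContig.apply(torch.randn(4, 8, requires_grad=True))

# cat along dim 1: its backward gives each input a narrowed, non-contiguous
# slice of the incoming gradient.
out = torch.cat([a, b], 1)
out.backward(torch.ones_like(out))    # prints False, False

a2 = CheckGradContig.apply(torch.randn(4, 8, requires_grad=True))
b2 = CheckGradContig.apply(torch.randn(4, 8, requires_grad=True))

# plain addition passes the (contiguous) gradient straight through.
out2 = a2 + b2
out2.backward(torch.ones_like(out2))  # prints True, True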

@Naruto-Sasuke Thanks, but the error is still the same.