Error in backward

I ran into a strange problem. When I use `torch.cat` to concatenate two tensors, I get the following error, but adding the two tensors directly does not raise it. Why?

```python
for i in range(num_levels):
    inds = target_lvls == i
    if inds.any():
        rois_ = rois[inds, :]
        roi_feats_t = self.roi_layers[i](feats[i], rois_)
        if self.use_expand_roi:
            # expand rois
            rois_expand_ = rois_expand[inds, :]
            roi_feats_expand_t = self.roi_layers[i](feats[i], rois_expand_)
            roi_feats[inds] = torch.cat([roi_feats_t, roi_feats_expand_t], 1)
            # roi_feats[inds] = roi_feats_t + roi_feats_expand_t
        else:
            roi_feats[inds] = roi_feats_t
        # keep all parameters and feature levels in the autograd graph
        roi_feats += sum(
            x.view(-1)[0]
            for x in self.parameters()) * 0. + feats[i].sum() * 0.
```
```
Traceback (most recent call last):
  File "./tools/", line 178, in <module>
  File "./tools/", line 174, in main
  File "/home/zhaoxin/workspace/mmdetection/mmdet/apis/", line 150, in train_detector, cfg.workflow, cfg.total_epochs)
  File "/home/zhaoxin/workspace/mmcv/mmcv/runner/", line 125, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/zhaoxin/workspace/mmcv/mmcv/runner/", line 51, in train
  File "/home/zhaoxin/workspace/mmcv/mmcv/runner/", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/home/zhaoxin/workspace/mmcv/mmcv/runner/hooks/", line 27, in after_train_iter
  File "/home/zhaoxin/anaconda3/envs/mmdetection/lib/python3.7/site-packages/torch/", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/zhaoxin/anaconda3/envs/mmdetection/lib/python3.7/site-packages/torch/autograd/", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: grad_output must be contiguous
```
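A plausible explanation for why `torch.cat` triggers this while addition does not: the backward of `torch.cat` hands each input a *narrowed slice* of the incoming gradient, and a slice along a non-leading dim is generally not contiguous, while the backward of `+` produces a fresh (contiguous) tensor. A custom CUDA op such as RoIAlign may then reject the non-contiguous `grad_output`. A minimal sketch of the non-contiguity:

```python
import torch

# Simulate the gradient arriving at the output of torch.cat([a, b], dim=1)
# where a and b each have 3 channels.
grad_output = torch.ones(2, 6)

# cat's backward narrows this gradient along dim 1 for each input;
# the resulting slice is a view with stride 6 along dim 0, so it is
# NOT contiguous in memory.
chunk = grad_output[:, :3]
print(chunk.is_contiguous())
```

So the gradient reaching the `roi_layers[i]` backward after a `cat` along the channel dim can be non-contiguous, which matches the error message.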

Try adding .contiguous() right after the place where you slice the tensor.

```python
rois_ = rois[inds, :].contiguous()

rois_expand_ = rois_expand[inds, :].contiguous()
```
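Since the error complains about `grad_output` (the gradient in the backward pass), making the forward inputs contiguous may not be enough. Another workaround, sketched below as an assumption rather than a confirmed fix for this exact model, is to register a hook that makes the gradient itself contiguous before it reaches the op:

```python
import torch

def make_grad_contiguous(t):
    # Register a hook so the gradient flowing back through `t` is made
    # contiguous before it is handed to earlier ops in the graph.
    t.register_hook(lambda g: g.contiguous())
    return t

# Toy demo: cat's backward passes y a non-contiguous gradient slice,
# which the hook converts to a contiguous tensor.
x = torch.randn(2, 3, requires_grad=True)
y = make_grad_contiguous(x * 2)
torch.cat([y, torch.zeros(2, 3)], dim=1).sum().backward()
print(x.grad.shape)
```

In the snippet from the question, the hook would go on `roi_feats_t` and `roi_feats_expand_t` before the `torch.cat` call.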

It seems that if your PyTorch version is high enough, non-contiguous tensors can also be back-propagated through some operations.

@Naruto-Sasuke Thanks, but the error is still the same.