Backward-incompatible changes in PyTorch 1.5

As the release notes of PyTorch 1.5 say, Tensor.clone, Tensor.empty_like, and similar functions now preserve stride information instead of returning contiguous tensors. When migrating to PyTorch 1.5, I hit "RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(…) instead." But no further information is provided to tell which part of the code raises this error. How should I debug this kind of error?
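For context, this error is not specific to the migration itself; it is what .view() raises whenever the input's strides are incompatible with the requested shape. A minimal standalone reproduction (my own sketch, not code from this thread) using a transposed tensor:

```python
import torch

# A transpose produces a non-contiguous tensor: same storage, permuted strides.
x = torch.randn(3, 4)
t = x.t()
print(t.is_contiguous())  # False

# .view() requires compatible strides and raises the RuntimeError quoted above...
try:
    t.view(12)
except RuntimeError as e:
    print(e)

# ...while .reshape() silently copies when necessary, so it succeeds.
flat = t.reshape(12)
print(flat.shape)  # torch.Size([12])
```

The 1.5 change makes this situation more common because functions like empty_like can now return non-contiguous results when their input is non-contiguous.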


Yes I think it comes from there.
Does this error happen during the forward or the backward pass?

  • If forward, can you give the stack trace that shows which function is faulty?
  • If backward, you can enable anomaly mode (torch.autograd.set_detect_anomaly(True)) to get a warning that tells you which forward function is responsible for the error in the backward pass. Then you can post it here.
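To illustrate the second bullet, here is a small sketch (my own example, not from this thread) using the context-manager form, torch.autograd.detect_anomaly(). The NaN-producing op here is sqrt of a negative number, chosen just to trigger the check; with anomaly mode on, the backward error includes a traceback pointing at the forward call that caused it:

```python
import torch

with torch.autograd.detect_anomaly():
    x = torch.tensor([-1.0], requires_grad=True)
    y = torch.sqrt(x)  # forward produces NaN; its backward does too
    try:
        y.backward()
    except RuntimeError as e:
        # Anomaly mode turns the silent NaN gradient into an error that
        # names the faulty backward function and prints the forward traceback.
        print(e)
```

Note that anomaly mode adds noticeable overhead, so it is meant for debugging runs only.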

Thanks! The error happens during the backward pass. I found it by enabling anomaly mode; it is caused by a call to torch.zeros_like(). :grimacing:


Oh nice! Then, given the doc, you can pass memory_format= to this function, and you want torch.contiguous_format (list here) to make sure you can view the result :slight_smile:
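Putting the fix together, a short sketch (my own example; the channels_last tensor just stands in for any input with non-default strides): since 1.5, zeros_like defaults to memory_format=torch.preserve_format, so it copies the input's strides, and passing torch.contiguous_format restores the pre-1.5 behavior.

```python
import torch

# An input with non-default strides (channels_last as an example).
x = torch.randn(2, 3, 4, 5).contiguous(memory_format=torch.channels_last)

# Default since 1.5: the input's memory format is preserved, so the
# result is not contiguous in the default (row-major) sense.
z = torch.zeros_like(x)
print(z.is_contiguous())  # False

# Asking for contiguous_format explicitly makes .view() safe again.
z = torch.zeros_like(x, memory_format=torch.contiguous_format)
print(z.is_contiguous())  # True
flat = z.view(-1)
```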
