How to inplace resize variables that require grad

I would like to resize, in place, a variable that requires grad. Using t.resize_(size) fails with the error RuntimeError: cannot resize variables that require grad. Further, the solution mentioned in python - Resize PyTorch Tensor - Stack Overflow doesn’t work either: since PyTorch 1.1 it appears to have been disallowed, and in PyTorch 1.1.0 it fails with

RuntimeError: set_sizes_contiguous is not allowed on a Tensor created from .data or .detach().
If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset)
without autograd tracking the change, remove the .data / .detach() call and wrap the change in a with torch.no_grad(): block.

However, I had already wrapped the t.resize_(size) call in a with torch.no_grad(): block when it failed.
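Roughly, the three attempts look like this (the tensor name and sizes are just placeholders, and I am assuming the Stack Overflow workaround is the usual .data variant):

import torch

t = torch.randn(3, 4, requires_grad=True)

# 1) Direct in-place resize on a tensor that requires grad:
# t.resize_(6, 2)        # RuntimeError: cannot resize variables that require grad

# 2) The Stack Overflow workaround via .data (assumed), disallowed since PyTorch 1.1:
# t.data.resize_(6, 2)   # RuntimeError: set_sizes_contiguous is not allowed ...

# 3) Wrapping the resize in no_grad, which also failed for me:
# with torch.no_grad():
#     t.resize_(6, 2)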

Is there a way to resize, in place, a variable that requires grad? I will be resizing the tensor again during the backward pass, before the gradients are calculated, so I don’t think there will be a problem regarding gradients.

Thanks!

Sorry if my suggestion does not work, but you could try the view function in this case.

You will find a lot of code that uses view for reshaping tensors.
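For example, something along these lines (the shapes are just illustrative); note that view only reinterprets the existing elements, so the total number of elements must stay the same and the tensor must be contiguous:

import torch

t = torch.randn(3, 4, requires_grad=True)

# view returns a tensor with the new shape that shares t's storage;
# the element count must match (3*4 == 6*2).
u = t.view(6, 2)

loss = u.sum()
loss.backward()        # view is tracked by autograd
print(t.grad.shape)    # torch.Size([3, 4])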