Help initializing a tensor out-of-place

Hi all,

Inside one of my modules I have the following code to create a cost volume.
The cost is represented as a 3D volume per image, where each slice corresponds to a shift of d pixels. Pixels shifted outside the image are set to zero in the volume.

def cost_volume(ref, tgt, max_disp):
    max_disp4 = max_disp // 4
    batch, channels, h, w = ref.size()
    cost = torch.zeros((batch, channels * 2, max_disp4, h, w),
                       dtype=ref.dtype, device=ref.device, requires_grad=False)

    cost[:, :channels, 0, :, :] = ref
    cost[:, channels:, 0, :, :] = tgt
    for d in range(1, max_disp4):
        cost[:, :channels, d, :, d:] = ref[:, :, :, d:]
        cost[:, channels:, d, :, d:] = tgt[:, :, :, :-d]

    return cost

When running in training mode, PyTorch complains that it cannot compute the gradients because of some in-place operations, which is a fair error message.

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

I was wondering if there is an easy way to convert this code from in-place to out-of-place operations? Or is there any trick to bypass this restriction in this scenario?

Thanks a lot for your help,

Well, if you check one of the dev answers:

An in-place operation is an operation that changes directly the content of a given Tensor without making a copy. In-place operations in PyTorch are always postfixed with a `_`, like `.add_()` or `.scatter_()`. Python operations like `+=` or `*=` are also in-place operations.

Try adding `cost = cost.clone()`.
The `.clone()` call creates a copy of the data.
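For reference, a minimal illustration of the difference on toy tensors (not your original code):

```python
import torch

x = torch.zeros(3)
x.add_(1)      # in-place: x itself is modified
y = x.add(1)   # out-of-place: returns a new tensor, x is unchanged
c = x.clone()  # clone: an independent copy of the data
c[0] = 5       # writing into the clone does not touch x
```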

Thanks JuanFMontesinos for your help.
Unfortunately, this is the first thing that I tried, and I still get the same error message.

I am using PyTorch 0.4.0:
cuda90 1.0 h6433d27_0 pytorch
pytorch 0.4.0 py36hdf912b8_0

You could modify the code to use torch.cat here, I guess.
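A sketch of what that could look like: build each disparity slice out-of-place with `torch.nn.functional.pad` (zero-padding on the left replaces the in-place writes into a zero buffer) and combine the slices with `torch.stack`, a close relative of `torch.cat` that adds the new disparity dimension. This keeps the same interface and output layout as your function, but I haven't tested it in your training setup:

```python
import torch
import torch.nn.functional as F

def cost_volume(ref, tgt, max_disp):
    # Same interface as the in-place version, but every disparity
    # slice is created as a new tensor, so autograd stays happy.
    max_disp4 = max_disp // 4
    slices = [torch.cat([ref, tgt], dim=1)]  # d = 0: no shift
    for d in range(1, max_disp4):
        # zero-pad d columns on the left instead of writing into a buffer
        ref_d = F.pad(ref[:, :, :, d:], (d, 0))
        tgt_d = F.pad(tgt[:, :, :, :-d], (d, 0))
        slices.append(torch.cat([ref_d, tgt_d], dim=1))
    # result shape: (batch, channels * 2, max_disp4, h, w)
    return torch.stack(slices, dim=2)
```

Since no tensor is mutated, gradients can flow back through every slice into `ref` and `tgt`.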