PyTorch Migration 0.3.1 to 1.0 (or 0.4.0)

Hello all, I have code that works well on PyTorch 0.3.1 but breaks on the latest version. When I run it, I get this error:

RuntimeError: shape mismatch: value tensor of shape [16] cannot be broadcast to indexing result of shape [16, 1]
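
For reference, here is a minimal sketch that seems to reproduce the error on 0.4+/1.0 (the buffer shapes and names are assumptions based on the logs below, not the original code):

```python
import torch

# assumed setup: a column buffer indexed by a 1-D index tensor
values = torch.zeros(100, 1)                        # shape [100, 1]
oldest_indices = torch.randint(0, 100, (16,))       # shape [16]
new_values = torch.arange(16, dtype=torch.float32)  # shape [16]

# values[oldest_indices] has shape [16, 1], so assigning a [16] tensor
# raises the broadcast error on 0.4+/1.0 (0.3.1 apparently accepted it):
# values[oldest_indices] = new_values  # RuntimeError: shape mismatch ...

# matching the shapes explicitly avoids the error:
values[oldest_indices] = new_values.unsqueeze(1)    # shape [16, 1]
```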

In the old version I have no problem; when I compare, I get this format (version 0.3.1):

>     self.values[oldest_indices] :  torch.Size([16, 1])
>     self.values[oldest_indices] :  
>         4
>         7
>        13
>        16
>        20
>        27
>        31
>        38
>        40
>        45
>        54
>        59
>        62
>        69
>        74
>        78
>     [torch.cuda.LongTensor of size 16x1 (GPU 0)]

But in the new version of PyTorch I get the error above. When I reshape, I get this, but I don’t want to reshape because other bugs appear (latest version):

>     self.values[oldest_indices] :  torch.Size([16, 1])
>     self.values[oldest_indices] :  tensor([[ 4],
>             [ 6],
>             [12],
>             [18],
>             [22],
>             [27],
>             [34],
>             [37],
>             [40],
>             [49],
>             [51],
>             [57],
>             [60],
>             [66],
>             [71],
>             [77]], device='cuda:0')

How can I get the same format as in the old version? Thank you in advance for your help.

NB: When I reshape, I have other problems, with backward() for example.

Can you use your_tensor.squeeze(1)? This will convert tensors of the form [[1],[2],[3]] to [1,2,3].
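
For illustration, a quick standalone check of what squeeze(1) does (not the original code):

```python
import torch

t = torch.tensor([[1], [2], [3]])  # shape [3, 1]
print(t.shape)                     # torch.Size([3, 1])
print(t.squeeze(1))                # tensor([1, 2, 3])
print(t.squeeze(1).shape)          # torch.Size([3])
```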

Yeah, thank you, it works!

But when I change its shape, I get a new error with loss.backward(). I have this:

RuntimeError: invalid argument 3: Index tensor must be either empty or have same dimensions as input tensor at /opt/conda/conda-bld/pytorch_1549630534704/work/aten/src/THC/generic/THCTensorScatterGather.cu:115
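
If the squeezed tensor ends up as the index argument of torch.gather (a guess, since the original code isn't shown), that would explain this error: gather requires the index to have the same number of dimensions as the input. A sketch:

```python
import torch

q_values = torch.randn(16, 4)         # e.g. one row of action values per sample
actions = torch.randint(0, 4, (16,))  # 1-D index, shape [16]

# a 1-D index against a 2-D input raises
# "Index tensor must be either empty or have same dimensions as input tensor":
# q_values.gather(1, actions)

# keep the index 2-D for gather, then squeeze the result instead:
selected = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)  # shape [16]
```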

I tried changing the dimension of my torch.mean(xxx, dim= ), but it’s not the right way; I get this error too:

RuntimeError: grad can be implicitly created only for scalar outputs
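
That second error just means the loss is not a scalar when backward() is called. A minimal sketch (unrelated to the original code):

```python
import torch

pred = torch.randn(16, requires_grad=True)
target = torch.randn(16)

loss = (pred - target) ** 2  # shape [16], one value per sample
# loss.backward()            # RuntimeError: grad can be implicitly
#                            # created only for scalar outputs

loss.mean().backward()       # reduce to a scalar first, then it works
```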

Is there any way to see the full code? It is a little hard to deduce something useful just from the error logs if there are multiple issues showing up.

Hello, thank you for your response. I found this code in this Git repository:

At line 81: loss.backward(), they say that the input must be the same as the output.

Thank you, ErikJ!