Using target size that is different to the input size

Afternoon,

I have an autoencoder working now, but at the start I get the following warning:

UserWarning: Using a target size (torch.Size([16, 2048])) that is different to the input size (torch.Size([16, 1, 2048])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.

16 is my batch size and the vector length is 2048, so I would expect to see [16, 1, 2048]. I cannot understand where [16, 2048] comes from…

How do I reshape the target to [16, 1, 2048] to solve the problem so that the warning goes away?

It looks like the input vector size is [16, 2048], and using data = data.unsqueeze(1) changes this to [16, 1, 2048], but I still get the warning…
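
For reference, here is a minimal standalone sketch of the shape mismatch as I understand it (F.mse_loss is just a stand-in here for my actual loss function):

    import torch
    import torch.nn.functional as F

    out = torch.randn(16, 1, 2048)  # decoder output: [batch, channel, length]
    data = torch.randn(16, 2048)    # batch from the loader: [batch, length]

    # F.mse_loss(out, data) raises the warning: the shapes broadcast
    # to [16, 16, 2048], comparing every sample against every other sample.

    # Either add the channel dim to the target...
    loss = F.mse_loss(out, data.unsqueeze(1))  # both [16, 1, 2048]
    # ...or drop it from the output:
    loss = F.mse_loss(out.squeeze(1), data)    # both [16, 2048]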

thanks,

Chaslie

If you are already changing the tensor shape of the target, then you shouldn't be getting the warning. Can you post your code?

Hi Tahir,

I get the warning at the loss function call in the loop below:

for epoch in range(num_epochs):
    for data, target in train_loader:
        print("data =", data.shape)  # this gives the size as torch.Size([16, 2048])
        data = data.cuda()
        z_loc, z_scale = model.Encoder(data)
        z = model.reparam(z_loc, z_scale)
        out = model.Decoder(z)
        loss = loss_fn(out, data, z_loc, z_scale)
        optimizer.zero_grad()

This happens after I have used the unsqueeze here:

    def forward(self, data):
        data = data.unsqueeze(1)
        print("data_mod =", data.shape)  # this reshapes to torch.Size([16, 1, 2048])