Output tensors in PyTorch

I am in the process of testing the model. When I visualize the predictions I get a zig-zag curve, and when I print the output and target tensors I see the values below. I suspect it is a normalization issue, but how can I fix it?


print(squeezed_output, target)
tensor([ 31.6040, -13.9625,  30.8022,  -7.8554,  32.3444,   0.2831,  32.4801,
         31.4368,  32.1724, -12.4416,  33.9613,   9.4832,  34.2695,  14.4209,
         35.4740, -21.1821,  35.2900, -23.2053,  34.1040,  -9.1866,  33.0392,
          0.6747,  32.8468, -18.1268,  34.3277,  -5.1512,  32.3085,  29.2341,
         33.6519, -12.5062,  32.1929,  -6.7705], grad_fn=<SqueezeBackward0>) tensor([1., 0., 1., 0., 1., 0., 1., 1., 1., 0., 1., 1., 1., 1., 1., 0., 1., 0.,
        1., 0., 1., 0., 1., 0., 1., 0., 1., 1., 1., 0., 1., 0.])
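
The outputs above look like raw, unnormalized scores, while the targets are binary 0/1. As a minimal sketch of what I mean by "normalization", this is the kind of squashing I think might be missing (using sigmoid here is just my assumption about what a binary-classification output would need; the sample values are copied from the printout above):

import torch

# Hypothetical example: squash raw scores into [0, 1] before visualizing,
# assuming squeezed_output holds logits from a final linear layer.
squeezed_output = torch.tensor([31.6040, -13.9625, 30.8022, -7.8554])
probs = torch.sigmoid(squeezed_output)   # values now lie in [0, 1]
preds = (probs > 0.5).float()            # hard 0/1 predictions to plot against target
print(probs, preds)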

However, I have already reshaped my tensors when calculating the loss:

# loss = criterion(squeezed_output, target)
loss = criterion(squeezed_output.view(-1, 1), target.view(-1, 1))  # reshape both to (N, 1)
valid_loss = loss.item() * data.size(0)  # scale the batch loss by the batch size
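
For context, here is a simplified, self-contained sketch of the surrounding validation step. The model, valid_loader, and nn.BCEWithLogitsLoss criterion in this snippet are assumptions added so it runs on its own, not my exact code:

import torch
import torch.nn as nn

# Assumed setup for the sketch; my real model and loader are different.
model = nn.Linear(10, 1)
criterion = nn.BCEWithLogitsLoss()  # assumed criterion; it expects raw scores
valid_loader = [(torch.randn(32, 10), torch.randint(0, 2, (32,)).float())]

model.eval()
valid_loss = 0.0
for data, target in valid_loader:
    output = model(data)                 # shape (batch_size, 1), raw scores
    squeezed_output = output.squeeze()   # shape (batch_size,)
    loss = criterion(squeezed_output.view(-1, 1), target.view(-1, 1))
    valid_loss += loss.item() * data.size(0)
valid_loss /= sum(d.size(0) for d, _ in valid_loader)
print(valid_loss)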