My pre-trained model is giving negative floating-point values instead of positive ones (MNIST digit recognizer)

I have a test_dataloader containing 14000 batches. There are 28000 test images and I am using a batch size of 2. I have a pre-trained model and I am now trying to test it.

However, my testing.py output shows negative floating-point numbers, but it should be positive integers (the predicted digits). Full code: GitHub link

Could you tell me what I have to change?

import numpy as np
import torch

final_output = []
model.eval()                      # disable dropout / batch-norm updates
with torch.no_grad():             # no gradients needed at test time
    for i, data in enumerate(test_data):
        data = data.unsqueeze(1)  # [2, 28, 28] -> [2, 1, 28, 28] (add channel dim)
        output = model(data).cpu().numpy()
        final_output.append(output)

result = np.concatenate(final_output)

print(result)

The output is like this:

[[-4.397916    0.7076113   2.1967683  ...  0.06060949 -2.8013513
 -8.800405  ]
[-3.533296   -3.1798646  -5.6163416  ... -4.7265635  -1.8589627
  0.5682605 ]
[-1.8575612   3.9310014  -4.122321   ... -1.2687542  -3.5150855
 -5.7542324 ]
...
[-8.762509   -8.240637   -2.7152536  ... -1.5188062  -5.932935
  2.6340218 ]
[-1.9312052  -2.675097   -2.2223709  ... -1.4572031  -8.078956
 -4.047556  ]
[-1.9931098  -2.840486   -3.620531   ... -2.5536153  -1.735633
 -2.317892  ]]

Any kind of suggestion is appreciated.

I think you meant to do data = data.unsqueeze(0). Also, it might be that softmax is not included in the pretrained model (softmax is usually folded into the loss function, so we have to apply it manually at test time).
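To expand on the softmax point: the numbers in the question are raw logits, and since softmax is monotonic, you do not even need it to get the predicted digits; taking the argmax over the class dimension is enough. A minimal sketch, assuming `result` is the `(N, 10)` array produced by `np.concatenate(final_output)` (the logit values below are made up for illustration):

```python
import numpy as np

# Stand-in for the (N, 10) logit array from the question.
result = np.array([
    [-4.4, 0.7, 2.2, 0.1, -2.8, -8.8, -1.0, -0.5, 0.3, -3.0],
    [-3.5, -3.2, -5.6, -4.7, -1.9, 0.6, -2.0, -0.1, -4.0, -1.0],
])

# Optional: turn logits into probabilities with a numerically stable softmax.
probs = np.exp(result - result.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# The predicted digit is the index of the largest logit; softmax is
# monotonic, so argmax over logits and over probabilities agree.
predictions = result.argmax(axis=1)
print(predictions)  # -> [2 5]
```

For a submission file you would write `predictions` (positive integers 0–9) rather than the raw logits.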

@Kushaj after using unsqueeze(0) I get this error:

RuntimeError: Given groups=1, weight of size [32, 1, 3, 3], expected input[1, 2, 28, 28] to have 1 channels, but got 2 channels instead
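That error actually confirms unsqueeze(1) was correct: Conv2d expects input of shape [batch, channels, height, width], and with a batch of 2 grayscale images the dataloader yields [2, 28, 28]. A small sketch of the two choices (assuming PyTorch is available):

```python
import torch

data = torch.zeros(2, 28, 28)  # a batch of 2 grayscale 28x28 images

# unsqueeze(1) inserts the channel dimension: batch=2, channels=1,
# which matches a Conv2d layer with in_channels=1.
print(data.unsqueeze(1).shape)  # torch.Size([2, 1, 28, 28])

# unsqueeze(0) inserts a dimension at the front, so the conv layer
# reads the batch dimension as 2 input channels -> the RuntimeError.
print(data.unsqueeze(0).shape)  # torch.Size([1, 2, 28, 28])
```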

Solved the problem using LongTensor.