Analyse own images in MNIST format

Hello,
I wrote a method to classify my own images, which I created in the MNIST format.
Unfortunately, it doesn't return correct results.

After training the NN, the implemented test method reports:
“Test set: Average loss: 0.0253, Accuracy: 9922/10000 (99%)”, so the NN should be able to classify my images pretty well.

Here is the code:

import torch
from torch.autograd import Variable
from torchvision import transforms
from PIL import Image


def analyse(sample):
    use_cuda = torch.cuda.is_available()
    hardware = torch.device("cuda" if use_cuda else "cpu")
    model = Net().to(hardware)
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])

    model.eval()
    image = Image.open(sample)
    image_tensor = transform(image)
    image_tensor_array = image_tensor.unsqueeze(0)  # add batch dimension
    # pil_image = transforms.ToPILImage(mode='L')(image_tensor)
    with torch.no_grad():
        data = Variable(image_tensor_array.cuda())
    # plt.imshow(pil_image)
    # plt.show()
    out = model(data)
    print(out.data.max(1, keepdim=True)[1])
    return str(out.data.max(1, keepdim=True)[1]) + "\n"

To rule out a wrong input format, I also tested my method with an original sample from MNIST.
It still seems to give random results.
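
Roughly how such a check could look, as a sketch (the sample index and the file name are just placeholders I chose here):

from torchvision import datasets

# load the MNIST test set without any transform, so each item is a PIL image + label
mnist = datasets.MNIST("./data", train=False, download=True)
pil_digit, label = mnist[0]            # PIL image in mode "L", 28x28
pil_digit.save("mnist_sample.png")     # placeholder file name
print("true label:", label)
print("prediction:", analyse("mnist_sample.png"))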

Thanks for any help…
Dear

Could you explain your use case a bit more?
If I understand the problem correctly, you’ve trained a model on MNIST and are getting 99% accuracy on the test set.
Now you would like to retrain the model on another dataset with single-channel image tensors of spatial size 28x28, but the accuracy is bad?
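
In case the input format is the issue, a minimal preprocessing sketch could look like this (the file name is a placeholder; the mean/std are the standard MNIST statistics):

from PIL import Image
from torchvision import transforms

mnist_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # force a single channel
    transforms.Resize((28, 28)),                  # MNIST spatial size
    transforms.ToTensor(),                        # float tensor in [0, 1], shape [1, 28, 28]
    transforms.Normalize((0.1307,), (0.3081,)),   # MNIST mean / std
])

img = Image.open("my_digit.png")                  # placeholder path
batch = mnist_transform(img).unsqueeze(0)         # shape [1, 1, 28, 28]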

PS: Variables are deprecated since PyTorch 0.4, so you can use tensors directly now. Also, you shouldn’t use the .data attribute, as it might yield unwanted side effects.
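
For reference, a minimal sketch of the same prediction step without Variable and .data (model, hardware, and image_tensor_array refer to the objects from your analyse method):

import torch

model.eval()
with torch.no_grad():
    out = model(image_tensor_array.to(hardware))  # move input to the same device as the model
    pred = out.argmax(dim=1)                      # predicted class index, shape [1]
print(pred.item())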