Analyse own images in MNIST format

I wrote a method to classify my own images, created in the MNIST format.
Unfortunately, it doesn't return the right results.

After training the NN, the implemented test method reports:
“Test set: Average loss: 0.0253, Accuracy: 9922/10000 (99%)”, so the NN should be able to classify my images pretty well.

Here is the code:

import torch
from torchvision import transforms

def analyse(sample):
    use_cuda = torch.cuda.is_available()
    hardware = torch.device("cuda" if use_cuda else "cpu")
    model = Net().to(hardware)
    model.eval()  # disable dropout/batchnorm updates for inference
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])

    image = sample
    image_tensor = transform(image)
    image_tensor_array = image_tensor.unsqueeze(0)  # add batch dimension
    # pil_image = transforms.ToPILImage(mode='L')(image_tensor)
    with torch.no_grad():
        data = image_tensor_array.to(hardware)
        out = model(data)
    # plt.imshow(pil_image)
    pred = out.max(1, keepdim=True)[1]  # index of the highest logit
    print(pred)
    return str(pred) + "\n"

To rule out a wrong input format, I also tested my method with an original sample from MNIST.
It still seems to give random results.

Thanks for any help…

Could you explain your use case a bit more?
If I understand the problem correctly, you’ve trained a model on MNIST and get a 99% accuracy.
Now you would like to retrain the model on another dataset, with single-channel image tensors having a spatial size of 28x28, but the accuracy is bad?

PS: Variables are deprecated since PyTorch 0.4, so you can use tensors now. You also shouldn't use the .data attribute, as it might yield unwanted side effects.
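Without Variables, inference is just plain tensors under `torch.no_grad()`. A minimal sketch, using a stand-in linear layer rather than your `Net`:

```python
import torch
import torch.nn as nn

model = nn.Linear(28 * 28, 10)  # stand-in for a trained MNIST model
model.eval()

x = torch.randn(1, 28 * 28)  # fake flattened MNIST-sized input

with torch.no_grad():  # replaces the old Variable(..., volatile=True)
    out = model(x)

pred = out.argmax(dim=1, keepdim=True)  # predicted class index
print(pred.shape)  # torch.Size([1, 1])
```

`out` carries no gradient history here, so there is no need to reach for `.data` at all.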
