PyTorch 1.0 - How to predict single images - mnist example?

Thanks for the answer.

I updated my code to:

single_loaded_img = test_loader.dataset.data[0]
single_loaded_img = single_loaded_img.to(device)
single_loaded_img = single_loaded_img[None, None]

out_predict = model(single_loaded_img)

this produced the following error:

RuntimeError: _thnn_conv2d_forward is not implemented for type torch.ByteTensor
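For context, a minimal self-contained sketch (not the exact code from the thread): the raw dataset.data tensors in torchvision's MNIST are stored as uint8 (ByteTensor), while conv layers only operate on floating-point input, which is what this error is complaining about.

```python
import torch
import torch.nn as nn

# Stand-in for test_loader.dataset.data[0]: raw MNIST pixels are uint8.
raw = torch.randint(0, 256, (28, 28), dtype=torch.uint8)
x = raw[None, None]                # add batch and channel dims -> [1, 1, 28, 28]
print(x.dtype)                     # torch.uint8 -- this is what triggers the error

conv = nn.Conv2d(1, 1, kernel_size=3)
x = x.float()                      # cast to float32 before the forward pass
out = conv(x)
print(out.dtype)                   # torch.float32
```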

So I tried to follow this thread:
(link to the thread omitted)

So I changed it to:

single_loaded_img = test_loader.dataset.data[0]
single_loaded_img = single_loaded_img.to(device)
single_loaded_img = single_loaded_img[None, None]
single_loaded_img = single_loaded_img.type('torch.DoubleTensor')

out_predict = model(single_loaded_img)

but this returned:

RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'weight'
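The reason, as far as I can tell: nn.Module parameters are created as float32 by default, so a float64 (Double) input no longer matches the layer's weights. A small self-contained sketch:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, kernel_size=3)
print(conv.weight.dtype)            # torch.float32 -- parameters default to float32

x = torch.rand(1, 1, 28, 28).double()  # float64 input, like the DoubleTensor cast above
try:
    conv(x)
except RuntimeError as e:
    print(e)                        # dtype mismatch between input and weight
```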

So I figured I had to do this:

single_loaded_img = test_loader.dataset.data[0]
single_loaded_img = single_loaded_img.to(device)
single_loaded_img = single_loaded_img[None, None]
single_loaded_img = single_loaded_img.type('torch.FloatTensor') # instead of DoubleTensor

out_predict = model(single_loaded_img)

and finally

print(out_predict)
pred = out_predict.max(1, keepdim=True)[1]
print(pred)

It's working. The output looks a bit wrapped, tensor([[7]]), but that's OK.
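As an aside, indexing the dataset itself (dataset[0] instead of dataset.data[0]) runs the dataset's transform pipeline (e.g. ToTensor and Normalize), so the single image matches what the model saw during training, and argmax plus .item() unwraps the prediction into a plain int. A self-contained sketch with a stand-in model (SmallNet and the random image are made up for illustration, not the model or data from the post):

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Tiny stand-in MNIST classifier, just to make the sketch runnable."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3)
        self.fc = nn.Linear(8 * 26 * 26, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = SmallNet()
model.eval()

img = torch.rand(1, 28, 28)        # stand-in for a transformed MNIST image [C, H, W]
single = img.unsqueeze(0)          # add the batch dim -> [1, 1, 28, 28]

with torch.no_grad():              # no gradient tracking needed for inference
    out_predict = model(single)

pred = out_predict.argmax(dim=1)   # same index as out_predict.max(1, keepdim=True)[1]
print(pred.item())                 # a plain int instead of tensor([[7]])
```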

I still wonder, because you wrote:

Where can I find / see / read this? The docs for nn.Conv2d only say that in_channels is an int:

CLASS torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)

and

  • in_channels (int) – Number of channels in the input image
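For what it's worth, the 4D input shape is implied by in_channels: Conv2d expects input of shape (N, C, H, W), and the channel dimension C has to equal in_channels. A quick sketch:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=3)

# in_channels=1 means dim 1 of the input (the channel dim) must be 1.
ok = torch.rand(1, 1, 28, 28)      # [N, C, H, W] with C == in_channels
print(conv(ok).shape)              # torch.Size([1, 4, 26, 26])

bad = torch.rand(1, 3, 28, 28)     # C == 3 does not match in_channels == 1
try:
    conv(bad)
except RuntimeError:
    print("channel mismatch")
```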