Checking a single input after a model is trained

Hi!

I’ve adapted the example classifier from the CIFAR10 tutorial to load my own dataset, train on it, and test it, and it’s working OK. However, now that the model is finished, I don’t know how to test just a single input. Here is the whole source code of my classifier. The part where I don’t really know what to do is the following:

  def check_image(self, filename):
    img = PIL.Image.open(filename)
    img_data = self.transform(img)
    self.net(img_data)  # <--- Crash here

This method receives an image filename; I open the image, apply the transform, and pass the result to the network. However, when I do so it crashes with the following error message:

$ ./generic_images_classifier.py paintings_dataset/picasso/copy-32x32-1.jpg 
[Mon Dec  3 18:02:19 2018] Using device cuda:0
[Mon Dec  3 18:02:19 2018] Loading previously stored model
[Mon Dec  3 18:02:19 2018] GroundTruth:  zurbaran
[Mon Dec  3 18:02:19 2018] Predicted:  rubens
Traceback (most recent call last):
  File "./generic_images_classifier.py", line 261, in <module>
    main()
  File "./generic_images_classifier.py", line 254, in main
    trainer.check_image(filename)
  File "./generic_images_classifier.py", line 241, in check_image
    self.net(img_data)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "./generic_images_classifier.py", line 42, in forward
    x = self.pool(F.relu(self.conv1(x)))
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/conv.py", line 301, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [6, 3, 5, 5], but got input of size [3, 32, 32] instead

So, my question is: how can I just test one single image against the model I’ve already trained?

Thanks in advance!

It seems your input is missing the batch dimension.
Generally you should pass the input as [batch_size, channels, height, width].
In your use case, loading a single image will give you a tensor of shape [channels, height, width].
Just unsqueeze the input and pass it to the model:

img_data = img_data.unsqueeze(0)
self.net(img_data)
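
For completeness, here is a minimal sketch of how check_image could look once the batch dimension is added, with the tensor moved to the same device as the model and the prediction mapped back to a label. The self.device and self.classes attributes are assumptions based on the usual tutorial structure, so adjust the names to whatever your class actually uses; torch and PIL are assumed to be imported at the top of the script.

  def check_image(self, filename):
    # Load the image and apply the same preprocessing used during training
    img = PIL.Image.open(filename).convert('RGB')
    img_data = self.transform(img)       # shape: [channels, height, width]
    img_data = img_data.unsqueeze(0)     # add batch dim -> [1, channels, height, width]
    img_data = img_data.to(self.device)  # assumption: the model was moved to self.device

    self.net.eval()                      # switch dropout/batchnorm layers to eval mode
    with torch.no_grad():                # no gradients needed for inference
      output = self.net(img_data)        # shape: [1, num_classes]
    _, pred = torch.max(output, 1)       # index of the highest score, as in the tutorial
    return self.classes[pred.item()]     # assumption: self.classes maps index -> label name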

That was it. Thanks a lot again @ptrblck!
