Display a tensor image in matplotlib

I’m doing a project for Udacity’s AI with Python nanodegree.

I’m trying to display a torch.cuda.FloatTensor that I obtained from an image file path. Below that image will be a bar chart showing the top 5 most likely flower names with their associated probabilities.

path = 'flowers/test/1/image_06743.jpg' 

top5_probs, top5_class_names = predict(path, model, 5)


flower_np_image = process_image(Image.open(path))
flower_tensor_image = torch.from_numpy(flower_np_image).type(torch.cuda.FloatTensor)
flower_tensor_image = flower_tensor_image.unsqueeze_(0)

axs = imshow(flower_tensor_image, ax = plt)
fig, ax = plt.subplots()
y_pos = np.arange(len(top5_class_names))
plt.barh(y_pos, list(reversed(top5_probs)))
plt.yticks(y_pos, list(reversed(top5_class_names)))
plt.ylabel('Flower Type')
plt.xlabel('Class Probability')

The imshow function was given to me as

def imshow(image, ax=None, title=None):
    if ax is None:
        fig, ax = plt.subplots()

    # PyTorch tensors assume the color channel is the first dimension
    # but matplotlib assumes it is the third dimension
    image = image.transpose((1, 2, 0))

    # Undo preprocessing
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    image = std * image + mean

    # Image needs to be clipped between 0 and 1 or it looks like noise when displayed
    image = np.clip(image, 0, 1)

    ax.imshow(image)

    return ax

But I get this output

[0.8310797810554504, 0.14590543508529663, 0.013837042264640331, 0.005048676859587431, 0.0027143193874508142]
['petunia', 'pink primrose', 'balloon flower', 'hibiscus', 'tree mallow']

TypeError                                 Traceback (most recent call last)
<ipython-input-17-f54be68feb7a> in <module>()
     12 flower_tensor_image = flower_tensor_image.unsqueeze_(0)
---> 14 axs = imshow(flower_tensor_image, ax = plt)
     15 axs.axis('off')
     16 axs.title(top5_class_names[0])

<ipython-input-15-9c543acc89cc> in imshow(image, ax, title)
      5     # PyTorch tensors assume the color channel is the first dimension
      6     # but matplotlib assumes is the third dimension
----> 7     image = image.transpose((1, 2, 0))
      9     # Undo preprocessing

TypeError: transpose(): argument 'dim0' (position 1) must be int, not tuple

<matplotlib.figure.Figure at 0x7f5855792160>

My predict function works, but imshow chokes on the call to transpose. Any ideas on how to fix this? I suspect it has something to do with converting the tensor back to a numpy array.

The notebook that I’m working on can be found at https://github.com/BozSteinkalt/ImageClassificationProject


Could you try to use .permute instead of .transpose?
In PyTorch, transpose swaps exactly two dimensions, while permute reorders all of them at once.
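A minimal sketch of the idea, using a dummy CPU tensor in place of your flower_tensor_image (since yours is a torch.cuda.FloatTensor, you would also need .cpu() before .numpy()):

```python
import torch

# Dummy 1 x C x H x W tensor, standing in for the unsqueezed image tensor
t = torch.zeros(1, 3, 224, 224)

# permute reorders all dimensions at once; tensor.transpose only swaps two,
# which is why passing the tuple (1, 2, 0) raises the TypeError you saw.
img = t.squeeze(0).permute(1, 2, 0)  # drop batch dim, then C x H x W -> H x W x C
print(tuple(img.shape))              # (224, 224, 3)

# Before the numpy math in imshow (std * image + mean), convert to an array;
# for a CUDA tensor this would be img.cpu().numpy()
img_np = img.numpy()
print(img_np.shape)                  # (224, 224, 3)
```

Alternatively, keeping the given imshow unchanged works if you convert first, e.g. pass flower_tensor_image.squeeze(0).cpu().numpy(), since numpy's transpose (unlike PyTorch's) does accept a tuple of axes.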