How to convert images into a tensor

Hello.

I stored multiple images as below.

    import cv2

    images = []  # image_names is assumed to be defined earlier
    for name in image_names:
        images.append(cv2.imread("./train_mini/" + name))

And I’d like to use these images for CNN training later. However, when I tried to store the data in “torch.utils.data.TensorDataset” like below, it raised the error “RuntimeError: can’t convert a given np.ndarray to a tensor - it has an invalid type. The only supported types are: double, float, int64, int32, and uint8.” So I checked the data type of images, and it was “object”.

    train = torch.utils.data.TensorDataset(torch.from_numpy(X_train), torch.from_numpy(Y_train))

How can I solve this problem? I am completely stuck…

It seems that your images variable is a list of np arrays? You should combine them into one large np array.
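A minimal sketch of that suggestion, using hypothetical dummy images in place of the ones loaded with cv2.imread (which returns uint8 ndarrays of shape (H, W, C)):

```python
import numpy as np

# Hypothetical stand-ins for the loaded images: three 2x2 BGR images,
# the same shape and dtype that cv2.imread produces on success.
images = [np.zeros((2, 2, 3), dtype=np.uint8) for _ in range(3)]

# np.stack turns the Python list into one (N, H, W, C) uint8 array.
# torch.from_numpy accepts such numeric arrays, but not dtype=object.
X_train = np.stack(images)

print(X_train.shape)   # (3, 2, 2, 3)
print(X_train.dtype)   # uint8
```

Once X_train is a plain uint8 array like this, torch.from_numpy(X_train) should no longer raise the invalid-type error.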

Thank you for the quick reply.

I applied

    images = np.array(images)

Does this generate a single np array?

Yes, it should generate an np array containing a lot of images.

However, note that if some of your images failed to load, i.e. are None, this line cannot cast the entire list into a numeric (uint8) array, and you will get an array of objects instead.
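A small sketch of that failure mode, again with hypothetical dummy images (cv2.imread returns None when a file cannot be read):

```python
import numpy as np

good = np.zeros((2, 2, 3), dtype=np.uint8)

# One failed load (a None) poisons the whole conversion into dtype=object.
# (Recent NumPy versions require dtype=object explicitly for such mixed lists.)
with_none = np.array([good, None, good], dtype=object)
print(with_none.dtype)   # object

# Filtering out the None entries first restores a proper uint8 array:
clean = np.array([img for img in [good, None, good] if img is not None])
print(clean.dtype)       # uint8
```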

Thank you.

    print(any(elem is None for elem in images))

However, this prints “False”, which means there is no None in “images”…
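If no entry is None, another common cause of a dtype=object result (an assumption about this case, not confirmed in the thread) is that the images have different shapes, e.g. different resolutions. A quick hypothetical diagnostic:

```python
import numpy as np

# Two hypothetical images with different heights, as cv2.imread returns
# them when the source files have different resolutions.
a = np.zeros((2, 2, 3), dtype=np.uint8)
b = np.zeros((3, 2, 3), dtype=np.uint8)

# Mixed shapes also force an object array, even with no None entries:
mixed = np.array([a, b], dtype=object)
print(mixed.dtype)   # object

# Diagnostic: how many distinct shapes occur in the list?
shapes = {img.shape for img in [a, b]}
print(shapes)        # {(2, 2, 3), (3, 2, 3)}
```

If more than one shape shows up, resizing every image to a common size (e.g. with cv2.resize) before stacking would be one way to get a single numeric array.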