__getitem__() is not working when extracting images from paths in a txt file

I have a lot of image data and I am trying to pack 8 images with 1 label iteratively. After that, I will feed the data into the neural network accordingly. But I’m unable to extract the images from my customized Dataset(). Here is my code below.

my txt file looks like this:


the 0 on the last line is the label

and is there any way to put 8 images into 1 tensor with 1 label, and then feed into CNN?

I’m new to Pytorch and really need some help, thanks!

Yes there sure is :slight_smile: Instead of having imgs as a list, try having it as a defaultdict where the key is the index and the value is a list of images. But you might want to rethink this a bit. The txt_path is just 8 images and one label, right? You have several of these files, I suppose? Then you want to put them all in the dataset, no?

This code should get you started.

from collections import defaultdict

dict_imgs = defaultdict(list)
with open(txt_path) as f:
  lines = f.read().splitlines()

dict_imgs[0] = lines # dict_imgs[0] will contain 8 image paths & one label
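If you do have several of these txt files, the same idea extends to all of them. Here is a minimal sketch; the `build_index` name and the `txt_paths` list are my own invention, and I'm assuming each file holds 8 image paths followed by one label line:

```python
from collections import defaultdict

def build_index(txt_paths):
    # One dict entry per txt file, keyed by sample index.
    dict_imgs = defaultdict(list)
    for idx, txt_path in enumerate(txt_paths):
        with open(txt_path) as f:
            # first 8 lines are image paths, last line is the label
            dict_imgs[idx] = f.read().splitlines()
    return dict_imgs
```

Then `dict_imgs[i]` gives you the 8 paths and label for sample i, which maps nicely onto an index-based `__getitem__`.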

Hi Oli, Thanks for your reply, appreciate it!

I rethought it a little and changed the format of the paths in the text file. Basically, I just put all 8 file paths on one line, separated by commas, with the label at the end. It looks like this:


I will put all the input images’ paths in this text file and read each line iteratively.

and instead of reading the list in __init__, I decided to do it in the __getitem__ function, because I have all 8 paths and the label in one line.

import torch
from PIL import Image
from torch.utils.data import Dataset

class MyDataset(Dataset):

    def __init__(self, txt_path, transform=None, target_transform=None):
        imgs = []
        with open(txt_path, 'r') as fh:
            for line in fh:
                imgs.append(line)  # each line: 8 comma-separated paths + label
        self.imgs = imgs
        self.transform = transform
        self.target_transform = target_transform

    def __getitem__(self, index):
        fn = self.imgs[index]
        fn = fn.rstrip()
        paths = fn.split(',')
        image_dir_1 = paths[0]
        image_dir_2 = paths[1]
        label = paths[8]
        img1 = Image.open(image_dir_1).convert('RGB')
        if self.transform is not None:
            img1 = self.transform(img1)
        img2 = Image.open(image_dir_2).convert('RGB')
        if self.transform is not None:
            img2 = self.transform(img2)
        img2 = torch.cat(img1,img2)
        return img2, label

I am using the first 2 images as an example, and it seems to be working, but I don’t know why I can’t use torch.cat() to combine the two tensors properly.

Nice work :slight_smile:

To help you figure out the torch.cat stuff, do some printouts of the tensors (and their shapes) before and after the cat.


In this case I can see that your code is missing a parenthesis: torch.cat takes a tuple of tensors, so it should be img2 = torch.cat((img1, img2)). Just a heads up, if you use a batch size larger than 1, you will probably want to write a custom collate function, just fyi :slight_smile:
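To make the cat and the collate idea concrete, here's a minimal sketch. The image sizes, the `my_collate` name, and converting the label string to an int are all assumptions on my part:

```python
import torch

# Two RGB "images" as 3 x H x W tensors (random stand-ins for real data)
img1 = torch.rand(3, 32, 32)
img2 = torch.rand(3, 32, 32)

# Note the tuple: cat along dim 0 stacks the channels -> 6 x 32 x 32
combined = torch.cat((img1, img2))

def my_collate(batch):
    # batch is a list of (images, label) pairs from __getitem__,
    # where `images` is one cat'ed tensor per sample
    images = torch.stack([item[0] for item in batch])      # B x C x H x W
    labels = torch.tensor([int(item[1]) for item in batch])
    return images, labels
```

You would then pass it to the loader with DataLoader(dataset, batch_size=..., collate_fn=my_collate). With all 8 images cat'ed per sample, `images` would come out as B x 24 x H x W.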