Converting images to PyTorch tensors loses label data

I have a dataset of images. In the code below, I am trying to convert them to PyTorch tensors by first opening them as PIL images:

import os
from PIL import Image
from torchvision import transforms

# choose the training and test datasets
train_data = os.listdir('data/training/')
testing_data = os.listdir('data/testing/')
train_tensors = []
test_tensors = []

# Print out some stats about the training and test data
print('Train data, number of images: ', len(train_data))
print('Test data, number of images: ', len(testing_data))

# The transformation call to resize images and transform them into Tensors
transform = transforms.Compose([
    transforms.RandomResizedCrop((120,120)),
    transforms.PILToTensor()
])

# Converting every train/test image to a PIL image and then to a Pytorch tensor
for train_image in train_data:
    img = Image.open('data/training/' + train_image)
    train_tensors.append(transform(img))

for test_image in testing_data:
    img = Image.open('data/testing/' + test_image)
    test_tensors.append(transform(img))

However, in this process, the labels are completely lost. This is the (truncated) output of train_tensors:

 [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255]],

        [[255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255],
         ...,
         [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255]],

        [[255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 254, 254],
         ...,
         [254, 254, 255,  ..., 254, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255]],

        [[255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255],
         ...,
         [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255],
         [255, 255, 255,  ..., 255, 255, 255]]], dtype=torch.uint8)

When I pass this list to a DataLoader and try to extract the labels, I get a "too many values to unpack" error.

from torch.utils.data import DataLoader

train_loader = DataLoader(train_tensors, batch_size=batch_size, shuffle=True)

dataiter = iter(train_loader)
images, labels = dataiter.__next__()

How can I maintain my label data?

I don't fully understand your question, as it doesn't seem you are loading any labels at all.
In your currently posted code snippet you are loading images and transforming them to tensors, which seems to work fine.
The images, labels = dataiter.__next__() operation fails because the DataLoader only returns the passed train_tensors, which is a plain list of input tensors without any labels, so there is nothing to unpack into labels.
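To keep the labels, wrap your data in a Dataset whose __getitem__ returns an (image, label) pair and pass that Dataset to the DataLoader. I don't know how your labels are stored, so the sketch below assumes (purely as an example) that the class name is encoded at the start of each filename; adapt the labelling logic to your actual setup. If your images are instead sorted into one subfolder per class, torchvision.datasets.ImageFolder already does this for you.

import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms


class LabeledImageDataset(Dataset):
    """Returns (image_tensor, label) pairs instead of bare image tensors."""

    def __init__(self, root, transform=None):
        self.root = root
        self.files = sorted(os.listdir(root))
        self.transform = transform
        # Hypothetical labelling rule: the class name is the part of the
        # filename before the first underscore (e.g. "cat_001.png" -> "cat").
        # Replace this with however your labels are actually stored.
        classes = sorted({f.split('_')[0] for f in self.files})
        self.class_to_idx = {c: i for i, c in enumerate(classes)}

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        fname = self.files[idx]
        img = Image.open(os.path.join(self.root, fname)).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        label = self.class_to_idx[fname.split('_')[0]]
        return img, label


transform = transforms.Compose([
    transforms.RandomResizedCrop((120, 120)),
    transforms.PILToTensor(),
])

train_dataset = LabeledImageDataset('data/training/', transform=transform)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

images, labels = next(iter(train_loader))

With a Dataset like this, each batch is a tuple of an image tensor and a label tensor, so the unpacking into images and labels works as expected.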