Trouble with indexing with ConcatDataset

Hello, I’m brand new to PyTorch and machine learning in general, and I ran into a problem using ConcatDataset with the MNIST dataset. I’m trying to copy certain samples from the MNIST train set, modify them, and then add them back into the dataset. My original MNIST train_set has size 60000, and after I apply transforms.ToTensor(), it consists of 60000 tuples of the form (image, label). However, when I try new_set = ConcatDataset(dataset),
where dataset = [train_set, modified_tuple] and modified_tuple is a single tuple of the form (image, label), my new_set doesn’t count the tuple as one item, so new_set has size 60002 instead of 60001. Does anyone have any advice on how to fix this? I would like new_set to have size 60001, with the modified tuple not being split into 2 separate items.
Thank you!

Try passing the tuple inside a list, as seen here:

import torch
from torchvision import datasets, transforms

dataset = datasets.MNIST(root='data', download=True, transform=transforms.ToTensor())
print(len(dataset))
> 60000
sample = dataset[0]

concat_dataset = torch.utils.data.ConcatDataset([dataset, [sample]])
print(len(concat_dataset))
> 60001
print(concat_dataset[60000] == sample)
> True
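For context on why the extra item appears: ConcatDataset treats every element of the list you pass it as a dataset, calling len() and indexing on it, so a bare (image, label) tuple is itself treated as a dataset of length 2. A minimal sketch with dummy tensors (the shapes and labels here are made up for illustration):

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset

# A small stand-in "train set" of 5 samples.
base = TensorDataset(torch.zeros(5, 1, 28, 28), torch.zeros(5, dtype=torch.long))

# One modified (image, label) sample.
sample = (torch.zeros(1, 28, 28), 7)

# Passing the tuple directly: ConcatDataset sees len(sample) == 2,
# so the image and the label each count as one item.
wrong = ConcatDataset([base, sample])
print(len(wrong))   # 7  (5 + 2)

# Wrapping the tuple in a list: a one-element dataset, one extra item.
right = ConcatDataset([base, [sample]])
print(len(right))   # 6  (5 + 1)
```

This is also why the fix above works: `[sample]` is a sequence of length 1 whose single element is the whole tuple.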

Thank you so much for the response! I was able to find a different solution, but this way is more concise.