What I want to do is load the COCO captions dataset with torch.utils.data.DataLoader. I resized all the COCO images to a fixed size and saved them into the data/train2014resized directory.
import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms

cap = dset.CocoCaptions(root='./data/train2014resized',
                        annFile='./data/annotations/captions_train2014.json',
                        transform=transforms.ToTensor())

print('Number of samples: ', len(cap))
img, target = cap[3]  # this works well

train_loader = torch.utils.data.DataLoader(
    cap, batch_size=1, shuffle=False, num_workers=1)

data_iter = iter(train_loader)
print(data_iter.next())  # this raises an error
When I ran the code above, I got a huge RuntimeError message. Below is the bottom of the error message.
File "/Users/yunjey/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 75, in default_collate
return [default_collate(samples) for samples in transposed]
File "/Users/yunjey/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 71, in default_collate
elif isinstance(batch[0], collections.Iterable):
File "/Users/yunjey/anaconda2/lib/python2.7/abc.py", line 132, in __instancecheck__
if subclass is not None and subclass in cls._abc_cache:
File "/Users/yunjey/anaconda2/lib/python2.7/_weakrefset.py", line 75, in __contains__
return wr in self.data
RuntimeError: maximum recursion depth exceeded in cmp
What can I do to solve this problem?
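For context, the traceback points at default_collate, which is the DataLoader's default batching function. Each CocoCaptions sample is (image, captions) where captions is a list of strings, and default_collate recurses trying to batch those variable-length string lists. A minimal sketch of a workaround I'm considering is a custom collate_fn that stacks the images but leaves the captions as a plain Python list (the fake dataset below is just a stand-in for cap so the snippet is self-contained):

```python
import torch

def caption_collate(batch):
    # Each sample is (image, captions), where captions is a list/tuple
    # of strings. default_collate cannot batch variable-length string
    # lists, so stack only the images and keep captions as a list.
    images = torch.stack([sample[0] for sample in batch], 0)
    captions = [sample[1] for sample in batch]  # list of caption lists
    return images, captions

# Hypothetical stand-in for cap[i]: a 3x64x64 image plus two captions.
fake_dataset = [(torch.zeros(3, 64, 64), ['a cat', 'a small cat'])
                for _ in range(4)]

loader = torch.utils.data.DataLoader(
    fake_dataset, batch_size=2, shuffle=False, collate_fn=caption_collate)

images, captions = next(iter(loader))
print(images.size())  # torch.Size([2, 3, 64, 64])
print(captions[0])    # ['a cat', 'a small cat']
```

I'm not sure this is the intended way to handle CocoCaptions targets, but passing collate_fn=caption_collate when constructing the DataLoader over cap should at least avoid default_collate touching the strings.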