def read_data(self, dataloader, labels=True):
    if labels:
        while True:
            for img, label, _ in dataloader:
                yield img, label
    else:
        while True:
            for img, _, _ in dataloader:
                yield img
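To show how this infinite generator behaves, here is a minimal sketch: the method is reproduced as a plain function (no self), and a list of 3-tuples stands in for the actual torch DataLoader, which also yields (image, label, index) batches:

```python
# Stub "dataloader": a list of (image, label, index) batches standing in
# for the real torch DataLoader (names here are illustrative).
def read_data(dataloader, labels=True):
    if labels:
        while True:
            for img, label, _ in dataloader:
                yield img, label
    else:
        while True:
            for img, _, _ in dataloader:
                yield img

stub_loader = [("img0", 0, 0), ("img1", 1, 1)]
gen = read_data(stub_loader)
print(next(gen))  # ('img0', 0)
print(next(gen))  # ('img1', 1)
print(next(gen))  # ('img0', 0) — the while True loop restarts, so it never ends
```

Because the generator never raises StopIteration, the training loop must decide when to stop; a hang in the underlying dataloader therefore blocks the next(gen) call indefinitely.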
The source code I'm trying to implement is here: https://github.com/sinhasam/vaal
I'm running this in the NVIDIA NGC PyTorch container, version 19.10, but the issue persists in the latest version.
Traceback (most recent call last):
File "../vaal/main.py", line 143, in <module>
main(args)
File "../vaal/main.py", line 73, in main
ndata, nlabel, _ = train_dataset[0]
ValueError: not enough values to unpack (expected 3, got 2)
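The traceback can be reproduced without torch at all: a standard torchvision-style dataset returns (image, target) 2-tuples, so unpacking three names fails. This is a hypothetical stand-in class, not the repo's code:

```python
# A dataset that, like torchvision's built-in datasets, returns 2-tuples.
class TwoTupleDataset:
    def __init__(self):
        self.samples = [("img0", 0)]

    def __getitem__(self, index):
        return self.samples[index]  # only (data, target), no index

train_dataset = TwoTupleDataset()
try:
    ndata, nlabel, _ = train_dataset[0]  # main.py expects a third value (the index)
except ValueError as e:
    print(e)  # not enough values to unpack (expected 3, got 2)
```

The fix the repo uses is a wrapper dataset that returns the index as the third element, which is what the custom dataset below does.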
When I instead use a custom dataset, as they do in the source code, I get past this point (the image and the label are returned correctly), but the dataloader freezes again at the same place.
Custom dataset:
import numpy
from torch.utils.data import Dataset
from torchvision import datasets

class KFuji(Dataset):
    def __init__(self, image_path, json_path):
        self.kfuji = datasets.CocoDetection(root=image_path, annFile=json_path,
                                            transform=coco_transformer())

    def __getitem__(self, index):
        # Indices coming from the sampler can be numpy floats; cast to int
        if isinstance(index, numpy.float64):
            index = index.astype(numpy.int64)
        data, target = self.kfuji[index]
        return data, target, index

    def __len__(self):
        return len(self.kfuji)
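The index-returning pattern above can be checked in isolation with a minimal sketch: a plain list stands in for CocoDetection, and int() replaces the numpy coercion so the example needs no third-party imports (an assumption; the real indices may be numpy scalars):

```python
# Generic index-returning wrapper in the same spirit as KFuji; the base
# "dataset" is a plain list of (data, target) pairs.
class IndexedDataset:
    def __init__(self, base):
        self.base = base

    def __getitem__(self, index):
        index = int(index)  # samplers may hand back numpy scalars
        data, target = self.base[index]
        return data, target, index  # third value satisfies img, label, _ unpacks

    def __len__(self):
        return len(self.base)

ds = IndexedDataset([("img0", "ann0"), ("img1", "ann1")])
print(ds[1])    # ('img1', 'ann1', 1)
print(len(ds))  # 2
```

With this wrapper, the `ndata, nlabel, _ = train_dataset[0]` line in main.py unpacks cleanly, which matches the observation that the ValueError goes away once a custom dataset is used.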
I have the same problem (July 2022): a DataLoader with multiple workers hangs after the first batch. Every once in a while it runs fine, and I can iterate over the entire dataset without problems.
I get the same problem when setting num_workers > 0. I also tried adding a timeout and skipping the batch, but after the timeout is reached once, every subsequent batch = next(dataloader_iter) call fails with the same timeout error.
To be sure, I verified that I can go through the whole dataset once without problems.
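The skip-on-timeout attempt described above can be sketched like this. A stub iterator stands in for the DataLoader iterator (an assumption about its shape): once the workers hang, every later next() call fails too, so skipping the batch does not recover the run:

```python
class StuckIterator:
    """Yields one batch, then raises on every subsequent next() call,
    mimicking a DataLoader whose worker processes have hung."""
    def __init__(self):
        self.calls = 0

    def __next__(self):
        self.calls += 1
        if self.calls == 1:
            return "batch0"
        raise TimeoutError("worker timed out")

dataloader_iter = StuckIterator()
batches = []
for _ in range(3):
    try:
        batches.append(next(dataloader_iter))
    except TimeoutError:
        batches.append(None)  # skipping doesn't help: every next() fails again

print(batches)  # ['batch0', None, None]
```

In this situation, recreating the iterator (or falling back to num_workers=0) is usually the only way past the hang, since the underlying worker state is gone.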