Input types can't be cast to the desired output type Long

Hi, I'm getting the following error when trying to generate a batch. I checked, and all tensors to be stacked are torch.int64, as is the out tensor into which they are stacked. Any ideas?

File "/home/gamir/DER-Roei/alon/anaconda3/envs/open_clip/lib/python3.10/site-packages/torch/utils/data/", line 652, in __next__
data = self._next_data()
File "/home/gamir/DER-Roei/alon/anaconda3/envs/open_clip/lib/python3.10/site-packages/torch/utils/data/", line 1347, in _next_data
return self._process_data(data)
File "/home/gamir/DER-Roei/alon/anaconda3/envs/open_clip/lib/python3.10/site-packages/torch/utils/data/", line 1373, in _process_data
File "/home/gamir/DER-Roei/alon/anaconda3/envs/open_clip/lib/python3.10/site-packages/torch/", line 461, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/gamir/DER-Roei/alon/anaconda3/envs/open_clip/lib/python3.10/site-packages/torch/utils/data/_utils/", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/gamir/DER-Roei/alon/anaconda3/envs/open_clip/lib/python3.10/site-packages/torch/utils/data/_utils/", line 52, in fetch
return self.collate_fn(data)
File "/home/gamir/DER-Roei/alon/anaconda3/envs/open_clip/lib/python3.10/site-packages/torch/utils/data/_utils/", line 180, in default_collate
return [default_collate(samples) for samples in transposed] # Backwards compatibility.
File "/home/gamir/DER-Roei/alon/anaconda3/envs/open_clip/lib/python3.10/site-packages/torch/utils/data/_utils/", line 180, in <listcomp>
return [default_collate(samples) for samples in transposed] # Backwards compatibility.
File "/home/gamir/DER-Roei/alon/anaconda3/envs/open_clip/lib/python3.10/site-packages/torch/utils/data/_utils/", line 146, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: input types can't be cast to the desired output type Long

Just to add: this only appears to happen when the DataLoader is multi-process (i.e., num_workers > 0); with a single-process DataLoader everything works fine.
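Not from the thread, but one way to narrow an error like this down is to scan the dataset sample by sample for tensors with an unexpected dtype before collation ever runs. A minimal sketch (the dataset, the `find_dtype_mismatches` helper, and its arguments are all illustrative names, not anything from the original post):

```python
import torch
from torch.utils.data import Dataset


class ToyDataset(Dataset):
    """Toy dataset where one sample accidentally carries a float label."""

    def __init__(self):
        self.labels = [torch.tensor(0), torch.tensor(1),
                       torch.tensor(2.0),  # oops: float32, not int64
                       torch.tensor(3)]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return torch.zeros(4), self.labels[idx]


def find_dtype_mismatches(dataset, field=1, expected=torch.int64):
    """Return indices of samples whose `field`-th tensor is not `expected`."""
    return [i for i in range(len(dataset))
            if dataset[i][field].dtype != expected]


print(find_dtype_mismatches(ToyDataset()))  # → [2]
```

Running this over the real dataset (with num_workers = 0, so the error surfaces in the main process) should point at the offending samples directly.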


Did you solve the problem? I get the same error when I use torch-geometric.

Did you solve the problem? I'm hitting the same error.

I met the same problem with PyG, and solved it by returning edge_index as a FloatTensor and transforming it back to Long after the batch is fetched.
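That workaround, sketched in plain PyTorch rather than PyG (the dataset below is illustrative, not the poster's code): hand the collate function a FloatTensor so everything stacks with one dtype, then cast the fetched batch back to long.

```python
import torch
from torch.utils.data import DataLoader, Dataset


class EdgeDataset(Dataset):
    """Illustrative dataset producing index-like tensors (stand-in for edge_index)."""

    def __init__(self, n=8):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        edge_index = torch.tensor([idx, (idx + 1) % self.n])
        # Workaround: return the indices as float so collation is uniform ...
        return edge_index.float()


loader = DataLoader(EdgeDataset(), batch_size=4)
for batch in loader:
    # ... and transform them back to Long after the batch is fetched.
    edge_index = batch.long()
    assert edge_index.dtype == torch.int64
```

Note this only papers over the dtype mismatch; the cast-inside-`__getitem__` fix in the next reply addresses the root cause.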

I had this issue when I was modifying the labels of certain samples by overriding the Dataset's __getitem__ method. It seems that with multiple workers, PyTorch was no longer able to cast the tensors the DataLoader returned. The fix was to ensure all label tensors the Dataset returned had been cast to long with .type(torch.LongTensor) at the end of the __getitem__ method.
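A minimal sketch of that fix (the dataset and the label-modification rule are made up for illustration): some samples get a replacement label that silently becomes float32, and the final cast makes every label int64 before collation.

```python
import torch
from torch.utils.data import DataLoader, Dataset


class RelabelledDataset(Dataset):
    """Illustrative dataset that overrides the labels of certain samples."""

    def __init__(self):
        self.data = torch.randn(6, 3)
        self.labels = [0, 1, 0, 1, 0, 1]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        label = torch.tensor(self.labels[idx])
        if idx % 3 == 0:                  # modify labels of certain samples
            label = torch.tensor(9.0)     # this one is silently float32
        # The fix: always cast to long at the end of __getitem__.
        return self.data[idx], label.type(torch.LongTensor)


loader = DataLoader(RelabelledDataset(), batch_size=6)
batch_data, batch_labels = next(iter(loader))
assert batch_labels.dtype == torch.int64
```

Without the final cast, the batch mixes int64 and float32 labels and default_collate raises exactly the error from the original post.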


I encountered this error while modifying labels in a custom Dataset class. I was incorporating mixup augmentation and randomly softening some labels. The problem arose because the class was inconsistently returning Long labels and Float labels (due to the softening from mixup).

To resolve this, I manually changed the label type before returning it from the __getitem__ method:

return data, label.float()
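A sketch of that situation (the dataset and the label-softening rule here are illustrative, not the poster's actual mixup code): softened labels are inherently float, so casting every label to float in __getitem__ keeps the collate dtype consistent.

```python
import torch
from torch.utils.data import DataLoader, Dataset


class MixupLabelDataset(Dataset):
    """Illustrative dataset: some labels are softened (float), others hard (long)."""

    NUM_CLASSES = 3

    def __init__(self):
        self.data = torch.randn(6, 4)
        self.labels = [0, 1, 2, 0, 1, 2]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        label = torch.nn.functional.one_hot(
            torch.tensor(self.labels[idx]), self.NUM_CLASSES)
        if idx % 2 == 0:
            # Softened labels are inherently float ...
            label = label * 0.9 + 0.1 / self.NUM_CLASSES
        # ... so return every label as float for a consistent batch dtype.
        return self.data[idx], label.float()


loader = DataLoader(MixupLabelDataset(), batch_size=6)
_, labels = next(iter(loader))
assert labels.dtype == torch.float32
```

The loss function then needs to accept soft targets (e.g. nn.CrossEntropyLoss in recent PyTorch supports float class-probability targets).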