Error iterating dataloader, can't cast to the desired type

While training a model, this error occurs at the nth iteration of the first epoch. Any idea what it might be?

RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 157, in default_collate
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 157, in <dictcomp>
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 138, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: result type Double can't be cast to the desired output type Byte

It seems the collate_fn tries to torch.stack the samples into a batch and fails because of a dtype mismatch between Double (float64) and Byte (uint8).
Print the dtypes of all samples before returning them in __getitem__ and make sure they are consistent.
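For example, a minimal sketch of what I mean (the dataset and field names here are hypothetical, not your actual code), printing the dtypes for debugging and casting to fixed dtypes before returning:

```python
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):  # hypothetical dataset, stands in for yours
    def __init__(self, images, masks):
        self.images = images
        self.masks = masks

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = torch.as_tensor(self.images[idx])
        mask = torch.as_tensor(self.masks[idx])

        # Debug: uncomment to find the sample(s) with an unexpected dtype
        # print(idx, image.dtype, mask.dtype)

        # Enforce consistent dtypes so default_collate can always stack the batch
        return {
            "image": image.to(torch.float32),
            "mask": mask.to(torch.uint8),
        }
```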

Such a situation indeed occurred in my case with a data loading pipeline that included some albumentations transforms, as sketched below. By default, albumentations transforms are applied randomly. At the input of the pipeline you have uint8 tensors, and some transforms may output a float64 tensor; you can end up with some samples being uint8 and others float64, so the collate function fails to stack them.
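A hedged sketch of that situation, assuming uint8 numpy images going into an albumentations pipeline (the specific transforms are placeholders, the point is forcing one output dtype at the end so every sample collates):

```python
import numpy as np
import albumentations as A

transform = A.Compose([
    A.HorizontalFlip(p=0.5),           # applied randomly
    A.RandomBrightnessContrast(p=0.5), # applied randomly
    # Force a single output dtype for every sample, whatever ran above:
    A.ToFloat(max_value=255),          # uint8 [0, 255] -> float32 [0, 1]
])

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
out = transform(image=image)["image"]
print(out.dtype)  # float32 for every sample, so torch.stack never sees a mix
```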

I do not know if that applies as well in your case.