RuntimeError when `num_workers` is larger

Hello there,
I’m trying to train a network on a 72-core GPU, so I’d like to set `num_workers=24` for the DataLoader. With smaller values, e.g. 3 workers, the code runs without any issue. When I try 24 workers, however, I get a RuntimeError.

The traceback is as follows:

```
Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/local_disk0/.../torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/local_disk0/.../torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/local_disk0/.../torch/utils/data/_utils/collate.py", line 175, in default_collate
    return [default_collate(samples) for samples in transposed]  # Backwards compatibility.
  File "/local_disk0/.../torch/utils/data/_utils/collate.py", line 175, in <listcomp>
    return [default_collate(samples) for samples in transposed]  # Backwards compatibility.
  File "/local_disk0/.../torch/utils/data/_utils/collate.py", line 140, in default_collate
    out = elem.new(storage).resize_(len(batch), *list(elem.size()))
RuntimeError: Trying to resize storage that is not resizable
```

Does anyone know why this happens?

I assume you meant a 72-core CPU?

Are you able to reproduce the issue using a random tensor as the Dataset and by just increasing the number of workers?
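Something like this minimal sketch could isolate the variable. It replaces your real Dataset with a `TensorDataset` of random tensors (the dataset size, shapes, and batch size here are arbitrary, not taken from your setup) and sweeps `num_workers`. If this clean setup does not fail at higher worker counts, the problem likely lies in what your Dataset returns (e.g. views into shared or memory-mapped storage) rather than in `num_workers` itself:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in dataset: random inputs and integer labels.
# Shapes and sizes are placeholders, not your actual data.
dataset = TensorDataset(
    torch.randn(1024, 3, 32, 32),
    torch.randint(0, 10, (1024,)),
)

# Sweep worker counts; bump this up to (0, 3, 24) to mirror your runs.
for workers in (0, 2):
    loader = DataLoader(dataset, batch_size=64, num_workers=workers)
    x, y = next(iter(loader))
    print(f"num_workers={workers}: batch shapes {tuple(x.shape)}, {tuple(y.shape)}")
```

If the random-tensor version also crashes at 24 workers, that would point to an environment or PyTorch-version issue; if it runs fine, the next step is inspecting what `__getitem__` returns in the real Dataset.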