DataLoader worker (pid(s) 2245) exited unexpectedly with Ubuntu on Windows

I get the error below from the DataLoader when I try to run my Python script on Ubuntu on Windows, in the Windows Terminal.

ERROR: Unexpected segmentation fault encountered in worker.
Traceback (most recent call last):
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 990, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/queue.py", line 179, in get
    self.not_empty.wait(remaining)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/threading.py", line 306, in wait
    gotit = waiter.acquire(True, timeout)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 2245) is killed by signal: Segmentation fault.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "unet3d-lightning-tabular.py", line 462, in <module>
    trainer.fit(model)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
    self._call_and_handle_interrupt(
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
    self._dispatch()
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
    self.training_type_plugin.start_training(self)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
    self._results = trainer.run_stage()
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
    return self._run_train()
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1311, in _run_train
    self._run_sanity_check(self.lightning_module)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1375, in _run_sanity_check
    self._evaluation_loop.run()
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
    dl_outputs = self.epoch_loop.run(dataloader, dataloader_idx, dl_max_batches, self.num_dataloaders)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 140, in run
    self.on_run_start(*args, **kwargs)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 86, in on_run_start
    self._dataloader_iter = _update_dataloader_iter(data_fetcher, self.batch_progress.current.ready)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/loops/utilities.py", line 121, in _update_dataloader_iter
    dataloader_iter = enumerate(data_fetcher, batch_idx)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/utilities/fetching.py", line 199, in __iter__
    self.prefetching(self.prefetch_batches)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/utilities/fetching.py", line 258, in prefetching
    self._fetch_next_batch()
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/pytorch_lightning/utilities/fetching.py", line 300, in _fetch_next_batch
    batch = next(self.dataloader_iter)
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
    idx, data = self._get_data()
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1142, in _get_data
    success, data = self._try_get_data()
  File "/home/khoatruong1412/miniconda3/envs/tensorflow/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1003, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 2245) exited unexpectedly

I have tried many solutions, such as changing num_workers to 4, 8, and 16, wrapping the model setup and training call under an if __name__ == '__main__': guard (roughly as sketched below), and changing the batch size, but so far nothing works.
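For reference, the guard I added looks roughly like this; UNet3DTabular is a placeholder for my actual LightningModule, which builds its DataLoaders in train_dataloader()/val_dataloader() with num_workers set as above:

import pytorch_lightning as pl

def main():
    # placeholder for the real model definition in unet3d-lightning-tabular.py
    model = UNet3DTabular()
    trainer = pl.Trainer(gpus=1)
    # this is the call where the worker crash above is raised
    trainer.fit(model)

if __name__ == "__main__":
    main()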
My computer specs are:
Ryzen 7 3700X
32 GB RAM
RTX 3090

If you want to see the files and folders to have a better understanding of what is going on, please let me know. Thank you for your time!

I would say you need to check whether your Dataset implementation is correct.
Does it work with num_workers=0?
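A quick way to check, independent of Lightning, is to run the Dataset and a single-process DataLoader directly; if the crash still happens there, the problem is inside __getitem__ rather than in the worker processes. A rough sketch (replace dataset with your Dataset instance):

from torch.utils.data import DataLoader

# exercise __getitem__ in the main process first
for i in range(len(dataset)):
    _ = dataset[i]

# then iterate a DataLoader with no worker processes
loader = DataLoader(dataset, batch_size=2, num_workers=0)
for batch in loader:
    pass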

Hi ejguan, I just fixed it a few days ago. The issue was the way I defined the dataset. Thanks for viewing my post!
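For anyone hitting the same error: the exact change isn't posted above, but as a rough illustration of what "the way the dataset is defined" can mean for multi-worker loading, a map-style Dataset is usually safest when __init__ only stores plain, picklable state (e.g. file paths) and the heavy loading happens per item in __getitem__. The class name and file layout below are hypothetical:

import numpy as np
import torch
from torch.utils.data import Dataset

class VolumeDataset(Dataset):
    # Hypothetical 3D-volume dataset: __init__ keeps only picklable state,
    # and each item is loaded inside __getitem__, so every DataLoader worker
    # loads its own data instead of sharing handles from the parent process.
    def __init__(self, volume_paths, labels):
        self.volume_paths = list(volume_paths)
        self.labels = list(labels)

    def __len__(self):
        return len(self.volume_paths)

    def __getitem__(self, idx):
        volume = np.load(self.volume_paths[idx])  # e.g. pre-saved .npy volumes
        x = torch.as_tensor(volume, dtype=torch.float32).unsqueeze(0)  # add channel dim
        y = torch.as_tensor(self.labels[idx], dtype=torch.long)
        return x, y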