RuntimeError: DataLoader worker (pid(s) 12844) exited unexpectedly

I am getting this DataLoader error with YOLOv8 when calling model training. I am trying to run on the GPU but keep getting the error. My training log is:

Ultralytics YOLOv8.0.131 Python-3.11.4 torch-2.0.1 CUDA:0 (NVIDIA GeForce RTX 3050 Laptop GPU, 4096MiB)
yolo\engine\trainer: task=detect, mode=train, model=yolov8n.pt, data=coco.yaml, epochs=20, patience=50, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=None, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=None, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, v5loader=False, tracker=botsort.yaml, save_dir=runs\detect\train3

               from  n    params  module                                       arguments                     

0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
2 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
3 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
4 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
5 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
6 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
8 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
9 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
10 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
11 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1]
12 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]
13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
14 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1]
15 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]
16 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
17 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1]
18 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]
19 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
20 [-1, 9] 1 0 ultralytics.nn.modules.conv.Concat [1]
21 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]
22 [15, 18, 21] 1 897664 ultralytics.nn.modules.head.Detect [80, [64, 128, 256]]
Model summary: 225 layers, 3157200 parameters, 3157184 gradients

Transferred 355/355 items from pretrained weights
AMP: running Automatic Mixed Precision (AMP) checks with YOLOv8n…
AMP: checks passed
train: Scanning C:\Users\vu1ad\Desktop\ReserachPlan\COCO-Dataset\train\labels.cache… 105 images, 3 backgrounds, 0 corrupt: 100%|██████████| 105/105 [00:00<?, ?it/s]
val: Scanning C:\Users\vu1ad\Desktop\ReserachPlan\COCO-Dataset\valid\labels.cache… 50 images, 0 backgrounds, 0 corrupt: 100%|██████████| 50/50 [00:00<?, ?it/s]
Plotting labels to runs\detect\train3\labels.jpg…
C:\Users\vu1ad\anaconda3\envs\Yolo8\Lib\site-packages\seaborn\axisgrid.py:118: UserWarning: The figure layout has changed to tight
self._figure.tight_layout(*args, **kwargs)
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\detect\train3
Starting training for 20 epochs…

  Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size

0%| | 0/7 [00:10<?, ?it/s]

and the error is:

Empty Traceback (most recent call last)
File ~\anaconda3\envs\Yolo8\Lib\site-packages\torch\utils\data\dataloader.py:1132, in _MultiProcessingDataLoaderIter._try_get_data(self, timeout)
1131 try:
→ 1132 data = self._data_queue.get(timeout=timeout)
1133 return (True, data)

File ~\anaconda3\envs\Yolo8\Lib\queue.py:179, in Queue.get(self, block, timeout)
178 if remaining <= 0.0:
→ 179 raise Empty
180 self.not_empty.wait(remaining)

Empty:

The above exception was the direct cause of the following exception:

RuntimeError Traceback (most recent call last)
Cell In[4], line 3
1 model = YOLO()
----> 3 model.train(data="coco.yaml" , epochs = 20 )

File ~\anaconda3\envs\Yolo8\Lib\site-packages\ultralytics\yolo\engine\model.py:373, in YOLO.train(self, **kwargs)
371 self.model = self.trainer.model
372 self.trainer.hub_session = self.session # attach optional HUB session
→ 373 self.trainer.train()
374 # Update model and cfg after training
375 if RANK in (-1, 0):

File ~\anaconda3\envs\Yolo8\Lib\site-packages\ultralytics\yolo\engine\trainer.py:192, in BaseTrainer.train(self)
190 ddp_cleanup(self, str(file))
191 else:
→ 192 self._do_train(world_size)

File ~\anaconda3\envs\Yolo8\Lib\site-packages\ultralytics\yolo\engine\trainer.py:315, in BaseTrainer._do_train(self, world_size)
313 self.tloss = None
314 self.optimizer.zero_grad()
→ 315 for i, batch in pbar:
316 self.run_callbacks('on_train_batch_start')
317 # Warmup

File ~\anaconda3\envs\Yolo8\Lib\site-packages\tqdm\std.py:1178, in tqdm.__iter__(self)
1175 time = self._time
1177 try:
→ 1178 for obj in iterable:
1179 yield obj
1180 # Update and possibly print the progressbar.
1181 # Note: does not call self.update(1) for speed optimisation.

File ~\anaconda3\envs\Yolo8\Lib\site-packages\ultralytics\yolo\data\build.py:38, in InfiniteDataLoader.__iter__(self)
36 """Creates a sampler that repeats indefinitely."""
37 for _ in range(len(self)):
→ 38 yield next(self.iterator)

File ~\anaconda3\envs\Yolo8\Lib\site-packages\torch\utils\data\dataloader.py:633, in _BaseDataLoaderIter.__next__(self)
630 if self._sampler_iter is None:
631 # TODO(https://github.com/pytorch/pytorch/issues/76750)
632 self._reset() # type: ignore[call-arg]
→ 633 data = self._next_data()
634 self._num_yielded += 1
635 if self._dataset_kind == _DatasetKind.Iterable and \
636 self._IterableDataset_len_called is not None and \
637 self._num_yielded > self._IterableDataset_len_called:

File ~\anaconda3\envs\Yolo8\Lib\site-packages\torch\utils\data\dataloader.py:1328, in _MultiProcessingDataLoaderIter._next_data(self)
1325 return self._process_data(data)
1327 assert not self._shutdown and self._tasks_outstanding > 0
→ 1328 idx, data = self._get_data()
1329 self._tasks_outstanding -= 1
1330 if self._dataset_kind == _DatasetKind.Iterable:
1331 # Check for _IterableDatasetStopIteration

File ~\anaconda3\envs\Yolo8\Lib\site-packages\torch\utils\data\dataloader.py:1284, in _MultiProcessingDataLoaderIter._get_data(self)
1282 elif self._pin_memory:
1283 while self._pin_memory_thread.is_alive():
→ 1284 success, data = self._try_get_data()
1285 if success:
1286 return data

File ~\anaconda3\envs\Yolo8\Lib\site-packages\torch\utils\data\dataloader.py:1145, in _MultiProcessingDataLoaderIter._try_get_data(self, timeout)
1143 if len(failed_workers) > 0:
1144 pids_str = ', '.join(str(w.pid) for w in failed_workers)
→ 1145 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
1146 if isinstance(e, queue.Empty):
1147 return (False, None)

RuntimeError: DataLoader worker (pid(s) 12844) exited unexpectedly

The CUDA library is already installed and PyTorch can see the GPU:
import torch
torch.cuda.is_available()

True
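
For reference, a minimal sketch of the code that triggers the error, reconstructed from the traceback above (the coco.yaml dataset and epoch count are the ones shown in the log; anything beyond that is an assumption):

from ultralytics import YOLO

model = YOLO()                            # the training log shows yolov8n.pt being used
model.train(data="coco.yaml", epochs=20)  # fails in the DataLoader on the first training batch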

I was also facing this runtime error. One solution is to set the "KMP_DUPLICATE_LIB_OK" environment variable to "TRUE" and then try running again; that may fix the error.

import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
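
A fuller sketch of how this could be applied, assuming the variable has to be set before torch/ultralytics (and with them the OpenMP runtime) is imported; the training arguments are the ones from the question:

import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"  # assumption: must be set before the OpenMP runtime is loaded

from ultralytics import YOLO  # imported only after the variable is set

model = YOLO("yolov8n.pt")                   # the nano weights from the question's log
model.train(data="coco.yaml", epochs=20)

Note that this only tells the Intel OpenMP runtime to tolerate duplicate copies of libiomp5md.dll instead of aborting, which can make DataLoader workers die silently on Windows; it is a workaround rather than a root-cause fix.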