DataLoader worker is killed by signal: Segmentation fault

I am using a 3090 Ti for training and chose LMDB for reading the data, but during the training phase the DataLoader workers are randomly killed by a signal and produce segmentation faults.

Traceback (most recent call last):
  File "", line 138, in <module>
  File "", line 109, in train
    train_loss, train_top1_acc, train_topk_acc, global_iter = trainer.train()
  File "/home/chem/rsmiles/utils/", line 101, in train
    loss_token = self.criterion_tokens(pred_token_logit, gt_token_label)
  File "/home/chem/.miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/chem/rsmiles/utils/", line 24, in forward
    indices = torch.Tensor([[torch.arange(len(label))[i].item(),
  File "/home/chem/.miniconda3/envs/py38/lib/python3.8/site-packages/torch/utils/data/_utils/", line 66, in handler
RuntimeError: DataLoader worker (pid 21706) is killed by signal: Segmentation fault.

Sometimes the computer just shuts down. The strange thing is that if I remove the training code and only iterate over the dataset, no error occurs. Also, if I open two processes, one that only iterates over the dataset and one that trains, both run normally.

Check the stack trace via:

gdb --args python args

to narrow down which code causes the segfault.
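Note that with `num_workers > 0` the segfault happens inside a forked worker process, which gdb attached to the main process will not catch, so it helps to temporarily set `num_workers=0` so the crash occurs in the process gdb is debugging. A session might look like this (sketch only; `train.py` and its arguments are placeholders for your actual entry point):

```
gdb --args python train.py
(gdb) run      # start the program; gdb stops when SIGSEGV is raised
(gdb) bt       # C-level backtrace of the crashing thread
(gdb) py-bt    # Python-level backtrace, if the python-gdb extensions are installed
```

The `bt` output usually points at the native extension (e.g. an LMDB or PyTorch C++ frame) responsible for the crash.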