Training from saved checkpoints failing on multiple GPUs

I am using PyTorch Lightning for training. I trained a model using 4 GPUs, and now I want to resume training from a saved checkpoint, but I get the following traceback when I set the number of devices to more than 2 GPUs.

Traceback (most recent call last):
File "", line 70, in, dm)#,ckpt_path="./580sdysk/checkpoints/Content_Emb-epoch=09-val_loss=0.01.ckpt")
File "/lib/python3.8/site-packages/pytorch_lightning/trainer/", line 608, in fit
File "/lib/python3.8/site-packages/pytorch_lightning/trainer/", line 36, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/", line 113, in launch
File "/lib/python3.8/site-packages/torch/multiprocessing/", line 197, in start_processes
while not context.join():
File "/lib/python3.8/site-packages/torch/multiprocessing/", line 140, in join
raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 1 terminated with signal SIGKILL