Training from saved checkpoints failing on multiple GPUs

I am using PyTorch Lightning for training. I trained a model on 4 GPUs and now want to resume training from the saved checkpoint, but I get the traceback below whenever I set the number of devices to more than 2 GPUs.
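For context, a minimal sketch of my setup in train.py (here `model` and `dm` stand for my LightningModule and LightningDataModule, which are built earlier in the script; the accelerator/strategy arguments are my assumption of what matches the spawn launcher shown in the traceback):

import pytorch_lightning as pl

trainer = pl.Trainer(
    accelerator="gpu",
    devices=4,              # resuming works with 2 or fewer devices, fails above that
    strategy="ddp_spawn",   # assumption: matches the multiprocessing launcher in the traceback
)
trainer.fit(
    model,
    dm,
    ckpt_path="./580sdysk/checkpoints/Content_Emb-epoch=09-val_loss=0.01.ckpt",
)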

Traceback (most recent call last):
  File "train.py", line 70, in <module>
    trainer.fit(model, dm)#,ckpt_path="./580sdysk/checkpoints/Content_Emb-epoch=09-val_loss=0.01.ckpt")
  File "/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
    call._call_and_handle_interrupt(
  File "/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 36, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/multiprocessing.py", line 113, in launch
    mp.start_processes(
  File "/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
    while not context.join():
  File "/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 140, in join
    raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 1 terminated with signal SIGKILL