Evaluation on validation dataset stuck

I am working with MMF (GitHub: facebookresearch/mmf), a modular framework for vision & language multimodal research from Facebook AI Research (FAIR), and I am using grid features from ResNet-50 on the COCO dataset.

Hardware details:
2 GPUs, each with 11 GB of memory
16 GB of RAM in total

Other details:
gloo backend for training on the 2 GPUs
batch size of 8
num_workers=2

I am training the MOVIE_MCAN model with the following command:
CUDA_VISIBLE_DEVICES=0,1 mmf_run config=projects/movie_mcan/configs/vqa2/defaults.yaml \
    model=movie_mcan \
    dataset=vqa2 \
    run_type=train \
    training.num_workers=2
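In case it matters, this is roughly how I pass the batch size and the gloo backend as overrides on top of that command. This is a sketch; I am quoting the override keys (training.batch_size, distributed.backend) from memory, so treat the exact names as assumptions:

# sketch of the extra overrides for the batch size and gloo backend
# (key names quoted from memory -- treat them as assumptions)
CUDA_VISIBLE_DEVICES=0,1 mmf_run config=projects/movie_mcan/configs/vqa2/defaults.yaml \
    model=movie_mcan \
    dataset=vqa2 \
    run_type=train \
    training.num_workers=2 \
    training.batch_size=8 \
    distributed.backend=gloo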

It starts training:

2021-03-12T15:46:12 | mmf.utils.general: Total Parameters: 254918110. Trained Parameters: 254918110
2021-03-12T15:46:12 | mmf.trainers.core.training_loop: Starting training…
2021-03-12T15:52:30 | mmf.trainers.callbacks.logistics: progress: 5100/236000, train/vqa2/triple_logit_bce: 20.1549, train/vqa2/triple_logit_bce/avg: 20.1549, train/total_loss: 20.1549, train/total_loss/avg: 20.1549, max mem: 9993.0, experiment: run, epoch: 1, num_updates: 100, iterations: 100, max_updates: 236000, lr: 0.00001, ups: 0.27, time: 06m 17s 344ms, time_since_start: 06m 51s 100ms, eta: 1863h 06m 21s 108ms
2021-03-12T15:57:56 | mmf.trainers.callbacks.logistics: progress: 5200/236000, train/vqa2/triple_logit_bce: 20.1549, train/vqa2/triple_logit_bce/avg: 21.8194, train/total_loss: 20.1549, train/total_loss/avg: 21.8194, max mem: 9993.0, experiment: run, epoch: 2, num_updates: 200, iterations: 200, max_updates: 236000, lr: 0.00001, ups: 0.31, time: 05m 26s 277ms, time_since_start: 12m 17s 392ms, eta: 1610h 16m 12s 756ms

After every 1000 iterations it evaluates the model on the validation set of the COCO dataset:

2021-03-12T16:41:25 | mmf.trainers.callbacks.checkpoint: Checkpoint time. Saving a checkpoint.
2021-03-12T16:42:04 | mmf.trainers.callbacks.logistics: progress: 1000/236000, train/vqa2/triple_logit_bce: 20.1549, train/vqa2/triple_logit_bce/avg: 20.0570, train/total_loss: 20.1549, train/total_loss/avg: 20.0570, max mem: 9993.0, experiment: run, epoch: 2, num_updates: 1000, iterations: 1000, max_updates: 236000, lr: 0.00001, ups: 0.28, time: 05m 51s 002ms, time_since_start: 56m 23s 258ms, eta: 1726h 17m 24s 444ms
2021-03-12T16:42:05 | mmf.trainers.core.training_loop: Evaluation time. Running on full validation set…
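If it is relevant, my understanding is that this 1000-iteration cadence comes from the default evaluation/checkpoint intervals in the training config; the key names below are quoted from memory, so they are an assumption on my part:

# my understanding of the defaults that trigger the behaviour above
# (key names are an assumption on my part)
training.evaluation_interval=1000   # run validation every 1000 updates
training.checkpoint_interval=1000   # save a checkpoint every 1000 updates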

At this point, right after "Evaluation time. Running on full validation set…", the run seems to be stuck. I have waited for 10 hours but there is no progress. When I interrupt it (Ctrl+C), it gives me the following traceback:

^CTraceback (most recent call last):
  File "/home/anaconda3/envs/mmf/bin/mmf_run", line 33, in <module>
    sys.exit(load_entry_point('mmf', 'console_scripts', 'mmf_run')())
  File "/home/new-mmf/mmf-master/mmf_cli/run.py", line 118, in run
    nprocs=config.distributed.world_size,
  File "/home/anaconda3/envs/mmf/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/anaconda3/envs/mmf/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
    while not context.join():
  File "/home/anaconda3/envs/mmf/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 78, in join
    timeout=timeout,
  File "/home/anaconda3/envs/mmf/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
  File "/home/anaconda3/envs/mmf/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
KeyboardInterrupt
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/home/anaconda3/envs/mmf/lib/python3.6/multiprocessing/popen_fork.py", line 28, in poll
Exception ignored in: <bound method _MultiProcessingDataLoaderIter.__del__ of <torch.utils.data.dataloader._MultiProcessingDataLoaderIter object at 0x7f387d4815f8>>
Traceback (most recent call last):
  File "/home/anaconda3/envs/mmf/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1101, in __del__
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
  File "/home/anaconda3/envs/mmf/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1075, in _shutdown_workers
    w.join(timeout=_utils.MP_STATUS_CHECK_INTERVAL)
  File "/home/anaconda3/envs/mmf/lib/python3.6/multiprocessing/process.py", line 124, in join
    res = self._popen.wait(timeout)
  File "/home/anaconda3/envs/mmf/lib/python3.6/multiprocessing/popen_fork.py", line 47, in wait
    if not wait([self.sentinel], timeout):
  File "/home/anaconda3/envs/mmf/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
  File "/home/anaconda3/envs/mmf/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
KeyboardInterrupt:

Can you tell me what is causing the run to get stuck at this point? I have tried Python 3.6, 3.7 and 3.8, but nothing worked. I also checked GPU utilization with "nvidia-smi": most of the time it sits at 0%, and occasionally it shows something like 80% or 90%. Could you please help me figure out what is going wrong here?
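For reference, this is roughly how I watched the GPUs while the run was stuck, and how I could grab a stack dump of the hung process if that would help with diagnosis. py-spy is a third-party tool I would need to install separately, and <PID> is just a placeholder for the stuck process id:

# refresh GPU utilization and memory once per second while the run is "stuck"
watch -n 1 nvidia-smi

# optional: dump the Python stack of the hung trainer/worker process
# (py-spy is a separate tool, not part of MMF; <PID> is a placeholder)
pip install py-spy
py-spy dump --pid <PID>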

Output of python -m torch.utils.collect_env:
Collecting environment information…
PyTorch version: 1.6.0+cu101
Is debug build: No
CUDA used to build PyTorch: 10.1

OS: Ubuntu 16.04.7 LTS
GCC version: (Ubuntu 5.5.0-12ubuntu1~16.04) 5.5.0 20171010
CMake version: version 3.18.2

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: Tesla K80
GPU 1: Tesla K80

Nvidia driver version: 430.64
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.2
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.2
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7

Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] pytorch-lightning==1.1.6
[pip3] torch==1.6.0+cu101
[pip3] torchtext==0.5.0
[pip3] torchvision==0.7.0+cu101
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.0.130 0
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py36he8ac12f_0
[conda] mkl_fft 1.2.0 py36h23d657b_0
[conda] mkl_random 1.1.1 py36h0573a6f_0
[conda] numpy 1.19.2 py36h54aff64_0
[conda] numpy-base 1.19.2 py36hfa32c7d_0
[conda] pytorch-lightning 1.1.6 pypi_0 pypi
[conda] torch 1.4.0+cu100 pypi_0 pypi
[conda] torchtext 0.5.0 pypi_0 pypi
[conda] torchvision 0.7.0+cu101 pypi_0 pypi