PyTorch issue: why is the embedding layer throwing an index error in the middle of training?

/pytorch/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [192,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
0
TITAN RTX
Memory Usage:
Allocated: 0.0 GB
Cached: 0.0 GB
Traceback (most recent call last):
  File "/users/sbhatta9/emnlp_code/train_ebrwt.py", line 331, in <module>
    train(args)
  File "/users/sbhatta9/emnlp_code/train_ebrwt.py", line 317, in train
    valid_bleu = run_eval_bleu(epoch, (rebatch(b, field_names) for b in valid_iter), model_par, field_names)
  File "/users/sbhatta9/emnlp_code/train_ebrwt.py", line 187, in run_eval_bleu
    hypo_scores = torch.stack([model(getattr(batch, field).cuda()) for field in field_names[1:1 + sample_count]]).view(sample_count, -1).t()
  File "/users/sbhatta9/emnlp_code/train_ebrwt.py", line 187, in <listcomp>
    hypo_scores = torch.stack([model(getattr(batch, field).cuda()) for field in field_names[1:1 + sample_count]]).view(sample_count, -1).t()
  File "/users/sbhatta9/sumanta/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/users/sbhatta9/emnlp_code/train_ebrwt.py", line 71, in forward
    _, sample_hidden = self.bert_model(hypo_sample)
  File "/users/sbhatta9/sumanta/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/users/sbhatta9/sumanta/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 845, in forward
    attention_mask=attention_mask, head_mask=head_mask)
  File "/users/sbhatta9/sumanta/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/users/sbhatta9/sumanta/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 707, in forward
    embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids)
  File "/users/sbhatta9/sumanta/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/users/sbhatta9/sumanta/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 255, in forward
    embeddings = words_embeddings + position_embeddings + token_type_embeddings
What does this error mean, and why does it show up in the middle of training?

Pasting your code here would give a better idea.
Most likely, the id of some token in a training (or validation) example is not in the vocab, so it indexes past the end of the embedding table. You may want to double-check the vocab creation.
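As a quick sanity check, here is a minimal sketch that scans a batch for ids the embedding table cannot hold (assuming the pytorch_transformers BertModel from your traceback; the helper name check_input_ids is illustrative):

    import torch
    from pytorch_transformers import BertModel

    model = BertModel.from_pretrained('bert-base-uncased')

    # The word embedding table only accepts ids in [0, vocab_size).
    vocab_size = model.embeddings.word_embeddings.num_embeddings

    def check_input_ids(input_ids):
        # Any id < 0 or >= vocab_size triggers the device-side assert
        # (srcIndex < srcSelectDimSize) from the first line of your log.
        bad = (input_ids < 0) | (input_ids >= vocab_size)
        if bad.any():
            print('out-of-range ids:', input_ids[bad].unique().tolist())

Running this on each batch before calling .cuda() should pinpoint the offending field. Note that the CUDA assert fires asynchronously, so the Python traceback may not point at the exact operation that failed; running on CPU (or with CUDA_LAUNCH_BLOCKING=1) gives a more precise error location.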