Best way to continue the pre-training of a BERT model

Hello,

I am using the PyTorch version of the Hugging Face library's BERT model, and I want to continue pre-training the model on a domain-specific dataset before fine-tuning it.

What is the best way to do it?
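For reference, here is a minimal sketch of what I have in mind: continuing masked language modeling with the `Trainer` API. The checkpoint name, the file `domain_corpus.txt`, and the hyperparameters are just placeholders for my setup, so please correct me if this is not the recommended approach:

```python
import torch
from transformers import (
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load the checkpoint with the MLM head so its pre-trained weights carry over.
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Placeholder: domain-specific corpus, one document per line.
with open("domain_corpus.txt") as f:
    texts = [line.strip() for line in f if line.strip()]

# Returning the special-tokens mask lets the collator skip [CLS]/[SEP] when masking.
encodings = tokenizer(
    texts, truncation=True, max_length=128, return_special_tokens_mask=True
)

class DomainDataset(torch.utils.data.Dataset):
    """Wraps the tokenized corpus; padding is left to the data collator."""
    def __init__(self, encodings):
        self.encodings = encodings
    def __len__(self):
        return len(self.encodings["input_ids"])
    def __getitem__(self, idx):
        return {key: val[idx] for key, val in self.encodings.items()}

# Dynamic masking at batch time, with BERT's usual 15% masking probability.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

training_args = TrainingArguments(
    output_dir="bert-domain-adapted",  # placeholder output directory
    num_train_epochs=3,
    per_device_train_batch_size=16,
    save_steps=10_000,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=collator,
    train_dataset=DomainDataset(encodings),
)
trainer.train()
trainer.save_model("bert-domain-adapted")
```

My reasoning for starting from `BertForMaskedLM` is that it reuses the pre-trained MLM head instead of re-initializing it, and I have dropped the next-sentence-prediction objective entirely. Is that reasonable, or should I keep NSP / use a different setup?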

Any general tips about pre-training would also be much appreciated.