Does additional pre-training of BERT mean the same as fine-tuning it on MaskedLM?

I want to fine-tune BERT for Sequence Classification after additional pre-training on MaskedLM. Can I do the MaskedLM fine-tuning first and then the sequence-classification fine-tuning on the same model (roughly as in the sketch below)? Would that add two additional heads/layers on top of the pre-trained model?
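For context, this is roughly the two-step setup I have in mind (a minimal sketch assuming the Hugging Face transformers Trainer API; `my_unlabeled_dataset`, `my_labeled_dataset`, the output paths, and `num_labels=2` are placeholders, not code I actually have running):

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Step 1: additional pre-training with the masked-LM head on my own unlabeled text
mlm_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

mlm_trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="bert-domain-mlm", num_train_epochs=1),
    data_collator=collator,
    train_dataset=my_unlabeled_dataset,  # placeholder: tokenized unlabeled domain text
)
mlm_trainer.train()
mlm_trainer.save_model("bert-domain-mlm")

# Step 2: load the adapted encoder with a (new?) classification head and fine-tune on labels
clf_model = AutoModelForSequenceClassification.from_pretrained("bert-domain-mlm", num_labels=2)

clf_trainer = Trainer(
    model=clf_model,
    args=TrainingArguments(output_dir="bert-domain-clf", num_train_epochs=3),
    train_dataset=my_labeled_dataset,  # placeholder: tokenized labeled examples
)
clf_trainer.train()
```

Is this the right way to think about it, i.e. does step 1 count as "additional pre-training", and does step 2 just swap the MLM head for a classification head on top of the same encoder?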
I do not understand this clearly yet. I am new to this and have tried reading some blogs and forums, but I'm still not sure about the distinction.
I would be thankful for any clarification.