Can I configure a tokenizer based on a particular model configuration?

The goal: I have two pretrained models, a source model and a target model, both trained with masked language modeling on RoBERTa-base, the source model on source-domain data and the target model on target-domain data, for domain adaptation.
When I tokenize the source data with the source pretrained model, I want to use the tokenizer configured for that model.

I also want to initialize the target model with the source model's weights, so that the target model carries over some information from the source model, and likewise configure a tokenizer based on the target model's configuration.

from transformers import AutoConfig, AutoModel, AutoTokenizer

# Source checkpoint: config, weights, and the tokenizer saved alongside it
pretrained_model = 'roberta-pretrain-HON-24T_final'
pretrained_path = f'models/{pretrained_model}'
config = AutoConfig.from_pretrained(pretrained_path)  # the directory is enough; config.json is found inside
src_model = AutoModel.from_pretrained(pretrained_path, config=config)
src_encoder = AutoTokenizer.from_pretrained(pretrained_path)  # from_pretrained() takes a path, not a model object

# Target checkpoint: same loading pattern
pretrained_model = 'roberta-pretrain-ICWSM-85T_final'
pretrained_path = f'models/{pretrained_model}'
config = AutoConfig.from_pretrained(pretrained_path)
tgt_model = AutoModel.from_pretrained(pretrained_path, config=config)
tgt_encoder = AutoTokenizer.from_pretrained(pretrained_path)  # target tokenizer from the target checkpoint

# This is the part that fails: tokenizers are not nn.Modules and have no state_dict()
tgt_encoder.load_state_dict(src_encoder.state_dict())
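
If I understand the API correctly, the weights live on the models (which are torch nn.Modules), not on the tokenizers, so what I may actually need is something like the sketch below: copy the state between the two models and load each tokenizer from its own checkpoint directory. I am not sure this is right; the names and paths are the ones from above.

# Minimal sketch of what I think I need (assuming both checkpoints share the
# RoBERTa-base architecture, so their state_dicts are compatible):
tgt_model.load_state_dict(src_model.state_dict())  # target model starts from source weights
tgt_encoder = AutoTokenizer.from_pretrained('models/roberta-pretrain-ICWSM-85T_final')  # tokenizer is just reloaded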

I want to do the same for both the classifier and the encoder; I was able to do it for the classifier, but not for the encoder.
How can I do that?
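
For context, the classifier case works because both classifier heads are plain torch nn.Modules. A hypothetical sketch of what that looks like (this Classifier module and its dimensions are made up for illustration; my real heads differ):

import torch.nn as nn

# Hypothetical classifier head, for illustration only
class Classifier(nn.Module):
    def __init__(self, hidden_size=768, num_labels=2):
        super().__init__()
        self.linear = nn.Linear(hidden_size, num_labels)

    def forward(self, features):
        return self.linear(features)

src_classifier = Classifier()
tgt_classifier = Classifier()
tgt_classifier.load_state_dict(src_classifier.state_dict())  # works: nn.Module exposes state_dict()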