Fine-Tuning a Network with a New Dataset

Hi,
I have trained a neural network on task A using one set of word embeddings, and I would like to re-use the model for task B with a different set of word embeddings.

After training on task A, I save both the model state dict and the optimizer state dict:

...
# file names here are just placeholders
torch.save(self.best_opt_wts, 'best_opt_wts.pt')
torch.save(self.best_model_wts, 'best_model_wts.pt')
...

Next, I want to reload the model to train for task B (transfer learning), with a new set of input embeddings.
I first set up the model with the new set of embeddings:

import torch
import torch.nn as nn

class RNN(nn.Module):
	def __init__(self, options):
		...
		# build the embedding layer from the task-B embedding matrix
		self.embedding = nn.Embedding.from_pretrained(torch.load(options['embeddings_loc']))
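
One thing I should mention: as far as I know, from_pretrained freezes the embedding weights by default (freeze=True), so if I also want the new task-B embeddings to be updated during fine-tuning, I believe it would be:

self.embedding = nn.Embedding.from_pretrained(
	torch.load(options['embeddings_loc']),
	freeze=False,  # allow the task-B embeddings to be updated during training
)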

Then later in my training script, I attempt to load the state dict into the model:

self.model.load_state_dict(torch.load(self.args.ft_model))

However, I get this error:

size mismatch for input_encoder.embedding.weight: copying a param with shape torch.Size([75323, 300]) from checkpoint, the shape in current model is torch.Size([16694, 300])

It looks like the encoder's embedding layer needs to be reshaped somehow before the checkpoint can be used with the new embeddings.
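
The only workaround I've come up with so far is to drop the mismatched embedding weight from the checkpoint and load the remaining parameters non-strictly. This is an untested sketch, with the key name taken from the error message above:

state = torch.load(self.args.ft_model)
# the task-A embedding matrix (75323 x 300) no longer matches the
# task-B vocabulary (16694 x 300), so drop it from the checkpoint
state.pop('input_encoder.embedding.weight', None)
# strict=False loads everything else and leaves the new embedding
# layer with the weights set in __init__
self.model.load_state_dict(state, strict=False)
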
What is the best way to do this while preserving the rest of the previous model?
Is this a valid strategy for fine-tuning a model trained on task A for task B?