Training a neural network after capturing more data

I apologize in advance if this question is trivial. I am not an expert in machine learning, but I use it occasionally.

My question is whether a neural network should be trained from scratch after new measurements are captured. Simply put, suppose a neural network is trained on a dataset and the trained model is saved. At a later time, when the dataset is enlarged with new measurements, can the saved model be used as the starting point for training on the enlarged dataset, or should the network be retrained from scratch with freshly initialized weights?

To me, using the pre-trained model makes more sense, since the data are of the same type; they just come from different measurements. A related question: can I train my model on one portion of my dataset, then on another portion, and so on?
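For concreteness, here is a minimal sketch of what I mean by "continuing from the saved model", assuming PyTorch. The model class `Net`, the checkpoint path `model.pt`, and the random tensors standing in for the enlarged dataset are all placeholders for my actual setup:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder architecture; substitute your own.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, x):
        return self.layers(x)

# Load the weights saved after the initial training run.
model = Net()
model.load_state_dict(torch.load("model.pt"))

# Stand-in for the enlarged dataset (old + new measurements).
X, y = torch.randn(1000, 16), torch.randn(1000, 1)
enhanced_loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Continue training from the saved weights rather than from scratch.
model.train()
for epoch in range(10):
    for xb, yb in enhanced_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```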

I feel training from scratch might be the safer option. Here’s my argument:

Say your new data is about 10% of the size of the original data, but "looks" very different (i.e., it comes from a different distribution). Now, if you continue training your pre-trained model on only the new data, it will start to fit only the new data and "un-learn" the old data (a phenomenon known as catastrophic forgetting).

Say you trained your initial model on the original data for 100 epochs; then I feel you should continue training on the 10% of new data for no more than about another 10 epochs. But this is just my crude intuition, and I'd be happy to hear from an expert. This is an interesting question.
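To make that intuition concrete, here is a rough PyTorch sketch of what I have in mind: fine-tune for a few epochs at a reduced learning rate, and mix the new data with a random "replay" sample of the old data so the model keeps seeing the original distribution. The checkpoint path, the replay ratio, and the learning rate are illustrative guesses, not tuned values:

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, Subset, TensorDataset

# Placeholder datasets; replace with your actual old and new measurements.
old_data = TensorDataset(torch.randn(1000, 16), torch.randn(1000, 1))
new_data = TensorDataset(torch.randn(100, 16), torch.randn(100, 1))

# Mix the new data with an equally sized random sample of the old data,
# so gradient updates are not driven by the new distribution alone.
replay_idx = torch.randperm(len(old_data))[:len(new_data)].tolist()
mixed = ConcatDataset([new_data, Subset(old_data, replay_idx)])
loader = DataLoader(mixed, batch_size=32, shuffle=True)

# Placeholder model; load the weights from the initial 100-epoch run.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
model.load_state_dict(torch.load("model.pt"))

# Lower learning rate than the initial training, to nudge rather than overwrite.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

model.train()
for epoch in range(10):  # only a few epochs, per the intuition above
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```

The replay mix is one simple hedge against forgetting; training on the full combined dataset, as in the sketch in the question, is the other obvious option when the old data is still available.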