A question about splitting the training dataset during training

Suppose I have a training dataset of 50 million samples, and I would like to train on it with a given batch_size for a certain number of epochs.

Now suppose I split the dataset into 5 non-overlapping subsets of 10 million samples each and train my model like this: first I train the model on subset_1 with the same batch_size and number of epochs as mentioned before and get Model_1. Then I continue training Model_1 on subset_2 with the same batch_size and epochs, and repeat this loop until I get the final Model.
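To make it concrete, here is a rough sketch of what I mean (assuming PyTorch, with a toy placeholder model and dataset shrunk down so it runs quickly; it only illustrates the loop structure):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset, TensorDataset

# Toy stand-in for the real data (shrunk from 50M to 50k samples).
full_dataset = TensorDataset(torch.randn(50_000, 10), torch.randn(50_000, 1))
chunk = len(full_dataset) // 5
subsets = [Subset(full_dataset, range(i * chunk, (i + 1) * chunk)) for i in range(5)]

model = nn.Linear(10, 1)            # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
batch_size, num_epochs = 256, 3     # same batch_size/epochs for every subset

for subset in subsets:              # subset_1 -> Model_1, subset_2 -> Model_2, ...
    loader = DataLoader(subset, batch_size=batch_size, shuffle=True)
    for epoch in range(num_epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
# `model` after the last subset is what I call the final Model.
```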

Will this affect the accuracy of my final Model compared to training on the whole dataset in each epoch with the same batch_size?

Your workflow of splitting the data into non-overlapping parts and then training the model on each part sequentially would be equivalent to training the model on the entire dataset.
Of course you can't expect bitwise-identical values, since the pseudorandom number generator calls would differ, but besides that there shouldn't be any difference (assuming the data is properly shuffled and the same transformations are used).
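For example (a minimal sketch assuming PyTorch, with a placeholder dataset), one way to make sure each subset is a shuffled, non-overlapping slice of the full data is to create the splits with random_split and then use shuffle=True in each DataLoader:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Placeholder dataset standing in for the 50M-sample one.
full_dataset = TensorDataset(torch.randn(50_000, 10), torch.randn(50_000, 1))

# Shuffled, non-overlapping splits; a fixed generator makes the split reproducible.
subsets = random_split(
    full_dataset,
    [len(full_dataset) // 5] * 5,
    generator=torch.Generator().manual_seed(0),
)

# Each subset gets its own DataLoader with shuffle=True, so every pass over a
# subset also iterates the samples in a new random order.
loaders = [DataLoader(s, batch_size=256, shuffle=True) for s in subsets]
```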