How to keep data preprocessing consistent between PyTorch model training and serving?

Given source tabular data, it's common to preprocess the data and then feed the preprocessed result into a PyTorch model. Typical preprocessing steps include one-hot encoding, bucketizing, standardization, normalization, hashing, and so on.

At the training stage, we can preprocess the data with the sklearn.preprocessing package, the Pandas API, or other solutions. For example, something like the following sketch (the column names, toy data, and model here are made up purely for illustration):
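
```python
import pandas as pd
import torch
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler, KBinsDiscretizer

# Hypothetical toy tabular data: one categorical and two numeric columns.
df = pd.DataFrame({
    "city": ["NY", "SF", "NY", "LA"],
    "age": [25, 32, 47, 51],
    "income": [48_000.0, 90_000.0, 65_000.0, 72_000.0],
})

# Fit the preprocessing (one-hot, standardize, bucketize) on the training data.
# sparse_threshold=0 forces a dense output array we can hand to torch.
preprocessor = ColumnTransformer(
    [
        ("onehot", OneHotEncoder(handle_unknown="ignore"), ["city"]),
        ("standardize", StandardScaler(), ["income"]),
        ("bucketize", KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="uniform"), ["age"]),
    ],
    sparse_threshold=0,
)
features = preprocessor.fit_transform(df)

# Feed the preprocessed result into the PyTorch model.
x = torch.tensor(features, dtype=torch.float32)
model = torch.nn.Sequential(torch.nn.Linear(x.shape[1], 1))
logits = model(x)
```

The fitted `preprocessor` holds learned state (categories, means, bin edges), which is exactly what needs to be reproduced at serving time.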
At the serving stage, how can we deploy the same preprocessing logic directly to production without rewriting the preprocessing code?