GPU MinMaxScaler, training and inference?

The sklearn MinMaxScaler works great. After it is fit() to the data, transform() can be called to normalize the data before passing it to my torch model for inference, and inverse_transform() after the inference call to get back to the original (unscaled) data range.
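For reference, a simplified version of my current CPU workflow (the data and model here are just placeholders):

```python
import numpy as np
import torch
from sklearn.preprocessing import MinMaxScaler

# placeholder data and model
data = np.random.rand(100, 3).astype(np.float32)
model = torch.nn.Linear(3, 3)

scaler = MinMaxScaler()
scaler.fit(data)  # learn per-feature min/max

x = torch.from_numpy(scaler.transform(data))  # scale to [0, 1] before the model
with torch.no_grad():
    out = model(x)

out_unscaled = scaler.inverse_transform(out.numpy())  # back to the original range
```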

This does not work on the GPU, however. What is the best way to use a MinMaxScaler backed by GPU calculations for fit(), transform(), and inverse_transform() without messing up training? I am using an Adam optimizer and am unsure whether or not the MinMaxScaling should be included in backward() when adjusting the weights.

You could re-implement the MinMaxScaler in pure PyTorch and use it as a transformation in your Dataset.
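Something along these lines should work; it's just a minimal sketch of the same min-max formula in plain tensor ops (the class name is made up, and constant features / division by zero are not handled):

```python
import torch

class TorchMinMaxScaler:
    """Minimal min-max scaler that keeps its statistics on the same device as the data."""

    def __init__(self, feature_range=(0.0, 1.0)):
        self.min_, self.max_ = feature_range
        self.data_min = None
        self.data_max = None

    def fit(self, x):
        # per-feature min/max over the batch dimension
        self.data_min = x.min(dim=0, keepdim=True).values
        self.data_max = x.max(dim=0, keepdim=True).values
        return self

    def transform(self, x):
        scale = (self.max_ - self.min_) / (self.data_max - self.data_min)
        return (x - self.data_min) * scale + self.min_

    def inverse_transform(self, x):
        scale = (self.data_max - self.data_min) / (self.max_ - self.min_)
        return (x - self.min_) * scale + self.data_min

    def fit_transform(self, x):
        return self.fit(x).transform(x)
```

Since everything is plain tensor arithmetic, fitting on a CUDA tensor keeps data_min/data_max on the GPU, and transform()/inverse_transform() then run on the GPU as well.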

I don’t understand this point, as the MinMaxScaler is used to normalize the input data and neither changes any weights nor is used in the backward pass.
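For illustration, a rough training-loop sketch (the model, data, and hyperparameters are placeholders, and a CUDA device is assumed) where the scaling only prepares the input data, while the backward pass only updates the model's parameters:

```python
import torch

model = torch.nn.Linear(3, 1).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

# placeholder data, already on the GPU
x = torch.rand(100, 3, device="cuda")
y = torch.rand(100, 1, device="cuda")

# "fit": per-feature min/max, stored as plain tensors (requires_grad=False)
data_min = x.min(dim=0, keepdim=True).values
data_max = x.max(dim=0, keepdim=True).values

x_scaled = (x - data_min) / (data_max - data_min)  # "transform" to [0, 1]

for _ in range(10):
    optimizer.zero_grad()
    out = model(x_scaled)       # the scaling only prepared the input data
    loss = criterion(out, y)
    loss.backward()             # only the model's parameters receive gradients
    optimizer.step()            # Adam never sees or updates the min/max stats
```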