How to dynamically scale inputs of unknown magnitude?

I am building an RL agent that operates in a set of unknown environments.

The sensor inputs provided by these environments are generated dynamically, so it is very hard for me to tell in advance how large they are going to be. Some inputs might be in the range [0, 1], others in [10000, 100000000]. Some might have low variance, while others could have very high variance.

This makes it difficult to normalize the inputs.

Sampling from the environments is expensive, so it would be wasteful to wait for many sampling iterations before I know how to scale the inputs. Worse, some inputs might return extreme values only in special circumstances that are first encountered at runtime.

Is there any mechanism in PyTorch I can use as a preprocessor to normalize this completely unknown input data effectively? Preferably, it should not be too expensive to compute, because most of the inputs should already be sane; only a few have crazy scaling.
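For reference, the kind of preprocessing I could write myself is a Welford-style running normalizer like the sketch below (the class name and interface are my own invention, not anything from PyTorch). I'm hoping there is something built-in, or at least more robust than this, especially for the rare inputs with extreme scale:

```python
import torch


class RunningNormalizer:
    """Online per-feature standardization using Welford's running mean/variance."""

    def __init__(self, num_features: int, eps: float = 1e-8):
        self.count = 0
        self.mean = torch.zeros(num_features)
        self.m2 = torch.zeros(num_features)  # running sum of squared deviations
        self.eps = eps

    def update(self, x: torch.Tensor) -> None:
        # x: (num_features,) single observation; Welford's online update
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def normalize(self, x: torch.Tensor) -> torch.Tensor:
        var = self.m2 / max(self.count - 1, 1)
        return (x - self.mean) / torch.sqrt(var + self.eps)


if __name__ == "__main__":
    norm = RunningNormalizer(num_features=3)
    for _ in range(100):
        # toy observation with wildly different scales per feature
        obs = torch.tensor([0.5, 5e7, 42.0]) + torch.randn(3) * torch.tensor([0.1, 1e6, 3.0])
        norm.update(obs)
        scaled = norm.normalize(obs)
```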

There is also a second, related problem: some inputs are categorical strings, and the categories are not known in advance. Currently, I just hash each string into a fixed-size vector (roughly as in the sketch below), since this also works for values that haven't been seen before. I wonder if there is a better way to do it, or if PyTorch has a function that already does this, so I don't have to write the code myself, since it seems like a standard feature.
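For context, what I do now is essentially the hashing trick. A simplified version (the helper name and bucket count are arbitrary choices of mine, not a standard API):

```python
import hashlib

import torch


def hash_category(value: str, num_buckets: int = 64) -> torch.Tensor:
    """Map an arbitrary category string to a fixed-size one-hot vector via hashing."""
    # Use hashlib for a stable hash (Python's built-in hash() is salted per process)
    digest = hashlib.sha256(value.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % num_buckets
    vec = torch.zeros(num_buckets)
    vec[index] = 1.0
    return vec


if __name__ == "__main__":
    # Works even for categories never seen during training, at the cost of collisions
    print(hash_category("enemy_spotted").nonzero())
    print(hash_category("some_brand_new_category").nonzero())
```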