I am well aware that a “normal” neural network should use normalized input data, so that no single variable has a bigger influence on the weights than the others.
But what if you have a Q-network, where your training data and test data can differ a lot and can change over time in a continuous problem?
My idea was to do a first run without normalizing the input data, compute the mean and variance of the inputs seen during that run, and then use those statistics to normalize the input data of my next run.
But what is the standard to do in this case?
Best regards Søren Koch
There is no standard as far as I know. What I usually do is this:
( this is from https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Online_algorithm )
import torch

class Normalizer:
    def __init__(self, num_inputs):
        self.n = torch.zeros(num_inputs)
        self.mean = torch.zeros(num_inputs)
        self.mean_diff = torch.zeros(num_inputs)
        self.var = torch.zeros(num_inputs)

    def observe(self, x):
        # online (Welford-style) update of the running mean and variance
        self.n += 1.
        last_mean = self.mean.clone()
        self.mean += (x - self.mean) / self.n
        self.mean_diff += (x - last_mean) * (x - self.mean)
        # clamp the variance away from zero to avoid dividing by ~0 early on
        self.var = torch.clamp(self.mean_diff / self.n, min=1e-2)

    def normalize(self, inputs):
        obs_std = torch.sqrt(self.var)
        return (inputs - self.mean) / obs_std
Then each time I get a new state, I just do:
new_state = normalizer.normalize(new_state)
new_state must be a plain tensor; if it is a Variable, use new_state.data.
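To make the whole recipe concrete, here is a minimal self-contained sketch: it repeats the Normalizer class from above and runs it in a toy loop, where torch.randn stands in for environment observations (the loop itself is illustrative, not from the thread).

```python
import torch

class Normalizer:
    def __init__(self, num_inputs):
        self.n = torch.zeros(num_inputs)
        self.mean = torch.zeros(num_inputs)
        self.mean_diff = torch.zeros(num_inputs)
        self.var = torch.zeros(num_inputs)

    def observe(self, x):
        # online update of the running mean and variance
        self.n += 1.
        last_mean = self.mean.clone()
        self.mean += (x - self.mean) / self.n
        self.mean_diff += (x - last_mean) * (x - self.mean)
        self.var = torch.clamp(self.mean_diff / self.n, min=1e-2)

    def normalize(self, inputs):
        return (inputs - self.mean) / torch.sqrt(self.var)

normalizer = Normalizer(num_inputs=4)
for _ in range(100):
    state = torch.randn(4)               # stand-in for an environment observation
    normalizer.observe(state)            # update the running statistics first
    state = normalizer.normalize(state)  # then normalize before feeding the network
```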
Thanks! By the way, what is the data type of your observations? I ask because I get an error since I am using a list.
Ah, I see. There is an inconsistency here, since the input of observe is a Variable while the input of normalize is a tensor. Let me correct it so that everything is a plain tensor (input and output).
Do we agree that this kind of “online” normalization is not injective? In the sense that two distinct inputs, observed and normalized at different times, may be mapped to the same output value.
Furthermore, this mapping / filtering / normalization is not guaranteed to be monotonic (especially in the beginning, when very little data has been observed).
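Concretely, the non-injectivity is easy to reproduce by hand: two different raw inputs, normalized at different times, can land on the same output. A small 1-D sketch (restating the Normalizer class from the thread; the numbers follow directly from the update formulas):

```python
import torch

class Normalizer:
    def __init__(self, num_inputs):
        self.n = torch.zeros(num_inputs)
        self.mean = torch.zeros(num_inputs)
        self.mean_diff = torch.zeros(num_inputs)
        self.var = torch.zeros(num_inputs)

    def observe(self, x):
        self.n += 1.
        last_mean = self.mean.clone()
        self.mean += (x - self.mean) / self.n
        self.mean_diff += (x - last_mean) * (x - self.mean)
        self.var = torch.clamp(self.mean_diff / self.n, min=1e-2)

    def normalize(self, inputs):
        return (inputs - self.mean) / torch.sqrt(self.var)

norm = Normalizer(1)
norm.observe(torch.tensor([0.]))          # n=1, mean=0, var clamped to 1e-2
y1 = norm.normalize(torch.tensor([1.]))   # (1 - 0) / 0.1 = 10
norm.observe(torch.tensor([2.]))          # n=2, mean=1, var=1
y2 = norm.normalize(torch.tensor([11.]))  # (11 - 1) / 1 = 10
# two distinct inputs (1 and 11) map to the same normalized value
```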
A bit of a late reply, but this kind of normalization made me struggle for quite some time: with it, my DQN on CartPole could not learn properly. So you definitely need to be very careful.
From now on I will either not normalize at all, or freeze the updates of the normalization parameters after some initial steps.
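Freezing the statistics after a warm-up phase could look like the sketch below. It keeps the same online update as the class above but stops observing after a threshold; the freeze_after parameter and the class name are my own, not from the thread.

```python
import torch

class FrozenAfterWarmupNormalizer:
    """Same online statistics as before, but frozen after `freeze_after` observations."""

    def __init__(self, num_inputs, freeze_after=10_000):
        self.freeze_after = freeze_after
        self.n = torch.zeros(num_inputs)
        self.mean = torch.zeros(num_inputs)
        self.mean_diff = torch.zeros(num_inputs)
        self.var = torch.ones(num_inputs)

    def observe(self, x):
        if float(self.n[0]) >= self.freeze_after:
            return  # statistics are frozen; normalize() keeps a fixed mapping
        self.n += 1.
        last_mean = self.mean.clone()
        self.mean += (x - self.mean) / self.n
        self.mean_diff += (x - last_mean) * (x - self.mean)
        self.var = torch.clamp(self.mean_diff / self.n, min=1e-2)

    def normalize(self, inputs):
        return (inputs - self.mean) / torch.sqrt(self.var)

norm = FrozenAfterWarmupNormalizer(1, freeze_after=5)
for i in range(10):
    norm.observe(torch.tensor([float(i)]))
# only the first 5 observations (0..4) were counted, so the mean stays at 2
```

Once frozen, the mapping is fixed, so it becomes injective and monotonic again, which sidesteps the problems discussed above.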