Hey, I’m trying to use BatchNorm1d in a simple neural network consisting of several fully connected layers, with 1-dimensional input. My code is similar to the code in the tutorials, but for some reason it doesn’t run. I get the following error:
RuntimeError: running_mean should contain 1 elements not 32
Searching around for related terms hasn’t turned up much. When I try setting track_running_stats to False in the constructor, I get an error saying that track_running_stats isn’t a valid argument. I’m pretty confused and would appreciate some help.
Pictures of my code:
You should be looking at the docs for the version of pytorch that you have installed, not the docs for the development version.
If you have pytorch 0.3.1, then the docs are at http://pytorch.org/docs/0.3.1/
What shape is the input data?
Oh, thanks for pointing that out. I didn’t know documentation defaulted to the development version.
A single example of my input data is just a vector of 17 numerical features. I’m passing in a minibatch of 100 examples, so the final input to the network should be 100 x 17. The batchnorm layer takes 32 activations from the first layer, which I think is where the 32 in the error message is coming from.
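For what it’s worth, here is a minimal sketch of the setup described above (17 input features, 32 hidden units, batch of 100) with BatchNorm1d wired in so that its num_features matches the preceding layer’s output. The layer sizes and network layout are assumptions based on the numbers in this thread, not the original (unposted) code:

```python
import torch
import torch.nn as nn

# Hypothetical network mirroring the sizes mentioned above
net = nn.Sequential(
    nn.Linear(17, 32),
    nn.BatchNorm1d(32),  # num_features must equal the previous layer's output size
    nn.ReLU(),
    nn.Linear(32, 1),
)

x = torch.randn(100, 17)  # minibatch of 100 examples, 17 features each
out = net(x)
print(out.shape)  # torch.Size([100, 1])
```

If this runs but your code doesn’t, compare the shape of the tensor you actually feed in against (batch, features); the error usually means the input’s feature/channel dimension doesn’t match what BatchNorm1d was constructed with.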
I have the exact same problem. Did you find a solution? If so, could you please post how you solved it?
What is the size of the tensor going into batch normalization?
Maybe you are trying to use batch normalization without giving it a batch.
When you do not have a batch, freeze batch normalization using:
state = torch.from_numpy(state).float()
NetworkName.eval()  # sets the module in evaluation mode (batchnorm uses running stats, dropout disabled)
with torch.no_grad():  # no need to calculate gradients, we are not training (saves resources)
    action_values = NetworkName(state)
NetworkName.train()  # sets the module back to training mode
when you have a batch use:
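The example for the batched case seems to be missing from the post above; a minimal sketch of what I believe was meant would look like this (NetworkName stands in for whatever module you are using, as in the eval() example):

```python
import torch
import torch.nn as nn

# Stand-in for NetworkName; any module containing BatchNorm1d behaves the same way
net = nn.Sequential(nn.Linear(17, 32), nn.BatchNorm1d(32), nn.ReLU())

net.train()                   # training mode: batchnorm normalizes with batch statistics
batch = torch.randn(100, 17)  # a real minibatch, so the batch statistics are meaningful
out = net(batch)              # also updates running_mean / running_var as a side effect
```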
We had the same error message, but in our case it came from a mismatch between the dimensions the linear layer and the batch normalization layer operate on. As far as I understand from the documentation, if your batch has more than two dimensions and is processed by a linear layer, the features are in the last dimension. To pass it to the batch normalization layer, you can flatten the tensor and reshape it after batch normalization:
x = linear(x)
x = batchnorm1d(x.flatten(0,-2)).reshape(x.shape)
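To make the two lines above concrete, here is a self-contained version with made-up shapes (batch 4, sequence length 5, 8-then-16 features are purely illustrative):

```python
import torch
import torch.nn as nn

linear = nn.Linear(8, 16)
bn = nn.BatchNorm1d(16)   # normalizes over the feature (last) dimension

x = torch.randn(4, 5, 8)  # batch with an extra dimension: (batch, seq, features)
x = linear(x)             # -> (4, 5, 16), features stay in the last dimension
# BatchNorm1d expects (N, C): merge all leading dims, normalize, restore the shape
y = bn(x.flatten(0, -2)).reshape(x.shape)
print(y.shape)  # torch.Size([4, 5, 16])
```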
I know you stated that your batch only has two dimensions. But I came across this thread while searching for a solution to my problem and perhaps my answer is helpful to someone.