BatchNorm1d ValueError: expected 2D or 3D input (got 1D input)

Hi all!

I'm trying to add batch normalization to my neural network, but I keep getting the same error. I searched the internet and couldn't find a satisfying answer to my problem.

Here's my code:
import …

HIDDEN_LAYER = 32

class DeepQNetwork(nn.Module):

    def __init__(self, no_inputs, no_outputs):
        super(DeepQNetwork, self).__init__()

        self.lin1 = nn.Linear(no_inputs, HIDDEN_LAYER)
        self.lin2 = nn.Linear(HIDDEN_LAYER, HIDDEN_LAYER)
        self.lin3 = nn.Linear(HIDDEN_LAYER, no_outputs)

        self.bn = nn.BatchNorm1d(num_features=HIDDEN_LAYER)

    def forward(self, x):
        print('input = {}'.format(x))
        output = Variable(x)
        output = self.lin1(self.bn(output))
        output = F.relu(output)
        output = self.lin2(self.bn(output))
        output = F.relu(output)
        output = self.lin3(output)

        return F.relu(output)

Output of the code:

input = tensor([ 0.0011,  0.0148,  0.0056, -0.0481], device='cuda:0')

Traceback (most recent call last):
  File "/gymcartpole/venv/DeepQ.py", line 28, in forward
    output = self.lin1(self.bn(output))
  File "/gymcartpole/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/gymcartpole/venv/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 60, in forward
    self._check_input_dim(input)
  File "/gymcartpole/venv/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 169, in _check_input_dim
    .format(input.dim()))
ValueError: expected 2D or 3D input (got 1D input)

Please help me :slight_smile:

nn.BatchNorm1d expects an input of the shape [batch_size, channels] or [batch_size, channels, length].
Currently you are just passing a tensor with a single dimension to the layer.
If your data has 4 features, you should add the batch dimension using:

input = input.unsqueeze(0)

before passing it to your model.
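
For reference, here is a minimal sketch of that fix (the feature count of 4 just mirrors the printed tensor and is an assumption for illustration; bn.eval() is used only to sidestep the separate single-sample restriction of train mode):

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=4)
bn.eval()  # train mode needs more than 1 value per channel, which a batch of 1 doesn't provide

x = torch.randn(4)     # 1D tensor of shape [4]
# bn(x)                # ValueError: expected 2D or 3D input (got 1D input)

x = x.unsqueeze(0)     # shape [1, 4] = [batch_size, num_features]
out = bn(x)            # works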


Hi @ptrblck
I have the same problem in my inference step. I already set my model to model.eval() and added .unsqueeze(0), but the error still exists.
My inference code looks like this:

Discriminator = torch.load('disc.pth', map_location=torch.device('cpu'))
Discriminator.eval()
embededSeq = Embedding.EmbedOne('sample data')
embededSeq = torch.tensor(embededSeq).float()
embededSeq = embededSeq.unsqueeze(0)
score = Discriminator(embededSeq).detach().numpy()[0]
......

and my class looks like this:

class Discriminator(nn.Module):
    def __init__(self, sequenceLength):
        super(Discriminator,self).__init__()
        self.batchnorm1 = nn.BatchNorm1d(sequenceLength)
        self.batchnorm2 = nn.BatchNorm1d(2*sequenceLength)
        self.linear1 = nn.Linear(sequenceLength, 2*sequenceLength)
        self.conv2 = nn.Conv1d(1, 1,kernel_size=3, stride=1, padding=1)
        self.conv3 = nn.Conv1d(1, 1,kernel_size=3, stride=1, padding=1)
        self.linear4 = nn.Linear(2*sequenceLength, 1)
        self.relu = nn.ReLU(0.01)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        out = self.batchnorm1(x)
        out = self.linear1(out)
        out = self.relu(out)
        out = self.batchnorm2(out)
        out = out.unsqueeze(1)
        out = self.conv2(out)
        out = self.sigmoid(out)
        out = self.conv3(out)
        out = self.relu(out)
        out = out.squeeze()
        out = self.batchnorm2(out)
        out = self.linear4(out)
        out = self.sigmoid(out)
        return out

Moreover, when I added .unsqueeze(0) one more time to my inference input, like this:

embededSeq = embededSeq.unsqueeze(0).unsqueeze(0)

the error turned into "running_mean should contain 1 elements not 289", where 289 is my sequenceLength at runtime.
May I ask what’s wrong with my code?

Thank you

Your shapes are still incorrect, so don't just unsqueeze dimensions randomly; make sure the input has the shape [batch_size, channels, seq_len].
Here is an example of your error:

bn = nn.BatchNorm1d(289)

# works
x = torch.randn(2, 289, 1) # [batch_size=2, channels=289, seq_len=1]
out = bn(x)

# fails with your error
x = torch.randn(2, 1, 289) # [batch_size=2, channels=1, seq_len=289]
out = bn(x)
# RuntimeError: running_mean should contain 1 elements not 289

As you can see, the channel dimension of the batchnorm layer is set to 289, which does not correspond to the channel dimension of your input (289 is your sequence length, not the number of channels). This setup yields the same error you've posted.
Assuming your input has a shape of e.g. [batch_size=2, channels=16, seq_len=289], this would work:

bn = nn.BatchNorm1d(16)
x = torch.randn(2, 16, 289)
out = bn(x)
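
A minimal additional sketch: if the 289 values are instead meant as per-sample features rather than a sequence, nn.BatchNorm1d(289) also accepts a 2D input of shape [batch_size, num_features]:

bn = nn.BatchNorm1d(289)
x = torch.randn(2, 289)   # [batch_size=2, num_features=289], 2D input
out = bn(x)               # also works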