ValueError: expected 2D or 3D input (got 1D input)

I am using nn.BatchNorm1d in my code and it shows me this error.

import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim, h_dim1, h_dim2, h_dim3, z_dim):
        super(VAE, self).__init__()
        self.x_dim = x_dim

        # encoder part
        self.fc1 = nn.Linear(x_dim, h_dim1)
        self.fc2 = nn.Linear(h_dim1, h_dim2)
        self.fc3 = nn.Linear(h_dim2, h_dim3)
        self.fc31 = nn.Linear(h_dim3, z_dim)  # mean head
        self.fc32 = nn.Linear(h_dim3, z_dim)  # log-variance head
        self.dropout = nn.Dropout(0.5)

        self.Encoder = nn.Sequential(
            self.fc1,
            nn.Dropout(0.5),
            nn.ReLU(),
            nn.BatchNorm1d(h_dim1),
            self.fc2,
            nn.Dropout(0.5),
            nn.ReLU(),
            nn.BatchNorm1d(h_dim2),
            self.fc3,
            nn.Dropout(0.5),
            nn.ReLU(),
            nn.BatchNorm1d(h_dim3)
        )

Hi,

What is the shape of your input tensor? According to the docs, nn.BatchNorm1d expects at minimum a 2D input tensor (batch_size x num_features). It says: “Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension)”. From your error message, I would assume you are missing a batch dimension, but I may be wrong. If you could provide an example of an input tensor, we could help you debug (if the missing batch dimension is not the problem).
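To make the shape requirement concrete, here is a minimal sketch (the feature size of 10 is just for illustration) showing that a plain 1D tensor triggers exactly this error, while the same data with a leading batch dimension passes through fine:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(10)   # 10 features, chosen just for illustration

x_1d = torch.rand(10)     # 1D tensor: no batch dimension
try:
    bn(x_1d)
except ValueError as e:
    print(e)              # expected 2D or 3D input (got 1D input)

x_2d = torch.rand(4, 10)  # (batch_size, num_features)
print(bn(x_2d).shape)     # torch.Size([4, 10])
```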

The shape is 1 by 3000

Where can I add that batch dimension?

OK, you need a batch size larger than 1, otherwise you cannot compute batch statistics! I tried your code with a batch size larger than 1 and everything worked fine.
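To see why a batch of one fails: in training mode, BatchNorm1d computes the mean and variance over the batch, which needs more than one value per channel. A small sketch of that behaviour (in eval mode the running statistics are used instead, so a single example is accepted):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(3000)

x = torch.rand(1, 3000)  # a single example: batch size of 1
try:
    bn(x)                # training mode needs >1 value per channel
except ValueError as e:
    print(e)

bn.eval()                # eval mode uses running statistics instead,
out = bn(x)              # so a batch of one is accepted here
```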

You can simply concatenate multiple examples with torch.cat, as in this small example:

    x = torch.rand(size=(1, 3000), dtype=torch.float32)
    y = torch.rand(size=(1, 3000), dtype=torch.float32)
    z = torch.cat((x, y), dim=0)  # shape: (2, 3000)

I do not know how you load your data, but if you are not using PyTorch’s torch.utils.data.Dataset and torch.utils.data.DataLoader classes, you might consider them: the DataLoader does the concatenation of multiple examples into a batched tensor for you (see tutorial).
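As a minimal sketch of that approach (the dataset of 100 random examples with 3000 features is hypothetical, standing in for however you load your data):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical data: 100 examples with 3000 features each
data = torch.rand(100, 3000)

dataset = TensorDataset(data)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for (batch,) in loader:
    print(batch.shape)  # torch.Size([32, 3000]) -- ready for BatchNorm1d
    break
```

Each batch then already has the (batch_size, num_features) shape that nn.BatchNorm1d expects.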
