IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) when using batchNorm

I get the error above when I use the custom batchNorm, exactly at this line.
When I check the size of the input in my model, I get torch.Size([0]) during training and torch.Size([1000, 3, 32, 32]) at test time. Note that the model trains with no errors and gives good accuracy without batchNorm, or with torch's built-in BatchNorm. Can you please explain how the input size can be 0, and how I can make it work with the custom batchNorm? Thank you.

class Net(nn.Module):

    def __init__(self, conv1_dim=100, conv2_dim=150, conv3_dim=250, conv4_dim=500):
        super(Net, self).__init__()
        self.conv4_dim = conv4_dim

        self.conv1 = nn.Conv2d(3, conv1_dim, 5, stride=1, padding=2)
        self.conv2 = nn.Conv2d(conv1_dim, conv2_dim, 3, stride=1, padding=2)
        self.conv3 = nn.Conv2d(conv2_dim, conv3_dim, 3, stride=1, padding=2)
        self.conv4 = nn.Conv2d(conv3_dim, conv4_dim, 3, stride=1, padding=2)

        self.pool = nn.MaxPool2d(2, 2)

        self.fc1 = nn.Linear(conv4_dim * 3 * 3, 270) # 3x3 is precalculated and hard-coded; recompute it if you change the number of filters
        self.fc2 = nn.Linear(270, 150)
        self.fc3 = nn.Linear(150, 10)

        self.normalize1 = mbn.MyBatchNorm2d(conv1_dim)
        self.normalize2 = mbn.MyBatchNorm2d(conv2_dim)
        self.normalize3 = mbn.MyBatchNorm2d(conv3_dim)
        self.normalize4 = mbn.MyBatchNorm2d(conv4_dim)

    def forward(self, x):
        x = self.pool(F.relu(self.normalize1((self.conv1(x))))) # conv, then batch norm, then ReLU, then max pool
        x = self.pool(F.relu(self.normalize2((self.conv2(x)))))
        x = self.pool(F.relu(self.normalize3((self.conv3(x)))))
        x = self.pool(F.relu(self.normalize4((self.conv4(x)))))

        x = x.view(-1, self.conv4_dim * 3 * 3) # flattening the features
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)

        return x

For the loss I tested with both nll_loss and cross_entropy.

This line of code checks the size of the channel dimension (`x.size(1)`), while your input seems to be empty: a tensor of shape `torch.Size([0])` is 1-dimensional, so it has no dimension 1 to query, which raises the IndexError.
Make sure to pass valid 4D `(N, C, H, W)` inputs to the method, as empty inputs are not supported in this manual batch norm version.
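As a minimal sketch of what's going on (the `check_input` helper is hypothetical, not part of your `MyBatchNorm2d`): calling `.size(1)` on an empty 1-D tensor reproduces the exact error, and a small guard at the top of the custom module's `forward` would surface the real problem instead.

```python
import torch

def check_input(x):
    # Hypothetical guard you could add at the start of MyBatchNorm2d.forward:
    # fail loudly on non-4D or empty inputs before touching x.size(1).
    if x.dim() != 4:
        raise ValueError(f"expected a 4D (N, C, H, W) input, got shape {tuple(x.shape)}")
    if x.numel() == 0:
        raise ValueError("empty input is not supported by this manual batch norm")
    return x

# A tensor of shape torch.Size([0]) is 1-D, so its valid dims are [-1, 0]
# and x.size(1) raises exactly the IndexError you are seeing.
empty = torch.empty(0)
try:
    empty.size(1)
except IndexError as e:
    print("IndexError:", e)

# Your test-time batch has the expected 4D shape and passes the guard.
valid = torch.randn(1000, 3, 32, 32)
check_input(valid)
print(valid.size(1))  # the channel dimension: 3
```

Since the empty tensor only appears during training, I would also check the training `DataLoader` (batch size, dataset length, any slicing) to find where a zero-element batch is being produced.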