Some questions about Maxpool

Hi, I want my layers to have different output sizes. When I use nn.MaxPool2d([2,1]), which should mean that the height of the layer's output is halved while the width stays the same, the output of this layer becomes NaN.
However, when I use nn.MaxPool2d([2,2]), the layer's output has normal values. Is there an error in the parameters I am passing to MaxPool2d?
Thanks in advance for any solution.
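To show what I expect the [2,1] kernel to do, here is a small pure-Python sketch (the maxpool2d helper below is only my own illustration, not PyTorch code): with the stride defaulting to the kernel size, the height is halved and the width is kept.

```python
def maxpool2d(x, kh, kw):
    """Naive 2D max pooling over a list-of-lists with kernel (kh, kw)
    and stride equal to the kernel size (nn.MaxPool2d's default)."""
    h, w = len(x), len(x[0])
    return [
        [
            max(x[i * kh + di][j * kw + dj]
                for di in range(kh)
                for dj in range(kw))
            for j in range(w // kw)
        ]
        for i in range(h // kh)
    ]

# A 4x3 input: a (2, 1) kernel halves the height and keeps the width.
x = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9],
     [10, 11, 12]]
print(maxpool2d(x, 2, 1))  # [[4, 5, 6], [10, 11, 12]]
```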

Could you post the input you are using so that we can try to reproduce this issue?

Thank you for your reply. Here is the structure of my network.

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, input_channel, output_channel):
        super(Net, self).__init__()

        self.convD0 = nn.Sequential(nn.Conv2d(input_channel, 64, 5, 1, 2),
                                    nn.ReLU(),
                                    nn.BatchNorm2d(64))

        self.convD1 = nn.Sequential(nn.Conv2d(64, 128, 3, 1, 1),
                                    nn.ReLU(),
                                    nn.BatchNorm2d(128),
                                    nn.MaxPool2d([2, 1]))  # halve H, keep W
        self.convD2 = nn.Sequential(nn.Conv2d(128, 256, 3, 1, 1),
                                    nn.ReLU(),
                                    nn.BatchNorm2d(256))
        self.convD3 = nn.Sequential(nn.Conv2d(256, 128, 5, 1, [0, 2]),
                                    nn.ReLU(),
                                    nn.BatchNorm2d(128))
        self.convD4 = nn.Sequential(nn.Conv2d(128, 64, 3, 1, [0, 1]),
                                    nn.ReLU(),
                                    nn.BatchNorm2d(64))
        self.convD5 = nn.Sequential(nn.Conv2d(64, 32, 5, 1, [0, 2]),
                                    nn.ReLU(),
                                    nn.BatchNorm2d(32))
        self.convD6 = nn.Sequential(nn.Conv2d(32, 16, 3, 1, 1),
                                    nn.ReLU(),
                                    nn.BatchNorm2d(16))
        self.convD7 = nn.Sequential(nn.Conv2d(16, output_channel, 3, 1, 1),
                                    nn.ReLU())

    def forward(self, inputs):
        x0 = inputs.unsqueeze(dim=1)  # add a channel dimension
        x1_d = self.convD0(x0)
        x2_d = self.convD1(x1_d)
        x3_d = self.convD2(x2_d)
        x4_d = self.convD3(x3_d)
        x5_d = self.convD4(x4_d)
        x6_d = self.convD5(x5_d)
        x7_d = self.convD6(x6_d)
        x8_d = self.convD7(x7_d)
        output = x8_d.squeeze()  # drop the channel dimension
        return output
The shape of the input is [batch_size, 140, 33], and its values range from 1.3280e-16 to 5.4188e-08, which are very small. The output shape is [batch_size, 60, 33], and the output takes discrete values ranging from 0 to 140.
To get that output shape, I use MaxPool2d([2,1]) after convD1 to halve the height, and then reach the final size with convD3, convD4 and convD5. However, the output of the MaxPool2d layer becomes NaN when I use [2,1] as the kernel size. To check the parameter, I tried [2,2] instead, which produces normal values like 0.39.
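For reference, here is how I worked out the shapes, as a small plain-Python sketch using the standard formula out = (in + 2*pad - kernel) // stride + 1 (the conv_out helper is only for illustration):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output size along one dimension of a conv/pool layer."""
    return (size + 2 * pad - kernel) // stride + 1

h, w = 140, 33                                       # input spatial size
h, w = conv_out(h, 5, 1, 2), conv_out(w, 5, 1, 2)    # convD0: 140 x 33
h, w = conv_out(h, 3, 1, 1), conv_out(w, 3, 1, 1)    # convD1 conv: 140 x 33
h, w = conv_out(h, 2, 2, 0), conv_out(w, 1, 1, 0)    # MaxPool2d([2, 1]): 70 x 33
h, w = conv_out(h, 3, 1, 1), conv_out(w, 3, 1, 1)    # convD2: 70 x 33
h, w = conv_out(h, 5, 1, 0), conv_out(w, 5, 1, 2)    # convD3: 66 x 33
h, w = conv_out(h, 3, 1, 0), conv_out(w, 3, 1, 1)    # convD4: 64 x 33
h, w = conv_out(h, 5, 1, 0), conv_out(w, 5, 1, 2)    # convD5: 60 x 33
print(h, w)  # 60 33
```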
Can you give me some advice about the structure of my network so that I get the desired output? Thank you.