Why are BatchNorm1d and BatchNorm2d introduced as two separate APIs?

Hello, Gurus

Could anybody provide some clues as to why nn.BatchNorm1d and nn.BatchNorm2d were introduced as two separate APIs? Isn't BatchNorm2d a superset of BatchNorm1d?

I ran an experiment feeding a FloatTensor of shape (20, 100) to BatchNorm1d. It gives the same result as BatchNorm2d when I pass the same data, just reshaped to (20, 100, 1, 1):

import torch
from torch import nn, autograd

m = nn.BatchNorm1d(100, affine=False)
n = nn.BatchNorm2d(100, affine=False)
# use randn so the input is actually initialized
# (torch.FloatTensor(20, 100) is uninitialized memory)
input_m = autograd.Variable(torch.randn(20, 100))
input_n = input_m.view(20, 100, 1, 1)
output_m = m(input_m)
output_n = n(input_n).view(20, 100)

output_m is equal to output_n, so it seems BatchNorm2d alone would be enough. Why does torch introduce BatchNorm1d as well? Are there special concerns I missed?
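To double-check that the two outputs really agree, here is a small sketch of the comparison (assuming a recent PyTorch, where Variable is no longer needed; torch.allclose is used because exact bitwise equality is not guaranteed across the two kernels):

```python
import torch
from torch import nn

m = nn.BatchNorm1d(100, affine=False)
n = nn.BatchNorm2d(100, affine=False)

x = torch.randn(20, 100)
out1d = m(x)
out2d = n(x.view(20, 100, 1, 1)).view(20, 100)

# the two normalizations agree up to floating-point tolerance
print(torch.allclose(out1d, out2d, atol=1e-6))
```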

Another API design question: why is the num_features parameter mandatory? The current implementation does not accept any value other than C, the channel dimension of the input, so it looks redundant.
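One observation (a sketch, not an authoritative answer): num_features fixes the shapes of the module's parameters and running-statistics buffers, which are allocated in the constructor before any input has been seen, so the module cannot simply infer it from the first batch:

```python
import torch
from torch import nn

bn = nn.BatchNorm1d(100)  # num_features sizes the buffers/parameters up front
print(bn.running_mean.shape)  # torch.Size([100])
print(bn.weight.shape)        # torch.Size([100]) (the affine scale, gamma)

# an input whose channel dimension disagrees with num_features is rejected
try:
    bn(torch.randn(20, 50))
except RuntimeError as e:
    print("mismatch:", e)
```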