Hello,

the signature of BatchNorm1d() is torch.nn.BatchNorm1d(num_features, ...), where the docs describe num_features as "C from an expected input of size (N, C, L) or L from input of size (N, L)", and the expected input as "(N, C) or (N, C, L)".
Currently my input is a time series sampled at 2048 Hz, of which I take 1 s, i.e. I have 2048 numbers. I feed the network a tensor of size [1, 2048] and use torch.nn.BatchNorm1d(1).
Now, while the results are bad (though that's probably because my NN is still random/bad at the moment), I'm wondering whether I'm using it correctly.
Of course, I have 2048 features and not 1, but the docs say the input has to be of shape (N, C) with num_features equal to C, meaning that if I put in a tensor of shape [1, 2048] I actually have num_features=2048. That confuses me, because I set num_features=1. So the question arises: why do I need to set num_features at all if it can be deduced from the input shape?
What’s going on here?
Also, what do N, C, and L stand for to begin with?
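To make the two interpretations of my data concrete, here is a minimal shape-only sketch (the batch of 4 random windows is made up just for illustration):

```python
import torch

# Hypothetical batch of 4 one-second windows, each with 2048 samples.
x = torch.randn(4, 2048)

# Interpretation 1: each of the 2048 samples is its own feature.
# Input is (N, C) = (4, 2048), so num_features = 2048.
bn_features = torch.nn.BatchNorm1d(2048)
out_features = bn_features(x)
print(out_features.shape)  # torch.Size([4, 2048])

# Interpretation 2: one channel whose length is 2048 time steps.
# Add a channel dim so input is (N, C, L) = (4, 1, 2048), num_features = 1.
bn_channel = torch.nn.BatchNorm1d(1)
out_channel = bn_channel(x.unsqueeze(1))
print(out_channel.shape)  # torch.Size([4, 1, 2048])
```

Both variants run, but they normalize over different things, which is exactly what I'm unsure about with my [1, 2048] tensor and num_features=1.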