Expected more than 1 value in eval() mode

I have a 1D convolutional model with batch norm layers, and it doesn’t seem to like me predicting on a single sample, even when in eval mode. Predicting on more than 1 sample works fine, but I get the following error from predicting on a single sample:

ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 1024])

I’ve seen lots of posts about handling this error when not in eval mode, which makes sense, but I haven’t seen anything on handling this error when already in eval mode.

Here is a section of the model architecture showing one of the batch norm layers:

(conv6): Conv1d(256, 256, kernel_size=(3,), stride=(1,), padding=(1,))
(maxp6): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=2560, out_features=1024, bias=True)
(bn1): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(drop1): Dropout(p=0.5, inplace=False)

Any ideas?
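For reference, the failure can be reproduced with just a BatchNorm1d layer configured the same way as bn1 above (a minimal sketch, not the full model):

```python
import torch
import torch.nn as nn

# A single BatchNorm1d layer with track_running_stats=False,
# matching the bn1 configuration shown above.
bn = nn.BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True,
                    track_running_stats=False)
bn.eval()

try:
    bn(torch.randn(1, 1024))  # batch of one -> raises ValueError
except ValueError as e:
    print(e)  # Expected more than 1 value per channel when training, ...

out = bn(torch.randn(2, 1024))  # a batch of two or more works fine
print(out.shape)  # torch.Size([2, 1024])
```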

Since you are not using the running estimates (you set track_running_stats=False), the batch norm layer has to calculate the batch statistics even in eval() mode.
From the docs:

track_running_stats: a boolean value that when set to True, this
module tracks the running mean and variance, and when set to False,
this module does not track such statistics and always uses batch
statistics in both training and eval modes. Default: True

If you are using this mode, you would need to provide more than a single input value.
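To illustrate the difference, here is a minimal sketch: with track_running_stats=True (the default), the layer accumulates running_mean and running_var during training and uses those in eval(), so a single-sample batch normalizes without error.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(1024)  # track_running_stats=True by default

# Simulate a few training steps so the running estimates get updated.
bn.train()
for _ in range(10):
    bn(torch.randn(32, 1024))

# In eval() mode the layer now uses the stored running statistics,
# so even a batch of one works.
bn.eval()
out = bn(torch.randn(1, 1024))
print(out.shape)  # torch.Size([1, 1024])
```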

Interesting… Thanks for pointing this out. The issue that I’ve had with setting track_running_stats to True is that I get the “Unexpected key(s)” error when loading the state dictionary at a later time:

Unexpected key(s) in state_dict: “bn1.running_mean”, “bn1.running_var”, “bn1.num_batches_tracked”, “bn2.running_mean”, “bn2.running_var”, “bn2.num_batches_tracked”, “bn3.running_mean”, “bn3.running_var”, “bn3.num_batches_tracked”

I’ve seen reports of this before, and most people suggest either removing the keys (which I can do, but then I seem to be back in my current predicament) or using DataParallel. I don’t believe I’m using DataParallel when training/saving, though… I merely call torch.save(state, file_name). Any suggestions on how to best save and load the model with track_running_stats turned on? I’ve tried converting my model to DataParallel when loading, but that didn’t seem to resolve the key errors.
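To make the key mismatch concrete, here is a self-contained sketch (using a single BatchNorm1d layer as a stand-in for the full model): a state dict saved with track_running_stats=True carries the running_mean/running_var/num_batches_tracked buffers that a model built with track_running_stats=False does not expect.

```python
import torch
import torch.nn as nn

# Layer saved with running stats, loaded into one without them.
saved = nn.BatchNorm1d(8, track_running_stats=True)
loader = nn.BatchNorm1d(8, track_running_stats=False)

state = saved.state_dict()
print(sorted(state.keys()))
# ['bias', 'num_batches_tracked', 'running_mean', 'running_var', 'weight']

# strict loading (the default) raises:
# RuntimeError: ... Unexpected key(s) in state_dict: "running_mean", ...
try:
    loader.load_state_dict(state)
except RuntimeError as e:
    print(e)

# strict=False loads, but silently discards the running statistics,
# so it is usually the wrong fix for inference.
missing, unexpected = loader.load_state_dict(state, strict=False)
print(unexpected)
```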

Okay, I figured out why we weren’t able to load models with track_running_stats on. We had some saving/loading functions that weren’t handling this setting appropriately. They would save with the correct setting for track_running_stats, but would ignore this variable when loading and default to turning it off. Fixing this resolved the issue. Thanks again!
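For anyone hitting the same thing, one way to avoid the mismatch (a sketch under assumed names, not the poster’s actual code, with build_model standing in for the real architecture) is to store the flag alongside the weights and rebuild the model with the same setting before calling load_state_dict:

```python
import torch
import torch.nn as nn

def build_model(track_running_stats: bool) -> nn.Module:
    # Hypothetical stand-in for the real architecture.
    return nn.Sequential(
        nn.Linear(16, 1024),
        nn.BatchNorm1d(1024, track_running_stats=track_running_stats),
    )

model = build_model(track_running_stats=True)
torch.save(
    {"state_dict": model.state_dict(), "track_running_stats": True},
    "checkpoint.pt",
)

# Rebuild with the *saved* setting so the state-dict keys match exactly.
ckpt = torch.load("checkpoint.pt")
restored = build_model(ckpt["track_running_stats"])
restored.load_state_dict(ckpt["state_dict"])
```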