Error when changing the batch size

I trained a model with batch size 500. I saved the model's state and want to load it to continue training with batch size 100, but I get an error from BatchNorm1d.
Can I work around this error without losing the training progress I saved?

It is difficult for me to phrase the question precisely, but when people understand what I mean, I get good answers.
I cannot continue working on the network until I find the answer to this question.
Initially I trained the network with one feature per iteration, which took a very long time. Now I feed the network a batch of features, and it works well. But I need to test the network by feeding it one feature at a time. When I call load_state_dict and try to submit features one by one, I get an error: during training, nn.BatchNorm1d saw the full batch size, but during testing I submit a size of one. How do I solve this problem?

I used nn.BatchNorm1d and it gave me an error. But I expanded the input to 4D and used nn.BatchNorm2d, and the error went away.

Call model.eval() before passing single samples to the model.
This will make the batch norm layers use their running stats and should avoid this error.

PS: don’t forget to add a batch dimension even to a single sample.
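A minimal sketch of this workflow (the model, the feature size of 10, and the batch sizes are all illustrative, not from the original code):

```python
import torch
import torch.nn as nn

# Toy model with a BatchNorm1d layer; the sizes are made up for illustration.
model = nn.Sequential(nn.Linear(10, 20), nn.BatchNorm1d(20))

# Training mode: batch norm computes statistics from the current batch.
model.train()
out = model(torch.randn(500, 10))  # batch of 500 samples

# Evaluation mode: batch norm uses its running statistics instead,
# so a batch of a single sample works without errors.
model.eval()
single = torch.randn(10).unsqueeze(0)  # add the batch dimension -> [1, 10]
out = model(single)
print(out.shape)  # torch.Size([1, 20])
```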


I tried two options, but I don’t know if they behave the same way.

  1. `out = out.contiguous().view(1, batch, 1, wn1)` with `self.bn1 = nn.BatchNorm2d(batch, affine=True)`
  2. `out = out.contiguous().view(1, 1, batch, wn1)` with `self.bn1 = nn.BatchNorm2d(batch, affine=True)`

In the first case, BatchNorm2d takes batch channels and normalizes each example separately. In the second case, BatchNorm2d sees one channel and normalizes across all examples. The second case lets me change the batch size without problems. Training behaves the same in both cases.

The second example should fail, since the number of channels specified in bn1 (batch) does not match the number of channels in the input (1).
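A quick sketch of why this fails (the sizes are illustrative): BatchNorm2d normalizes over dim1, so its num_features must match the input's channel dimension.

```python
import torch
import torch.nn as nn

batch, wn1 = 100, 8  # illustrative sizes
bn1 = nn.BatchNorm2d(batch, affine=True)  # configured for `batch` channels

# The second reshape puts only 1 channel in dim1: shape [1, 1, batch, wn1]
out = torch.randn(1, 1, batch, wn1)
try:
    bn1(out)
except RuntimeError as err:
    # The channel dimension (1) does not match num_features (100)
    print("shape mismatch:", err)
```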

If you are dealing with a temporal signal in the shape [batch_size, channels, sequence_length], I would stick to nn.BatchNorm1d and just call model.eval() for the test/validation case.

Reshaping your data so that the batch size ends up in dim1 is most likely not what you want.
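For the temporal case, a minimal sketch (channel count and sequence length are assumptions, not from the original code): keep the data in [batch_size, channels, sequence_length] and rely on model.eval() for single-sample inference.

```python
import torch
import torch.nn as nn

channels, seq_len = 16, 50  # illustrative sizes
bn = nn.BatchNorm1d(channels)  # normalizes over dim1 (the channel dim)

bn.train()
_ = bn(torch.randn(500, channels, seq_len))  # large training batch

bn.eval()  # switch to the running statistics
out = bn(torch.randn(1, channels, seq_len))  # a single sample now works
print(out.shape)  # torch.Size([1, 16, 50])
```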

2. `out = out.contiguous().view(1, 1, batch, wn1)` with `self.bn1 = nn.BatchNorm2d(1, affine=True)`

That would be syntactically correct.
However, I still think you shouldn’t reshape the tensor to get the batch size in dim1 in your current implementation.

Are nn.BatchNorm1d and model.eval() not working for you?
If so, could you post the stack trace so that we can have a look?

Help me understand this solution. How can I add a batch dimension if I have one sample?
Do I need to use the batch size I trained the model with, or set the size to 1?

For a single sample you should set the batch size to 1 using data = data.unsqueeze(0).
