Unbatched vs batched input with batch size = 1

Hello everyone,
I want to use an LSTM for gesture classification. I wonder if there is any difference between unbatched input and batched input with the batch size set to 1. I think the resulting neural network should behave exactly the same whether we feed it unbatched input or batched input with batch size = 1. Is this speculation correct? If not, what's the difference between them?

Thanks.

Both approaches will yield the same output values, just with a different shape (with vs. without a batch dimension), as seen e.g. in this small test:

import torch
import torch.nn as nn

lstm = nn.LSTM(2, 2)

# batched input: (seq_len=2, batch_size=1, input_size=2)
x = torch.randn(2, 1, 2)
out = lstm(x)
print(out)

# unbatched input: (seq_len=2, input_size=2)
out2 = lstm(x[:, 0])
print(out2)
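
If you want to confirm the values actually match instead of eyeballing the printed tensors, you could extend the test like this (a quick sanity check on top of the snippet above; note that unbatched inputs to RNN modules require PyTorch >= 1.11):

out_batched, (h, c) = lstm(x)
out_unbatched, (h2, c2) = lstm(x[:, 0])

# the outputs match once the batch dimension is indexed away
print(torch.allclose(out_batched[:, 0], out_unbatched))  # True
print(torch.allclose(h[:, 0], h2))                       # True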

Thanks for the answer. So in both cases we have used SGD, right?

Assuming you would calculate a loss, compute the corresponding gradients, and then update the parameters of the model, you could call this approach stochastic gradient descent (SGD).
In the current code snippet none of this is done (only the forward pass is shown), so I guess you meant a complete training loop.
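
For completeness, here is a minimal sketch of such a training loop with batch size 1. The classifier head, the number of classes, the dummy data, and all hyperparameters are made up for this example:

import torch
import torch.nn as nn

# hypothetical setup: a one-layer LSTM followed by a linear classifier head
lstm = nn.LSTM(input_size=2, hidden_size=2)
classifier = nn.Linear(2, 3)  # 3 gesture classes, assumed for the example
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    list(lstm.parameters()) + list(classifier.parameters()), lr=0.01
)

# dummy data: one sequence of shape (seq_len=5, batch_size=1, input_size=2)
x = torch.randn(5, 1, 2)
target = torch.tensor([1])  # class label for this single sample

for epoch in range(10):
    optimizer.zero_grad()
    out, (h_n, c_n) = lstm(x)          # forward pass through the LSTM
    logits = classifier(h_n[-1])       # classify the final hidden state
    loss = criterion(logits, target)   # loss for this single sample
    loss.backward()                    # backward pass: compute gradients
    optimizer.step()                   # SGD update with batch size 1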


Thank you for the clarification. Yes, I meant a complete training loop. Based on your answer and comment, it seems that both approaches (unbatched input and batch size = 1) amount to SGD. Is my understanding correct?