LSTM zeroing hidden state

Hello,
I have a question about zeroing the hidden state before each epoch.
Some code re-initializes the state with `None`, and other code does `self.h = [torch.zeros(2, bs, n_hidden) for _ in range(n_layers)]`.
Are the two equivalent?
I am asking mostly because the second one needs a fixed batch size.
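To make the question concrete, here is a minimal sketch of the two patterns I mean, using a plain `torch.nn.LSTM` (the sizes `n_hidden`, `n_layers`, `bs`, etc. are placeholders I picked, not from any particular codebase):

```python
import torch
import torch.nn as nn

# Placeholder sizes, adjust as needed.
n_hidden, n_layers, bs, seq_len, n_in = 64, 2, 32, 10, 16

lstm = nn.LSTM(n_in, n_hidden, n_layers, batch_first=True)
x = torch.randn(bs, seq_len, n_in)

# Pattern 1: pass hx=None -- PyTorch builds zero h/c states sized to the batch.
out_none, _ = lstm(x, None)

# Pattern 2: pass explicit zero tensors -- the shapes bake in the batch size bs.
h0 = torch.zeros(n_layers, bs, n_hidden)
c0 = torch.zeros(n_layers, bs, n_hidden)
out_zeros, _ = lstm(x, (h0, c0))

# Numerically identical: None just means "zeros of the right shape".
print(torch.equal(out_none, out_zeros))  # True

# But Pattern 2 breaks if a batch has a different size (e.g. a short last batch):
x_small = torch.randn(bs - 1, seq_len, n_in)
lstm(x_small, None)        # fine: zeros are re-created at the new batch size
# lstm(x_small, (h0, c0))  # RuntimeError: hidden state has the wrong batch size
```

So on a single forward pass they give the same result, if I am reading the docs right, and my question is whether there is any other reason to prefer the explicit-zeros version given the fixed-batch-size constraint.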