RNNs with inputs padded on the left

I have the following problem:

import torch
import torch.nn as nn

lstm = nn.LSTM(batch_first=True, input_size=15, hidden_size=30)
a = torch.zeros((2, 4, 15))  # (batch, seq_len, input_size)
output, _ = lstm(a)

In the code above, imagine that my input a is padded on the left, i.e. the padding timesteps come first and the real data comes last.
The LSTM's initial hidden state is zero, but because the padding is processed first, the hidden state that reaches the first real timestep is no longer zero. How do I make the hidden state at the first real timestep actually zero?
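
To make the issue concrete, here is a self-contained sketch; the seed, the 2-step-pad / 2-step-real split, and the shapes are purely illustrative:

import torch
import torch.nn as nn

torch.manual_seed(0)  # illustrative seed
lstm = nn.LSTM(batch_first=True, input_size=15, hidden_size=30)

# hypothetical left-padded batch: 2 padding steps, then 2 real steps
pad = torch.zeros(2, 2, 15)
real = torch.randn(2, 2, 15)
a = torch.cat([pad, real], dim=1)  # shape (2, 4, 15)

output, _ = lstm(a)

# output[:, 1] is the state after the last padding step, i.e. the state
# that feeds into the first real timestep -- it is (almost surely) not
# zero, because the LSTM's bias terms move the state away from zero at
# every step, even for all-zero inputs
print(output[:, 1].abs().max())  # prints a non-zero value

So even though the padding inputs are all zeros, by the time the real data starts the state is already contaminated by the padding steps.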
Note: I cannot pad on the right side; my architecture requires left padding.
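
To make the goal concrete, here is a rough sketch of the behavior I want, done the slow way by stepping the LSTM one timestep at a time and holding the state at zero through the padding (this continues from the snippet above; a constant pad_len across the whole batch is an assumption made purely for illustration, real batches would need a per-sequence mask). I'm hoping there is a cleaner or vectorized way to get the same effect:

pad_len = 2
h = torch.zeros(1, 2, 30)  # (num_layers, batch, hidden_size)
c = torch.zeros(1, 2, 30)
outs = []
for t in range(a.size(1)):
    out_t, (h, c) = lstm(a[:, t:t+1, :], (h, c))
    if t < pad_len:
        # still inside the padding: throw the state away so that the
        # first real timestep starts from an all-zero state
        h = torch.zeros_like(h)
        c = torch.zeros_like(c)
    outs.append(out_t)
output = torch.cat(outs, dim=1)  # (2, 4, 30)

This does give the first real timestep a genuinely zero initial state, but it serializes the time dimension, which is why I'd rather not do it this way.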