RuntimeError: bad allocation in LSTMAE

Hi,
when I run my LSTMAE model I get the error below. What is the problem?

Traceback (most recent call last):
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\multivariate anomaly detection for evnt logs\LSTAME.py", line 258, in
    train_loss = train(epoch, model, optimizer)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\multivariate anomaly detection for evnt logs\LSTAME.py", line 230, in train
    recon_data = model(batch_data)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\multivariate anomaly detection for evnt logs\models.py", line 302, in forward
    encoded_input, hidden = self.encoder(input)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\multivariate anomaly detection for evnt logs\models.py", line 267, in forward
    encoded_input, hidden = self.lstm(input, (h0, c0))
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\rnn.py", line 769, in forward
    result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
RuntimeError: bad allocation

This is the forward function:

def forward(self, input):
    encoded_input, hidden = self.encoder(input)
    decoded_output = self.decoder(encoded_input, hidden)
    return decoded_output

And this is the forward function in class EncoderLSTM:

def forward(self, input):
    tt = torch.cuda if self.isCuda else torch
    h0 = Variable(tt.FloatTensor(self.num_layers, input.size(0), self.hidden_size).zero_(), requires_grad=False)
    c0 = Variable(tt.FloatTensor(self.num_layers, input.size(0), self.hidden_size).zero_(), requires_grad=False)
    encoded_input, hidden = self.lstm(input, (h0, c0))
    return encoded_input, hidden
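For comparison, here is a minimal self-contained sketch of the same hidden-state initialization using `torch.zeros` instead of the deprecated `Variable(...FloatTensor...zero_())` pattern. The hyperparameter values (`num_layers`, `hidden_size`, etc.) are placeholders I picked for the example, not values from my actual model:

```python
import torch
import torch.nn as nn

# Placeholder hyperparameters (assumptions, not from the real model)
num_layers, hidden_size, input_size = 2, 16, 8
batch_size, seq_len = 4, 10

lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)

x = torch.randn(batch_size, seq_len, input_size)
# torch.zeros replaces the old Variable/FloatTensor idiom; creating the
# states with device=x.device handles CPU and GPU in one code path.
h0 = torch.zeros(num_layers, batch_size, hidden_size, device=x.device)
c0 = torch.zeros(num_layers, batch_size, hidden_size, device=x.device)

encoded, (hn, cn) = lstm(x, (h0, c0))
print(encoded.shape)  # torch.Size([4, 10, 16])
```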

Can anyone help?