Autoencoder to generate non-image data

Hello! I’m trying to build an autoencoder to generate a synthetic dataset based on my real one. My dataset consists of two features that are related to each other. However, to keep consecutive generated samples consistent, I think something like a sliding-window approach is necessary so the AE can learn this dependency. The shape of my (windowed) numpy array is (892, 10), where 892 is the number of samples (windows) and 10 is the number of features per window (5 values from each of the two features).
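For reference, this is roughly how I build the windows (make_windows, raw_data, and step are placeholder names for this sketch, not my actual pipeline); with step=1, a raw series of 896 rows would give exactly the 892 windows:

import numpy as np

window_length = 5  # time steps per window, per feature
step = 1           # slide one sample at a time

def make_windows(raw, window_length, step=1):
    # raw: shape (n_samples, 2) -- the two related features as columns
    windows = []
    for start in range(0, len(raw) - window_length + 1, step):
        chunk = raw[start:start + window_length]   # (window_length, 2)
        windows.append(chunk.T.reshape(-1))        # flatten to 10 values: 5 per feature
    return np.stack(windows)                       # (n_windows, 10)

numpy_array = make_windows(raw_data, window_length)  # raw_data: my (N, 2) source array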

With this data processing done, I’m now ready to train the AE. I tried something like this:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

window_length = 5  # 5 time steps per feature, as described above
batch_size = 32    # example value
epochs = 50        # example value

train_dataset = TensorDataset(torch.from_numpy(numpy_array).double())
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

class Autoencoder(nn.Module):
    def __init__(self, feats, window_length):
        super().__init__()
        # Compress the 10 window values down to 4 and expand them back
        self.encoder = nn.LSTM(feats, 4, batch_first=False)
        self.decoder = nn.LSTM(4, feats, batch_first=False)

    def forward(self, x):
        encoded, _ = self.encoder(x)        # discard the (h_n, c_n) states
        decoded, _ = self.decoder(encoded)
        return decoded

model = Autoencoder(numpy_array.shape[1], window_length)
model = model.double()  # match the float64 training data
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters())

# Training loop
for epoch in range(epochs):
    model.train()
    train_loss = 0.0
    for inputs in train_loader:
        optimizer.zero_grad()
        inputs = [item.double() for item in inputs]
        outputs = model(inputs[0])
        loss = criterion(outputs, inputs[0])  # reconstruction loss against the input itself
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
    print(outputs)  # inspect the reconstructions from the last batch
    print(f'Epoch {epoch + 1}/{epochs}, Train Loss: {train_loss / len(train_loader)}')

torch.save(model.state_dict(), 'autoencoder_model.pth')

Output of print(outputs):

tensor([[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]],
       dtype=torch.float64, grad_fn=)
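Since the outputs are all NaN, the loss itself must be NaN too. A minimal check I could drop into the inner loop above to find the first bad batch (the gradient clipping is just a guess on my part, not something I’ve verified helps here):

for inputs in train_loader:
    optimizer.zero_grad()
    outputs = model(inputs[0].double())
    loss = criterion(outputs, inputs[0].double())
    if torch.isnan(loss):                  # stop at the first batch that produces NaN
        print(f'NaN loss in epoch {epoch}')
        break
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # cap gradient norm
    optimizer.step()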

Do you have any idea how to fix this? Or another approach for generating new data in PyTorch, preferably with an autoencoder?
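For context, my rough plan for the generation step once training works (just a sketch using the encoder/decoder above; the latent-noise idea and the 0.05 scale are my own assumptions):

model.eval()
with torch.no_grad():
    real = torch.from_numpy(numpy_array).double()      # all 892 real windows
    latent, _ = model.encoder(real)                    # 4-dim latent per window
    latent = latent + 0.05 * torch.randn_like(latent)  # small latent perturbation (scale is a guess)
    synthetic, _ = model.decoder(latent)               # decode back to 10 window values
synthetic_windows = synthetic.numpy()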