Understanding of LSTM

Hi there,
I am trying to implement an LSTM network that takes a batch of noise as input and produces output of the same size. The input size is 1 (a single number) and there are 10 time steps, so both the input noise and the output should have shape (batch_size, 10, 1).
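
For concreteness, the tensors I have in mind look like this (just an illustration of the shapes, not part of the model code):

import torch

noise = torch.randn(4, 10, 1)  # (batch_size=4, time_steps=10, input_size=1)
# the desired output would have the same shape: (4, 10, 1)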

So I built my network as follows:

import torch
import torch.nn as nn

class GeneratorNet(torch.nn.Module):

    def __init__(self, hidden_dim=32, z_dim=1):
        super(GeneratorNet, self).__init__()

        self.rnn = nn.LSTM(
            input_size=z_dim,       # each time step carries a single number
            hidden_size=hidden_dim,
            num_layers=1,
            batch_first=True,       # tensors are (batch, seq, feature)
        )
        self.out = nn.Sequential(
            nn.Linear(hidden_dim, 10),
            nn.Sigmoid())

    def forward(self, z):
        # r_out: (batch, seq_len, hidden_dim); h_n / c_n: final hidden and cell states
        r_out, (h_n, c_n) = self.rnn(z, None)
        # feed only the last time step's hidden state to the output layer
        out = self.out(r_out[:, -1, :])
        return out
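
As a quick sanity check (a minimal sketch, assuming the default hidden_dim=32 and z_dim=1 above), note that the forward pass actually returns shape (batch, 10) rather than (batch, 10, 1), since the linear layer only sees the hidden state of the last time step:

net = GeneratorNet()
z = torch.randn(2, 10, 1)   # (batch=2, seq_len=10, input_size=1)
print(net(z).shape)         # torch.Size([2, 10])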

The problem is that different random inputs produce almost the same output. Here is the code I use to test it:

import matplotlib.pyplot as plt

generator = GeneratorNet()
for i in range(1, 5):
    # the noise has shape (1, 10, 1); batch_size is 1
    noise = torch.randn(size=(1, 10, 1))
    # the output has shape (1, 10): ten values from the last hidden state
    out = generator(noise)
    plt.plot(out.detach().numpy()[0])
plt.show()
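
To quantify "almost the same", here is a small sketch (my own check, reusing the generator above) that prints the maximum element-wise difference between each pair of outputs:

# collect a few outputs for different random inputs
outs = [generator(torch.randn(1, 10, 1)).detach() for _ in range(4)]
for i in range(len(outs)):
    for j in range(i + 1, len(outs)):
        # max absolute difference between run i and run j
        print(i, j, (outs[i] - outs[j]).abs().max().item())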

And here is the result I get:
[Screenshot: the four plotted output curves, which look nearly identical]
Can anyone explain why the outputs are so similar?