How can I feed multiple signals into an LSTM GAN generator?

I read a paper in which the authors used a generator consisting of two LSTM layers (100 cells each), a dropout layer, and a fully connected layer to generate simultaneous signals (2-lead ECG). I am trying to replicate their approach and feed a 750×2 signal (random noise) into an LSTM generator, hoping to get out synthetic signals similar to the real ones I feed to the discriminator. However, I cannot understand how that would be possible without any upsampling layer. Moreover, I get the following error when I try to pass (750, 2) as the input_size of the LSTM layer.
My code is below:

class Generator(nn.Module):
    def __init__(self, seq_len, features):
        super(Generator, self).__init__()
        self.seq_len, self.features_signals = seq_len, features

        self.disc = nn.Sequential(

            nn.LSTM(input_size=(seq_len, features), hidden_size=100, num_layers=2, batch_first=True),
            nn.Dropout(0.5)

        )

    def forward(self, x):
        return self.disc(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
critic = Generator(750, 2).to(device)

The error:

TypeError                                 Traceback (most recent call last)
Input In [561], in <cell line: 2>()
      1 device = "cuda" if torch.cuda.is_available() else "cpu"
----> 2 critic = Generator(750,2).to(device)

Input In [560], in Generator.__init__(self, seq_len, features)
      3 super(Generator, self).__init__()
      4 self.seq_len, self.features_signals = seq_len, features
      7 self.disc = nn.Sequential(
      8
----> 9     nn.LSTM(input_size=(seq_len,features), hidden_size=100, num_layers=2, batch_first=True),
     10     nn.Dropout(0.5)
     11
     12 )

File Z:\1938759\envs\pytorch\lib\site-packages\torch\nn\modules\rnn.py:673, in LSTM.__init__(self, *args, **kwargs)
    672 def __init__(self, *args, **kwargs):
--> 673     super(LSTM, self).__init__('LSTM', *args, **kwargs)

File Z:\1938759\envs\pytorch\lib\site-packages\torch\nn\modules\rnn.py:89, in RNNBase.__init__(self, mode, input_size, hidden_size, num_layers, bias, batch_first, dropout, bidirectional, proj_size, device, dtype)
     86 real_hidden_size = proj_size if proj_size > 0 else hidden_size
     87 layer_input_size = input_size if layer == 0 else real_hidden_size * num_directions
---> 89 w_ih = Parameter(torch.empty((gate_size, layer_input_size), **factory_kwargs))
     90 w_hh = Parameter(torch.empty((gate_size, real_hidden_size), **factory_kwargs))
     91 b_ih = Parameter(torch.empty(gate_size, **factory_kwargs))

TypeError: empty(): argument 'size' must be tuple of ints, but found element of type tuple at pos 2

Initialize the nn.LSTM with an int value for the input_size argument; you are currently passing a tuple to it. input_size is the number of features per time step (2 in your case), not the full (seq_len, features) shape, and the sequence length is inferred from the input tensor itself.
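For reference, here is a minimal sketch of the corrected generator, assuming the noise tensor is shaped (batch, seq_len, features) = (batch, 750, 2) with batch_first=True. Note that nn.LSTM returns an (output, (h_n, c_n)) tuple, so it cannot be chained directly inside nn.Sequential; unpacking it in forward avoids that. The batch size of 16 is an illustrative assumption, and the nn.Linear projection back to 2 channels stands in for the fully connected layer the paper mentions.

import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, seq_len, features):
        super().__init__()
        self.seq_len, self.features = seq_len, features
        # input_size is the number of features per time step (an int),
        # not the full (seq_len, features) shape
        self.lstm = nn.LSTM(input_size=features, hidden_size=100,
                            num_layers=2, batch_first=True)
        self.dropout = nn.Dropout(0.5)
        # project each time step's 100 hidden units back to `features` channels
        self.fc = nn.Linear(100, features)

    def forward(self, x):
        # x: (batch, seq_len, features)
        out, _ = self.lstm(x)   # out: (batch, seq_len, 100)
        out = self.dropout(out)
        return self.fc(out)     # (batch, seq_len, features)

device = "cuda" if torch.cuda.is_available() else "cpu"
gen = Generator(750, 2).to(device)
noise = torch.randn(16, 750, 2, device=device)  # hypothetical batch of 16
fake = gen(noise)
print(fake.shape)  # torch.Size([16, 750, 2])

This also explains why no upsampling layer is needed: the LSTM emits one output vector per time step, so the sequence length of 750 is preserved end to end.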


Thank you, it worked! Please accept my apologies for replying so late!
