The error message sounds a bit odd, as it's raised while creating the internal weight tensors, whose shape is defined by hidden_size and input_size.
In any case, pass input_size as a single int and it should work.
You could flatten them or pass them through another linear layer to create a single feature.
However, I’m not sure what the best approach would be so let’s wait for some experts on this topic.
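To illustrate the two suggestions above, here is a minimal sketch (with made-up shapes) of flattening multi-dimensional per-timestep features, or projecting them through a linear layer, so that `input_size` is a single int:

```python
import torch
import torch.nn as nn

# Hypothetical shapes for illustration: each timestep carries a (10, 1) feature block.
seq_len, batch_size = 5, 4
x = torch.randn(seq_len, batch_size, 10, 1)

# Option 1: flatten the trailing dimensions so input_size is a single int (10 * 1 = 10).
x_flat = x.flatten(start_dim=2)  # shape: (seq_len, batch, 10)
lstm = nn.LSTM(input_size=10, hidden_size=8, bidirectional=True)
out, (h, c) = lstm(x_flat)
print(out.shape)  # (seq_len, batch, 2 * hidden_size) = (5, 4, 16)

# Option 2: project the flattened features to a single feature dimension first.
proj = nn.Linear(10, 6)
lstm2 = nn.LSTM(input_size=6, hidden_size=8, bidirectional=True)
out2, _ = lstm2(proj(x_flat))
print(out2.shape)  # (5, 4, 16)
```

Which option is better depends on the data; the linear projection adds learnable parameters, while flattening just reinterprets the existing features.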
I have a similar error and am not sure how to solve it:
Traceback (most recent call last):
  File "train.py", line 53, in <module>
    model = Autoencoder(trainseq=X_train[-1,:,:].shape, testseq=tensor_y.shape, hidden_dim=1, target_len=tensor_x.shape[1], batch_size=tensor_x.shape[1])
  File "/home/ubuntu/Desktop/Mol/models.py", line 113, in __init__
    self.encoder = Encoder(input_size=self.input_size, hidden_dim=self.hidden_dim, num_layers=self.num_layers, batch_size=self.batch_size)
  File "/home/ubuntu/Desktop/Mol/models.py", line 44, in __init__
    self.bi_lstm1 = nn.LSTM(self.input_size, self.hidden_dim, bidirectional=True)  # batch_first = True)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 590, in __init__
    super(LSTM, self).__init__('LSTM', *args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 87, in __init__
    w_ih = Parameter(torch.Tensor(gate_size, layer_input_size))
TypeError: new(): argument 'size' must be tuple of ints, but found element of type tuple at pos 2
It seems you are running into the same issue as the original author of this topic, so you should also pass the sizes as ints:
input_size = (10, 1)  # a tuple raises the error
hidden_dim = 10
model = nn.LSTM(input_size, hidden_dim, bidirectional=True)
> TypeError: empty(): argument 'size' must be tuple of ints, but found element of type tuple at pos 2

input_size = 10  # a single int
model = nn.LSTM(input_size, hidden_dim, bidirectional=True)  # works