Fully connected layer in forward()

Hi, I am wondering whether it is possible to place self.fc = nn.Linear(hidden_size, x.shape[1]) inside forward(self, x). I use time-series cross-validation to split the dataset, which means the seq_len of the train_dataset changes from fold to fold.

import torch
import torch.nn as nn

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, p):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.dropout = nn.Dropout(p)
        self.lstm = nn.LSTM(input_size=input_size,
                            hidden_size=hidden_size,
                            num_layers=num_layers,
                            batch_first=True,
                            dropout=p)

    def forward(self, x):
        # the layer I would like to create here, since x.shape[1] (seq_len) changes per fold
        self.fc = nn.Linear(self.hidden_size, x.shape[1])
        h0 = torch.zeros(self.num_layers, x.shape[0], self.hidden_size)
        c0 = torch.zeros(self.num_layers, x.shape[0], self.hidden_size)
        output, (hn, cn) = self.lstm(x, (h0, c0))
        # feed the last layer's final hidden state into the linear layer
        x = self.fc(hn[self.num_layers - 1, :, :])
        return x

You could recreate the module in the forward method, but you would then be using a randomly initialized layer on every forward pass, which won't be trained, and that sounds wrong.
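As a quick illustration of that point, a layer constructed inside forward gets fresh random weights on every call, so the same input produces a different output each time (toy example, not from the original post):

import torch
import torch.nn as nn

class BadModel(nn.Module):
    def forward(self, x):
        # re-created (and re-randomized) on every forward pass,
        # so there is no persistent layer for the optimizer to train
        self.fc = nn.Linear(4, 2)
        return self.fc(x)

model = BadModel()
x = torch.randn(1, 4)
print(model(x))  # different result on each call for the same x,
print(model(x))  # because self.fc is replaced with fresh random weights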
A common approach to allow for variable input shapes is to use e.g. adaptive pooling layers, which produce a defined output shape regardless of the input length.
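For example, the LSTM outputs could be pooled to a fixed length before a linear layer that is created once in __init__. This is only a minimal sketch of that idea; pooled_len, out_size, and the class name are made-up illustration values, not part of the original post:

import torch
import torch.nn as nn

class LSTMWithPool(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, p, pooled_len=8, out_size=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=input_size,
                            hidden_size=hidden_size,
                            num_layers=num_layers,
                            batch_first=True,
                            dropout=p)
        # adaptive pooling squeezes any seq_len down to pooled_len,
        # so the linear layer can be defined once with fixed in_features
        self.pool = nn.AdaptiveAvgPool1d(pooled_len)
        self.fc = nn.Linear(hidden_size * pooled_len, out_size)

    def forward(self, x):
        output, (hn, cn) = self.lstm(x)     # output: (batch, seq_len, hidden_size)
        output = output.permute(0, 2, 1)    # -> (batch, hidden_size, seq_len)
        pooled = self.pool(output)          # -> (batch, hidden_size, pooled_len)
        return self.fc(pooled.flatten(1))   # same output shape for any seq_len

model = LSTMWithPool(input_size=5, hidden_size=16, num_layers=2, p=0.2)
print(model(torch.randn(4, 20, 5)).shape)  # seq_len = 20 -> torch.Size([4, 10])
print(model(torch.randn(4, 50, 5)).shape)  # seq_len = 50 -> torch.Size([4, 10])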

I figured out this problem with a different approach, but thanks for the reply anyway.