How do I realize something similar to Keras' TimeDistributedDense (https://github.com/fchollet/keras/issues/1029) in PyTorch?
Because of PyTorch's dynamic graph, you don't need TimeDistributedDense like in Keras: nn.Linear operates on the last dimension of its input, so you can apply it directly to a (batch, time, features) tensor. LSTM networks become very straightforward. See the tutorial on an LSTM for part-of-speech tagging: http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging
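For illustration, here is a minimal sketch (with made-up shapes) showing that a single nn.Linear already behaves like Keras' TimeDistributed(Dense), because it is applied over the last dimension and broadcast across all leading dimensions:

```python
import torch
import torch.nn as nn

# nn.Linear transforms the last dimension, so a (batch, time, features)
# tensor can be passed in directly -- weights are shared across time steps.
batch, time, features = 4, 10, 8
linear = nn.Linear(features, 3)

x = torch.randn(batch, time, features)
y = linear(x)
print(y.shape)  # torch.Size([4, 10, 3])
```

The same Linear weights are applied at every time step, which is exactly what TimeDistributed(Dense) does in Keras.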
# 24 time-distributed fully connected layers, one per time step
import torch
import torch.nn as nn

num = 24
fc = nn.ModuleList([nn.Linear(8, 1) for i in range(num)])

# forward pass: x has shape (batch, time, features) = (64, 24, 8)
x = torch.zeros(64, 24, 8)
outs = []
for i in range(x.shape[1]):
    # apply the i-th layer to the i-th time step, keeping the time dim
    outs.append(fc[i](x[:, i, :].unsqueeze(1)))
outs = torch.cat(outs, dim=1)  # shape (64, 24, 1)
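Note that the snippet above uses a separate Linear per time step. If instead the same weights should be shared across all time steps (what Keras' TimeDistributedDense does), one Linear suffices; a sketch of the classic flatten-and-reshape variant, with the same illustrative shapes:

```python
import torch
import torch.nn as nn

# Shared weights across time: merge batch and time dims, apply one
# Linear, then restore the time dimension.
fc = nn.Linear(8, 1)
x = torch.zeros(64, 24, 8)
out = fc(x.reshape(-1, 8)).reshape(64, 24, 1)
print(out.shape)  # torch.Size([64, 24, 1])
```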
Hi, could you explain this in more detail?