Seq2Seq tutorial

Hi,

I’m new to PyTorch and have been following the many tutorials available. In the seq2seq tutorial, the encoder’s forward() method iterates over all of the GRU’s layers. I’m somewhat confused by this: I thought the RNN constructor took the number of layers as a parameter, so I assumed you would not need to explicitly iterate over the RNN’s layers.

My question is: is this the proper way of coding a multi-layer RNN? If not, how should one proceed?

Thanks,
Lucas

(link to tutorial : http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html)

I think the number of layers is a parameter of the RNN. In this tutorial the loop just explicitly shows how a multi-layer RNN works; you can simply pass a layers parameter to the RNN constructor.

Ok cool! Will try that, thanks :slight_smile:

This is non-standard (you could call it a mistake), but it works well - essentially reusing a single RNN layer multiple times as a form of shared weights. If you use the n_layers parameter of an RNN each layer has its own weights, which is the normal way to do it.
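To make the difference concrete, here is a small sketch (not code from the tutorial itself) contrasting the standard `num_layers` approach with the tutorial's loop. The sizes are arbitrary; for the weight-sharing loop, `input_size` must equal `hidden_size` so the output can be fed back in.

```python
import torch
import torch.nn as nn

hidden_size = 8

# Standard approach: pass num_layers to the constructor.
# Each layer gets its own weights (8 parameter tensors total:
# weight_ih, weight_hh, bias_ih, bias_hh per layer).
gru = nn.GRU(input_size=hidden_size, hidden_size=hidden_size, num_layers=2)

x = torch.randn(5, 1, hidden_size)   # (seq_len, batch, features)
h0 = torch.zeros(2, 1, hidden_size)  # one initial hidden state per layer
out, hn = gru(x, h0)

# Tutorial-style loop: reuse one single-layer GRU n_layers times,
# which shares the same weights across every "layer".
shared = nn.GRU(input_size=hidden_size, hidden_size=hidden_size, num_layers=1)
h = torch.zeros(1, 1, hidden_size)
out_shared = x
for _ in range(2):
    out_shared, h = shared(out_shared, h)

print(out.shape)         # torch.Size([5, 1, 8])
print(out_shared.shape)  # torch.Size([5, 1, 8])
print(len(list(gru.parameters())), len(list(shared.parameters())))  # 8 4
```

Both produce outputs of the same shape, but the two-layer GRU holds twice as many parameter tensors as the reused single-layer one.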


I made a mistake, you are right, different layers have their own weights.