What kind of attention mechanism is used in the Seq2Seq tutorial?

Hi, I’ve read the seq2seq translation tutorial and I still don’t know what kind of attention mechanism the author uses in it. I’m confused about whether it’s the Bahdanau way or the Luong way. Is there a better explanation of the code in the tutorial?

I find this Seq2Seq example more useful. As far as I can tell, it expands on the linked tutorial by using different attention mechanisms (Bahdanau & Luong).

The tutorial uses only the Bahdanau approach.
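For anyone else confused by the distinction: the two mechanisms differ mainly in how the attention score between the decoder state (query) and each encoder state (key) is computed. Here is a minimal sketch in PyTorch (not the tutorial's exact code, and the class/variable names are my own): Bahdanau attention is additive, combining query and keys through a learned feed-forward layer, while Luong's simplest variant is a plain dot product.

```python
import torch
import torch.nn as nn

class BahdanauScore(nn.Module):
    """Additive attention: score(q, k) = v^T tanh(W_q q + W_k k)."""
    def __init__(self, hidden_size):
        super().__init__()
        self.W_q = nn.Linear(hidden_size, hidden_size, bias=False)
        self.W_k = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, query, keys):
        # query: (batch, 1, hidden); keys: (batch, seq_len, hidden)
        scores = self.v(torch.tanh(self.W_q(query) + self.W_k(keys)))
        return scores.squeeze(-1)  # (batch, seq_len)

class LuongDotScore(nn.Module):
    """Multiplicative (dot-product) attention: score(q, k) = q . k."""
    def forward(self, query, keys):
        # query: (batch, 1, hidden); keys: (batch, seq_len, hidden)
        return torch.bmm(query, keys.transpose(1, 2)).squeeze(1)  # (batch, seq_len)

# Toy example: batch of 2, source length 5, hidden size 8
q = torch.randn(2, 1, 8)
k = torch.randn(2, 5, 8)
weights_b = torch.softmax(BahdanauScore(8)(q, k), dim=-1)
weights_l = torch.softmax(LuongDotScore()(q, k), dim=-1)
print(weights_b.shape, weights_l.shape)  # torch.Size([2, 5]) torch.Size([2, 5])
```

In both cases the scores are passed through a softmax to get attention weights over the source positions; the difference is only in the score function itself.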

Thanks for your support :slight_smile: