Why does the alignment score function in the seq2seq attention tutorial differ from those in papers?

I am learning about the attention mechanism.
https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
In the tutorial, the alignment scores are computed from the decoder's input embedding and its previous hidden state; the encoder outputs are only used afterwards, in the weighted sum.
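To show what I mean, here is a self-contained sketch of the scoring step as I understand it from the tutorial's `AttnDecoderRNN` (the sizes and standalone layout are my own, not the tutorial's exact code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden_size, max_length = 256, 10
attn = nn.Linear(hidden_size * 2, max_length)  # scores one weight per source position

embedded = torch.randn(1, hidden_size)          # embedded decoder input token
hidden = torch.randn(1, hidden_size)            # previous decoder hidden state
encoder_outputs = torch.randn(max_length, hidden_size)

# The weights depend only on the input embedding and the hidden state;
# the encoder outputs are not consulted when computing them.
attn_weights = F.softmax(attn(torch.cat((embedded, hidden), 1)), dim=1)  # (1, max_length)

# Encoder outputs enter only here, in the weighted sum (context vector).
attn_applied = torch.bmm(attn_weights.unsqueeze(0),
                         encoder_outputs.unsqueeze(0))  # (1, 1, hidden_size)
```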
However, the papers I have read about attention, such as Luong et al., "Effective Approaches to Attention-based Neural Machine Translation", compute the alignment score from the current target hidden state h_t and each source hidden state h_s. I do not understand why the tutorial departs from this formulation.
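For comparison, here is a minimal sketch of what I take the paper's "dot" variant, score(h_t, h_s) = h_t^T h_s, to mean (function name, shapes, and batching are my own assumptions):

```python
import torch

def luong_dot_attention(h_t, encoder_outputs):
    """h_t: (batch, hidden) current decoder state;
    encoder_outputs: (batch, src_len, hidden) source hidden states."""
    # score(h_t, h_s) = h_t . h_s for every source position s
    scores = torch.bmm(encoder_outputs, h_t.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    weights = torch.softmax(scores, dim=1)                            # alignment weights
    # Context vector: weighted sum of the source hidden states
    context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)  # (batch, hidden)
    return weights, context

weights, context = luong_dot_attention(torch.randn(2, 256), torch.randn(2, 7, 256))
```

So here the scores themselves compare the decoder state against every encoder state, whereas in the tutorial the encoder states play no role in the scoring.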