Two optimizers in seq2seq translation tutorial

In http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html, there are two optimizers: encoder_optimizer and decoder_optimizer. What's the reason for that? I think one optimizer should be enough.

Yes, a single optimizer works too, and I believe the authors already answered this question on GitHub: https://github.com/spro/practical-pytorch/issues/34
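
For illustration, here is a minimal sketch of how you could merge the two optimizers into one. The GRU modules below are hypothetical stand-ins for the tutorial's EncoderRNN and AttnDecoderRNN; the point is just that an optimizer accepts any iterable of parameters, so you can chain both modules' parameters together or pass them as separate parameter groups:

```python
import itertools

import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-ins for the tutorial's EncoderRNN and AttnDecoderRNN.
encoder = nn.GRU(input_size=256, hidden_size=256)
decoder = nn.GRU(input_size=256, hidden_size=256)

# What the tutorial does: one optimizer per module.
encoder_optimizer = optim.SGD(encoder.parameters(), lr=0.01)
decoder_optimizer = optim.SGD(decoder.parameters(), lr=0.01)

# Equivalent single optimizer: chain both parameter iterators together.
optimizer = optim.SGD(
    itertools.chain(encoder.parameters(), decoder.parameters()), lr=0.01
)

# Or use parameter groups, which also allow per-module hyperparameters,
# e.g. a smaller learning rate for the decoder (values here are made up).
optimizer = optim.SGD(
    [
        {"params": encoder.parameters()},
        {"params": decoder.parameters(), "lr": 0.005},
    ],
    lr=0.01,
)
```

With the single optimizer, the training loop calls optimizer.zero_grad() and optimizer.step() once instead of twice. Keeping two optimizers just gives you independent control over each module (different learning rates, optimizer types, or schedulers), which is presumably why the tutorial separates them.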