Seems like there is a mistake in the PyTorch tutorials

I found what looks like a mistake in the example program in the RL tutorial: there is no activation function such as ReLU after the fully connected layer. As a result, the episode durations get worse as training iterations go on…

(http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html#sphx-glr-intermediate-reinforcement-q-learning-py)

After the last fully connected layer, you generally do not put a ReLU. That layer outputs the Q-value estimates, which can be negative, so clamping them with ReLU would restrict what the network can represent.
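A minimal sketch of that pattern (not the tutorial's exact code; the layer sizes and names here are illustrative assumptions): ReLU follows the hidden layers, while the final layer returns raw Q-values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only; layer sizes and names are assumptions,
# not the tutorial's actual network.
class DQN(nn.Module):
    def __init__(self, n_observations=4, n_actions=2):
        super().__init__()
        self.fc1 = nn.Linear(n_observations, 128)
        self.fc2 = nn.Linear(128, 128)
        self.head = nn.Linear(128, n_actions)  # last fully connected layer

    def forward(self, x):
        x = F.relu(self.fc1(x))   # ReLU after hidden layers
        x = F.relu(self.fc2(x))
        return self.head(x)       # no ReLU here: Q-values may be negative
```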