Teacher forcing during training

I am trying to do a seq2seq prediction. For this, I have an LSTM layer followed by a fully connected layer. I use teacher forcing during the training phase and would like to skip it (I may be wrong here) during the testing phase. I have not found a direct way of doing this, so I have taken the approach shown below.

def forward(self, inputs, future=0, teacher_force_ratio=0.2, target=None):
	outputs = []
	for idx in range(future):
		rnn_out, _ = self.rnn(inputs)
		output = self.fc1(rnn_out)
		outputs.append(output)
		if self.teacher_training and target is not None:
			# feed the ground truth with probability teacher_force_ratio,
			# otherwise feed the model's own prediction back in
			new_input = target[idx] if np.random.random() < teacher_force_ratio else output
		else:
			new_input = output
		inputs = new_input
	return torch.cat(outputs, dim=1)

I use a bool variable teacher_training to check whether teacher forcing is needed. Is this correct? If it is, is there a better way of doing it? Thanks.

Your approach is fine if you want to set the teacher_training flag independently of the model's train/eval mode.
Alternatively, you could use the internal self.training flag, which is changed by calling model.train() or model.eval().
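A minimal sketch of that alternative, assuming a toy model (the class name, layer sizes, and return shape here are made up for illustration): gating on self.training means teacher forcing is active after model.train() and automatically skipped after model.eval(), with no extra flag to manage.

```python
import torch
import torch.nn as nn

class DummySeq2Seq(nn.Module):
    # Hypothetical toy model; sizes are arbitrary.
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=4, hidden_size=8, batch_first=True)
        self.fc1 = nn.Linear(8, 4)

    def forward(self, inputs, future=0, teacher_force_ratio=0.2, target=None):
        outputs = []
        for idx in range(future):
            rnn_out, _ = self.rnn(inputs)
            output = self.fc1(rnn_out)
            outputs.append(output)
            # self.training is True after model.train() and False after
            # model.eval(), so no separate teacher_training flag is needed.
            if (self.training and target is not None
                    and torch.rand(1).item() < teacher_force_ratio):
                inputs = target[idx]
            else:
                inputs = output
        return torch.cat(outputs, dim=1)

model = DummySeq2Seq()
model.train()   # teacher-forcing path can fire (when target is given)
model.eval()    # inference mode: always feeds predictions back in
out = model(torch.randn(1, 1, 4), future=3)
```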