To convert a PyTorch model to C++ using script annotation mode, we replace
`class MyModule(torch.nn.Module):` with
`class MyModule(torch.jit.ScriptModule):`. But how should we deal with the dropout layer below?
```python
class RNNDropout(nn.Dropout):
    """
    Dropout layer for the inputs of RNNs.

    Apply the same dropout mask to all the elements of the same sequence in
    a batch of sequences of size (batch, sequences_length, embedding_dim).
    """

    def forward(self, sequences_batch):
        """
        Apply dropout to the input batch of sequences.

        Args:
            sequences_batch: A batch of sequences of vectors that will serve
                as input to an RNN. Tensor of size
                (batch, sequences_length, embedding_dim).

        Returns:
            A new tensor on which dropout has been applied.
        """
        ones = sequences_batch.data.new_ones(sequences_batch.shape[0],
                                             sequences_batch.shape[-1])
        dropout_mask = nn.functional.dropout(ones, self.p, self.training,
                                             inplace=False)
        return dropout_mask.unsqueeze(1) * sequences_batch
```
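One possible approach (a sketch, assuming a reasonably recent PyTorch): instead of subclassing `nn.Dropout`, subclass `torch.jit.ScriptModule` directly, keep the drop probability as a plain `float` attribute, and decorate `forward` with `@torch.jit.script_method`. The `.data.new_ones(...)` call is replaced with `torch.ones_like` on a slice, since accessing `.data` is not supported inside TorchScript; the dropout semantics are otherwise unchanged:

```python
import torch
import torch.nn as nn


class RNNDropout(torch.jit.ScriptModule):
    """Dropout that applies the same mask to every timestep of a sequence."""

    def __init__(self, p: float = 0.5):
        super().__init__()
        # Plain float attribute; TorchScript can read it in forward().
        self.p = p

    @torch.jit.script_method
    def forward(self, sequences_batch):
        # Build a (batch, embedding_dim) tensor of ones with the same dtype
        # and device as the input, then drop entries to form the mask.
        ones = torch.ones_like(sequences_batch[:, 0, :])
        dropout_mask = nn.functional.dropout(ones, self.p, self.training,
                                             inplace=False)
        # Broadcast the mask over the sequence-length dimension so the same
        # mask is applied at every timestep.
        return dropout_mask.unsqueeze(1) * sequences_batch
```

In eval mode the module is the identity, and in training mode the same columns are zeroed across all timesteps of a sequence, matching the original layer's behavior.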