How to use script annotation for a dropout layer?

To use script-annotation mode to convert a PyTorch model to C++, we replace class MyModule(torch.nn.Module): with class MyModule(torch.jit.ScriptModule):. But how should we deal with the dropout layer below?

class RNNDropout(nn.Dropout):
    """Dropout layer for the inputs of RNNs.

    Apply the same dropout mask to all the elements of the same sequence in
    a batch of sequences of size (batch, sequences_length, embedding_dim).
    """

    def forward(self, sequences_batch):
        """Apply dropout to the input batch of sequences.

        Args:
            sequences_batch: A batch of sequences of vectors that will serve
                as input to an RNN.
                Tensor of size (batch, sequences_length, embedding_dim).

        Returns:
            A new tensor on which dropout has been applied.
        """
        # Build one (batch, embedding_dim) mask and reuse it for every timestep.
        ones = sequences_batch.data.new_ones(sequences_batch.shape[0],
                                             sequences_batch.shape[-1])
        dropout_mask = nn.functional.dropout(ones, self.p, self.training,
                                             inplace=False)
        return dropout_mask.unsqueeze(1) * sequences_batch

I would define the RNNDropout class based on ScriptModule as well. p should be treated as a Python-defined constant (for example, by listing it in __constants__).
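A minimal sketch of that suggestion, assuming the ScriptModule-based rewrite looks roughly like this (the constructor and the use of __constants__ are my assumptions, not code from the thread):

```python
import torch
import torch.nn as nn

class RNNDropout(torch.jit.ScriptModule):
    # Sketch only: listing 'p' in __constants__ tells the TorchScript
    # compiler to treat it as a Python-defined constant.
    __constants__ = ['p']

    def __init__(self, p=0.5):
        super().__init__()
        self.p = p

    @torch.jit.script_method
    def forward(self, sequences_batch):
        # Same mask for every timestep: build it on a (batch, embedding_dim)
        # slice, then broadcast it over the sequence dimension.
        ones = torch.ones_like(sequences_batch[:, 0])
        dropout_mask = nn.functional.dropout(ones, self.p, self.training, False)
        return dropout_mask.unsqueeze(1) * sequences_batch
```

In eval mode the mask is all ones, so the module is an exact identity, which matches the original nn.Dropout behavior at test time.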

Best regards



Since we are not going to use the RNNDropout layer in the test/prediction phase, does this mean we do not need to convert RNNDropout into a ScriptModule when converting the PyTorch model to C++ for test/prediction? That is, can we keep the original class RNNDropout(nn.Dropout): in both the training and testing phases?

When tracing, this should work if you just don’t call it.
For scripting, you might need a backup plan.
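To illustrate the tracing case: torch.jit.trace records only the operations that actually execute, so if the dropout branch is skipped in eval mode, no dropout appears in the exported graph at all. A small sketch (TinyModel is a hypothetical stand-in, not code from the thread):

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.drop = nn.Dropout(p=0.5)   # stand-in for RNNDropout
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        if self.training:               # branch not taken in eval mode
            x = self.drop(x)
        return self.linear(x)

# Trace in eval mode: the dropout call never runs, so the resulting
# graph contains only the linear layer.
model = TinyModel().eval()
traced = torch.jit.trace(model, torch.randn(2, 4))
```

This is exactly why scripting needs a backup plan: scripting compiles both branches of the if, so the dropout code must itself be scriptable, whereas tracing simply never sees it.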