DropConnect in RNN results in a warning

I am trying to use DropConnect (weight dropout) in an RNN, but I get the following warning:

‘UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters()’

Here is the weight dropout code:

import torch.nn as nn
import torch.nn.functional as F


def _weight_drop(module, weights, dropout):
    """
    Apply DropConnect (weight dropout) to the given weights of a module.

    Thanks to pytorchnlp, here is the LICENSE:
    https://hub.fastgit.org/PetrochukM/PyTorch-NLP/blob/master/LICENSE
    """
    # Re-register each targeted weight as '<name>_raw' and remove the original
    # parameter, so a dropped-out copy can be written back before every forward.
    for name_w in weights:
        w = getattr(module, name_w)
        del module._parameters[name_w]
        module.register_parameter(name_w + '_raw', nn.Parameter(w))

    original_module_forward = module.forward

    def forward(*args, **kwargs):
        # Recreate each weight from its raw parameter with dropout applied,
        # then delegate to the module's original forward.
        for name_w in weights:
            raw_w = getattr(module, name_w + '_raw')
            w = F.dropout(raw_w, p=dropout, training=module.training)
            setattr(module, name_w, w)

        return original_module_forward(*args, **kwargs)

    setattr(module, 'forward', forward)
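
For a minimal reproduction (simplified, with made-up sizes rather than my real model), wrapping a plain nn.LSTM with the helper above is already enough to trigger the warning when running on GPU with cuDNN:

import torch
import torch.nn as nn

# wrap a plain LSTM with _weight_drop and run a forward pass
lstm = nn.LSTM(input_size=32, hidden_size=64, num_layers=2, batch_first=True)
_weight_drop(lstm, ["weight_hh_l0", "weight_hh_l1"], dropout=0.5)

x = torch.randn(8, 10, 32)
if torch.cuda.is_available():
    lstm, x = lstm.cuda(), x.cuda()

# every call re-assigns the dropped weights, so cuDNN warns that the weight
# buffer is no longer one contiguous chunk
out, (h, c) = lstm(x)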

And in my RNN module I apply it like this:

        if wdrop:
            # apply DropConnect to the hidden-to-hidden weights of every layer
            weights = ["weight_hh_l" + str(i) for i in range(self.n_layers)]
            if bidirectional:
                # also cover the reverse-direction weights
                weights += ["weight_hh_l" + str(i) + "_reverse"
                            for i in range(self.n_layers)]
            _weight_drop(self.rnn, weights, wdrop)

Note: the model runs fine (verified); the warning is just annoying. I have googled this problem and found flatten_parameters, but even after adding rnn.flatten_parameters() I still cannot get rid of the warning.
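
For reference, this is roughly where I tried calling it (a simplified sketch of my forward; the actual model differs slightly):

def forward(self, x, hidden=None):
    # try to re-compact the cuDNN weight buffer right before the RNN call;
    # the warning still shows up on the next call
    self.rnn.flatten_parameters()
    output, hidden = self.rnn(x, hidden)
    return output, hidden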

So my question is: why does this happen, and how can I eliminate it?