When I train a jit-scripted RNN I get the following warning:
Warning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters(). (_cudnn_impl at /pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1266)
If I try to suppress it with warnings.filterwarnings('ignore') or something similar, the warning still appears. Is this intended behaviour?
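For reference, here is a minimal sketch of the kind of setup I mean (the model, layer sizes, and input shapes are arbitrary, just for illustration):

```python
import warnings

import torch
import torch.nn as nn

# Attempted suppression; the cuDNN warning above is not affected by this.
warnings.filterwarnings('ignore')


class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.rnn = nn.LSTM(input_size=8, hidden_size=16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(x)
        return out


# Falls back to CPU if no GPU is present; the warning itself only
# shows up on the cuDNN (CUDA) path.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.jit.script(Model().to(device))

x = torch.randn(5, 3, 8, device=device)  # (seq_len, batch, input_size)
out = model(x)
print(out.shape)  # (seq_len, batch, hidden_size)
```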