Libtorch problem: "To compact weights again call flatten_parameters()"

I load a model file in C++, and it runs fine on CPU. But when I run it on CUDA, it prints this warning and then crashes.

Warning: RNN module weights are not part of single contiguous chunk of memory. 
This means they need to be compacted at every call, possibly greatly increasing memory usage. 
To compact weights again call flatten_parameters(). (_cudnn_impl at ..\aten\src\ATen\native\cudnn\RNN.cpp:1249)

I don't have any clue how to solve this. Could you give me some advice?
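For reference, the warning itself points at the fix: `flatten_parameters()` is a method on PyTorch's RNN modules (`nn.RNN`, `nn.LSTM`, `nn.GRU`) that copies the weights into one contiguous chunk of memory so cuDNN can use them directly. A minimal Python-side sketch of what the warning is asking for (the module sizes here are made up for illustration; whether this helps before exporting the model for libtorch depends on how the model was saved):

```python
import torch
import torch.nn as nn

# Hypothetical example: after moving an RNN module to the GPU, its weight
# tensors can end up non-contiguous. flatten_parameters() compacts them
# into a single contiguous chunk, which is what the warning asks for.
lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=2, batch_first=True)
if torch.cuda.is_available():
    lstm = lstm.cuda()
lstm.flatten_parameters()  # compact weights before calling forward

x = torch.randn(4, 5, 8, device=next(lstm.parameters()).device)
out, (h, c) = lstm(x)
print(out.shape)  # (batch, seq_len, hidden_size) -> torch.Size([4, 5, 16])
```

If the model was traced or scripted before being moved to CUDA, calling `flatten_parameters()` (or moving the model to the GPU first) on the Python side and re-exporting may avoid the warning on the C++ side.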


I'm running into the same problem. Did you manage to solve it?