When I script a model, I get this warning: `RuntimeWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().` If I don't script the model and just run it, the warning does not appear. Is this a bug, and can it be solved?
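For context, here is a minimal sketch of the workaround the warning itself suggests. This is my own illustration, not code from the issue: the idea is that after a module is moved to a device or its weights are otherwise reallocated, the RNN parameters may no longer sit in one contiguous buffer, and calling `flatten_parameters()` re-compacts them before the forward pass.

```python
import torch
import torch.nn as nn

# A plain LSTM; the same applies to nn.GRU / nn.RNN.
model = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)

# After moving the model to a device or loading a state dict, the weight
# tensors may no longer live in one contiguous chunk of memory.
# flatten_parameters() re-compacts them, which silences the warning.
model.flatten_parameters()

x = torch.randn(3, 5, 10)  # (batch, seq, features)
out, (h, c) = model(x)
print(out.shape)           # (batch, seq, hidden_size)
```

Whether this helps inside a scripted model is part of my question, since `flatten_parameters()` may behave differently under TorchScript.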
I have the same problem as this GitHub issue when running the model in C++ on the GPU. On CPU it works fine.
My PyTorch version is 1.1, CUDA is 9.0, and cuDNN is 7.3.0. Can anyone help?