Recently I deployed a program using libtorch. The program runs as expected, but it gives me this warning:
Warning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
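For what it's worth, the warning itself points at the fix: the C++ API's RNN modules do expose `flatten_parameters()`, and calling it once after the model has been loaded or moved to its device re-compacts the weights. A minimal sketch (the LSTM sizes here are made up for illustration):

```cpp
#include <torch/torch.h>

int main() {
  // Hypothetical model: a two-layer LSTM. Moving a module to another
  // device after construction can leave its weights scattered across
  // non-contiguous memory, which triggers the warning on every call.
  torch::nn::LSTM lstm(
      torch::nn::LSTMOptions(/*input_size=*/64, /*hidden_size=*/128)
          .num_layers(2));

  torch::Device device =
      torch::cuda::is_available() ? torch::kCUDA : torch::kCPU;
  lstm->to(device);

  // Re-compact the weights into one contiguous chunk. Call this once
  // after loading/moving the model, before running forward passes.
  lstm->flatten_parameters();

  auto input = torch::randn({10, 1, 64}, device);  // seq, batch, features
  auto output = lstm->forward(input);
  return 0;
}
```

If the warning still appears after this, it usually means the weights are being moved or replaced again somewhere between the `flatten_parameters()` call and the forward pass.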
Can we please get an answer to this? This message is flooding my log files, and it is not even a real warning, just a heads-up about an upcoming change:
Warning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (_interp_output_size at /pytorch/caffe2/../torch/csrc/api/include/torch/nn/functional/upsampling.h:60)
To make it worse, there is no recompute_scale_factor parameter in UpsampleOptions, so I can't disable it through configuration when using the Upsample layer. My hands are really tied here! Thanks in advance.
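One possible workaround while UpsampleOptions lacks that knob: skip the Upsample module and call `torch::nn::functional::interpolate` with an explicit `size` instead of a float `scale_factor`. The warning only fires on the scale_factor code path, so computing the output size yourself sidesteps it regardless of libtorch version. A sketch (the input shape and scale here are just examples):

```cpp
#include <torch/torch.h>
#include <vector>

namespace F = torch::nn::functional;

int main() {
  auto x = torch::randn({1, 3, 50, 50});  // NCHW example input

  // Compute the target spatial size by hand instead of passing a float
  // scale_factor; this avoids the deprecation-style warning entirely.
  double scale = 2.0;
  std::vector<int64_t> out_size = {
      static_cast<int64_t>(x.size(2) * scale),
      static_cast<int64_t>(x.size(3) * scale)};

  auto y = F::interpolate(x, F::InterpolateFuncOptions()
                                 .size(out_size)
                                 .mode(torch::kBilinear)
                                 .align_corners(false));
  // y now has shape {1, 3, 100, 100}
  return 0;
}
```

Also worth checking: in more recent libtorch releases, `InterpolateFuncOptions` does grow a `recompute_scale_factor` setter (mirroring the Python API), so if your version has it you can set it explicitly and keep using scale_factor. I haven't verified which exact release added it, so treat that as an assumption to confirm against your headers.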