How do I disable libtorch warnings?

Recently I deployed a program using libtorch. The program runs as expected, but it gives me a warning.

Warning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().

How do I disable the warning?
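For context, the warning comes from a recurrent module whose weights end up non-contiguous after the model is moved or reloaded. A minimal sketch of the usual fix (compacting the weights with flatten_parameters(), which the C++ frontend exposes on the RNN modules) is below; the sizes and names are placeholders, and I'd still like a general way to silence warnings:

```cpp
#include <torch/torch.h>

int main() {
  // Placeholder 2-layer LSTM; the warning typically appears once the weights
  // are no longer one contiguous cuDNN buffer (e.g. after to() or load()).
  torch::nn::LSTM lstm(
      torch::nn::LSTMOptions(/*input_size=*/64, /*hidden_size=*/128)
          .num_layers(2)
          .batch_first(true));

  torch::Device device(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);
  lstm->to(device);

  // Re-compact the weights after moving them; this is what the warning asks for.
  lstm->flatten_parameters();

  auto input = torch::randn({/*batch=*/8, /*seq_len=*/16, /*features=*/64}, device);
  auto output = lstm->forward(input);  // no warning expected now
  return 0;
}
```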

Can we please get an answer to this? I’m getting this message flooding my log files, which is not even a real warning, but just a heads-up about an upcoming change:

Warning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.  (_interp_output_size at /pytorch/caffe2/../torch/csrc/api/include/torch/nn/functional/upsampling.h:60)

To make matters worse, there is no recompute_scale_factor parameter in UpsampleOptions, so I can't disable it through configuration when using the Upsample layer. My hands are really tied here! Thanks in advance.
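The only workaround I've found so far is to bypass the float scale_factor path entirely and pass an explicit output size through the functional interface, roughly like the sketch below (this assumes torch::nn::functional::interpolate and InterpolateFuncOptions from a recent libtorch; treat the option names as my reading of the headers rather than gospel):

```cpp
#include <torch/torch.h>

#include <vector>

namespace F = torch::nn::functional;

// Upsample an NCHW tensor by 2x with an explicit target size, so the
// computed-output-size / scale_factor code path (and its warning) is skipped.
torch::Tensor upsample2x(const torch::Tensor& input) {
  const int64_t h = input.size(2);
  const int64_t w = input.size(3);
  return F::interpolate(
      input,
      F::InterpolateFuncOptions()
          .size(std::vector<int64_t>{h * 2, w * 2})
          .mode(torch::kNearest));
}
```

It is not a real fix for the warning itself, but at least it keeps the logs quiet for the upsampling layers.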

Could you pass the scale_factor as an integer or are you relying on the float value?

CC @yf225, the docs seem to be missing the recompute_scale_factor argument from the last PR (maybe I'm missing another WIP PR :wink:).

My scale factors are already integers, and I get the warnings anyway.

But I guess the main question here is broader: isn’t there any way to disable TORCH_WARN messages via a runtime switch?

I do have the same question. Is there any option to control the verbosity of TORCH logs?
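The closest thing to a runtime switch I'm aware of is installing a custom warning handler from C++: everything emitted through TORCH_WARN is routed through a handler that can be replaced. A minimal sketch, assuming the c10::WarningHandler / c10::WarningUtils::set_warning_handler API declared in c10/util/Exception.h in recent libtorch releases (the handler interface and the process() signature have changed between versions, so check the header that ships with your build):

```cpp
#include <c10/util/Exception.h>

#include <torch/torch.h>

// Drops (or filters) messages emitted through TORCH_WARN.
// Assumes the handler interface from recent releases, where process()
// receives a c10::Warning; older releases passed a SourceLocation and a
// message string instead.
struct QuietWarningHandler : public c10::WarningHandler {
  void process(const c10::Warning& warning) override {
    // Swallow the warning entirely, or forward selected ones to your own
    // logger, e.g. by inspecting warning.msg() here.
  }
};

int main() {
  QuietWarningHandler handler;
  c10::WarningUtils::set_warning_handler(&handler);  // TORCH_WARN now goes through it

  // ... build and run the model as usual; libtorch warnings are suppressed ...
  return 0;
}
```

Whether that counts as a supported switch is another question, since it is an internal c10 API rather than a documented configuration knob.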
