How to suppress the Constant folding warning?

During training, my console is swamped by the message "Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied." I tried a few variants of

import warnings

warnings.filterwarnings(
    "once",
    message="Constant folding not applied",
)

in both util.py and train.py, to no effect.
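As an aside, a minimal experiment (no PyTorch required) suggests why filters like this can fail here: warnings filters only apply to messages raised through Python's warnings machinery, and the message pattern is a regex matched against the start of the warning text. Anything written straight to the stderr stream, as compiled extension code typically does, bypasses the filters entirely:

```python
import io
import sys
import warnings

# Filters apply to warnings raised through Python's warnings machinery;
# `message` is a regex matched against the START of the warning text.
with warnings.catch_warnings(record=True) as caught:
    warnings.filterwarnings("ignore", message="Constant folding")
    warnings.warn("Constant folding - Constant folding not applied.")
print("warnings recorded:", len(caught))  # 0 - the filter matched

# Text written directly to the stderr stream bypasses the warnings
# machinery, so no warnings filter can suppress it.
buf = io.StringIO()
sys.stderr, saved = buf, sys.stderr
try:
    warnings.filterwarnings("ignore", message="Constant folding")
    print("Warning: Constant folding - ...", file=sys.stderr)
finally:
    sys.stderr = saved
print("stderr still received:", buf.getvalue().strip())
```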

Digging through the code, I have traced many of those warnings to torch._C._jit_pass_onnx_graph_shape_type_inference(graph, params_dict, _export_onnx_opset_version) on line 252 of C:\Users\Dzenan\miniconda3\envs\deep_learning\Lib\site-packages\torch\onnx\utils.py. My PyTorch version is 1.11.0. I assume that torch._C calls C or C++ code (the Python debugger does not step into it). Perhaps originating in C/C++ code makes those warnings behave differently, so they don’t get the same treatment as other Python warnings?

I invoke it via:

torch.onnx.export(
    model,
    dummy_input,
    model_path,
    export_params=True,
    opset_version=11,
    do_constant_folding=False,
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}},
)

Stack trace and locals at line 252:


Does anyone have a suggestion?

There is also an open issue on GitHub. I tried suggestions from Stack Overflow, to no effect.

I see that some people looked at this - thank you. Are there really no suggestions?

As far as I can tell, this problem is PyTorch-specific. Is this forum not the best place to look for an answer?

While the issue seems to be raised by PyTorch, I believe the ONNX code owners might not be checking this discussion board often.
I don’t know why the warning is still raised even after you’ve used do_constant_folding=False, so feel free to comment on the open GitHub issue so that the code owners can see it.
A brute-force approach would be to rip out the warning from here and rebuild a custom PyTorch version, in case you cannot filter it out.


You can suppress these warnings the hard way (though not as hard as ripping code out of PyTorch):

import sys
import os

class SuppressStream:
    """Temporarily redirect the stderr file descriptor to os.devnull.

    Works at the OS level, so it also silences output written by C/C++ code.
    """

    def __init__(self):
        self.orig_stream_fileno = sys.stderr.fileno()

    def __enter__(self):
        # Keep a duplicate of the original stderr fd so it can be restored.
        self.orig_stream_dup = os.dup(self.orig_stream_fileno)
        self.devnull = open(os.devnull, 'w')
        # Point the stderr fd at devnull.
        os.dup2(self.devnull.fileno(), self.orig_stream_fileno)

    def __exit__(self, exc_type, exc_value, traceback):
        # Restore the original stderr fd (dup2 atomically closes the
        # devnull copy currently occupying it).
        os.dup2(self.orig_stream_dup, self.orig_stream_fileno)
        os.close(self.orig_stream_dup)
        self.devnull.close()

with SuppressStream():
    torch.onnx.export(model, ...)

Thanks to: In python, how to capture the stdout from a c++ shared library to a variable - Stack Overflow
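For anyone who wants to verify that this fd-level redirect really swallows writes that bypass Python's sys.stderr object (as compiled code's output does), here is a trimmed, self-contained sketch. The pipe plumbing exists only so we can observe what reaches file descriptor 2; the hardcoded STDERR_FILENO = 2 stands in for sys.stderr.fileno() from the original class:

```python
import os

STDERR_FILENO = 2  # the OS-level stderr file descriptor


class SuppressStream:
    """Redirect the stderr fd to devnull for the duration of the block."""

    def __enter__(self):
        self.orig_dup = os.dup(STDERR_FILENO)   # remember original target
        self.devnull = open(os.devnull, "w")
        os.dup2(self.devnull.fileno(), STDERR_FILENO)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        os.dup2(self.orig_dup, STDERR_FILENO)   # restore original target
        os.close(self.orig_dup)
        self.devnull.close()


# Route fd 2 into a pipe so we can observe what (if anything) gets through.
r, w = os.pipe()
saved = os.dup(STDERR_FILENO)
os.dup2(w, STDERR_FILENO)

with SuppressStream():
    os.write(STDERR_FILENO, b"suppressed\n")  # bypasses sys.stderr, like C++ output

os.write(STDERR_FILENO, b"visible\n")         # after the block, stderr is back

os.dup2(saved, STDERR_FILENO)                 # undo the pipe plumbing
os.close(saved)
os.close(w)
received = os.read(r, 1024)
os.close(r)
print(received)  # only b'visible\n' made it through
```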