I have seen several examples that use PYTORCH_FUSION_DEBUG=1 to retrieve the source of the fused kernels (for example here: [JIT] Fusion of Dropout without constant is_training parameter is unsuccessful · Issue #24032 · pytorch/pytorch · GitHub). I assumed this dumps the kernel source to stdout, but when I run with this variable set I see nothing.
Could anyone advise on how to get this working? Perhaps I need to compile PyTorch with specific flags set?
tom (Thomas V) · September 17, 2021, 6:01pm
PyTorch has three fusers (the legacy fuser, the NNC/TensorExpr fuser, and the CUDA nvFuser); PYTORCH_FUSION_DEBUG only worked for the old (now legacy) fuser.
For the newer fusers, you can likely get the information you want from the JIT logging facility described here:
// `TorchScript` offers a simple logging facility that can enabled by setting an
// environment variable `PYTORCH_JIT_LOG_LEVEL`.
// Logging is enabled on a per file basis. To enable logging in
// `dead_code_elimination.cpp`, `PYTORCH_JIT_LOG_LEVEL` should be
// set to `dead_code_elimination.cpp` or, simply, to `dead_code_elimination`
// (i.e. `PYTORCH_JIT_LOG_LEVEL=dead_code_elimination`).
// Multiple files can be logged by separating each file name with a colon `:` as
// in the following example,
// `PYTORCH_JIT_LOG_LEVEL=dead_code_elimination:guard_elimination`
// There are 3 logging levels available for your use ordered by the detail level
// from lowest to highest.
// * `GRAPH_DUMP` should be used for printing entire graphs after optimization
// passes
// * `GRAPH_UPDATE` should be used for reporting graph transformations (i.e.
// node deletion, constant folding, etc)
// * `GRAPH_DEBUG` should be used for providing information useful for debugging
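As a concrete sketch of using that facility: the variable has to be set before `torch` is imported, since the logging configuration is read when the extension library loads. The pass file names below (`tensorexpr_fuser`, `graph_fuser`) are my guesses at the relevant ones for the NNC fuser; check the file names under `torch/csrc/jit/passes` for your PyTorch version.

```python
import os

# Must be set before `import torch` -- the JIT logging config is read
# when the native extension loads. Multiple pass files are separated by
# a colon, as described in the quoted header. A `>` or `>>` prefix on a
# file name raises its level from GRAPH_DUMP to GRAPH_UPDATE or
# GRAPH_DEBUG respectively.
os.environ["PYTORCH_JIT_LOG_LEVEL"] = ">>tensorexpr_fuser:graph_fuser"

print(os.environ["PYTORCH_JIT_LOG_LEVEL"])
```

After this, running a scripted model a few times (so the profiling executor specializes and fuses the graph) should print the dumped graphs for those passes to stderr.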
Best regards
Thomas
Great, thanks for the quick reply!