See GitHub issue #166582 on the main PyTorch repo.
TL;DR: I have an int8 matmul on a custom ASIC (in simulation), and I am compiling PyTorch models with a custom backend that replaces all of their linear layers with it. The model is trained with QAT from TorchAO. Running the compiled model fails with the trace below (a rough sketch of my setup follows after the trace):
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Backend compiler exception
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Explanation: Backend compiler _backend failed with aten._local_scalar_dense.default
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1]
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] While executing %dut_matmul_sync : [num_users=1] = call_function[target=torch_backend.dut_matmul_sync](args = (tb, %dequantize_per_tensor_default_2, %dequantize_per_tensor_default, %l_self_modules_fc1_parameters_bias), kwargs = {})
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Original traceback:
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] None
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Use tlparse to see full graph. Adding a graph break.
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Hint: Report an issue to the backend compiler repo.
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1]
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Developer debug context: Backend: _backend
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Exception: aten._local_scalar_dense.default
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1]
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] While executing %dut_matmul_sync : [num_users=1] = call_function[target=torch_backend.dut_matmul_sync](args = (tb, %dequantize_per_tensor_default_2, %dequantize_per_tensor_default, %l_self_modules_fc1_parameters_bias), kwargs = {})
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Original traceback:
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] None
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Use tlparse to see full graph. (https://github.com/pytorch/tlparse?tab=readme-ov-file#tlparse-parse-structured-pt2-logs)
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] Traceback:
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] File "<eval_with_key>.464", line 23, in forward
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] return pytree.tree_unflatten((dequantize_per_tensor_default_4,), self._out_spec)
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1]
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1]
W1029 18:26:40.182000 156731 torch/_dynamo/exc.py:593] [24/0_1] For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0219.html
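For context, here is roughly what the compile path looks like. This is a minimal, runnable sketch, not my actual code: the real `_backend` rewrites linear/matmul nodes into `torch_backend.dut_matmul_sync` calls driving the simulated ASIC instead of returning the graph unchanged, and in the failing case the model has additionally been prepared/converted with TorchAO QAT before compilation, which is what inserts the quantize/dequantize ops visible in the trace.

```python
import torch
import torch.nn as nn

# Simplified stand-in for my backend; the real one rewrites linear/matmul
# nodes into torch_backend.dut_matmul_sync calls (simulated ASIC matmul).
def _backend(gm: torch.fx.GraphModule, example_inputs):
    gm.graph.print_tabular()  # inspect the graph Dynamo hands the backend
    return gm.forward         # run the graph unmodified

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
# In the failing case, `model` has also been through TorchAO QAT (omitted).
compiled = torch.compile(model, backend=_backend)
out = compiled(torch.randn(2, 16))
```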
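If I understand correctly, `aten._local_scalar_dense.default` is the ATen op behind `Tensor.item()`, so my guess is that the dequantize_per_tensor ops read their scale/zero_point from 0-dim tensors via `.item()`. A minimal sketch that reproduces the op independently of my backend (using torch.export just to show the captured graph; the module and names are made up for illustration):

```python
import torch

class ScaleMul(torch.nn.Module):
    def forward(self, x, scale):
        # Reading a 0-dim tensor as a Python number is captured as
        # aten._local_scalar_dense.default in the traced graph.
        return x * scale.item()

ep = torch.export.export(ScaleMul(), (torch.randn(4), torch.tensor(2)))
print(ep.graph)  # contains a _local_scalar_dense node
```

So the question is: what is the recommended way for a custom backend to handle (or avoid) `aten._local_scalar_dense.default` in graphs produced by TorchAO QAT models?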