I encountered a version issue while running PyTorch. The code runs smoothly with PyTorch 1.6, but it doesn't work with PyTorch 1.13.

The traceback is below. Does anyone know how to resolve this issue without downgrading PyTorch? My GPU is an RTX 4060, and I've tried older PyTorch versions, but they don't work correctly.
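As a side note on why the downgrade path may be failing: the RTX 4060 is an Ada Lovelace card (compute capability 8.9), which CUDA builds older than 11.8 cannot target, so very old PyTorch wheels such as 1.6 cannot drive it. A quick diagnostic using only standard PyTorch attributes:

```python
# Diagnostic sketch: check whether the installed PyTorch build can see the GPU.
# Uses only standard PyTorch APIs; the compute-capability note above is the
# reason an old CUDA build would print False here on an RTX 4060.
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version the wheel was built against
print(torch.cuda.is_available())  # False if the build cannot use the GPU
if torch.cuda.is_available():
    # (major, minor) compute capability of device 0; 8.9 for an RTX 4060
    print(torch.cuda.get_device_capability(0))
```

If `is_available()` prints `False` under an older PyTorch but `True` under 1.13, the "lower versions don't work correctly" symptom is a CUDA/GPU mismatch rather than a code problem.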

Traceback (most recent call last):
  File "D:\桌面\SA-MASAC-master\train_masac.py", line 158, in <module>
    main(args)
  File "D:\桌面\SA-MASAC-master\train_masac.py", line 152, in main
    ddpg_continuous(config)
  File "D:\桌面\SA-MASAC-master\train_masac.py", line 115, in ddpg_continuous
    config.mini_batch_size, config.alpha)
  File "D:\桌面\SA-MASAC-master\robust_masac.py", line 576, in __init__
    self.actor = RobustActorNet(obs_dim, hid_shape, act_dim).to(config.DEVICE)
  File "D:\桌面\SA-MASAC-master\robust_masac.py", line 254, in __init__
    self.a_net = BoundedModule(self.a_net, (torch.empty(size=(1, state_dim)),), device=Config.DEVICE)
  File "D:\桌面\SA-MASAC-master\auto_LiRPA\bound_general.py", line 35, in __init__
    self._convert(model, global_input)
  File "D:\桌面\SA-MASAC-master\auto_LiRPA\bound_general.py", line 459, in _convert
    nodesOP, nodesIn, nodesOut, template = self._convert_nodes(model, global_input)
  File "D:\桌面\SA-MASAC-master\auto_LiRPA\bound_general.py", line 296, in _convert_nodes
    nodesOP, nodesIn, nodesOut, template = parse_module(model, global_input_cpu)
  File "D:\桌面\SA-MASAC-master\auto_LiRPA\parse_graph.py", line 170, in parse_module
    trace_graph = torch.onnx._optimize_trace(trace, torch.onnx.OperatorExportTypes.ONNX)
  File "D:\anaconda3\envs\PT\lib\site-packages\torch\onnx\__init__.py", line 394, in _optimize_trace
    return utils._optimize_graph(graph, operator_export_type)
  File "D:\anaconda3\envs\PT\lib\site-packages\torch\onnx\utils.py", line 278, in _optimize_graph
    graph, params_dict, symbolic_helper.is_caffe2_aten_fallback()
TypeError: _jit_pass_onnx_unpack_quantized_weights(): incompatible function arguments. The following argument types are supported:
    1. (arg0: torch::jit::Graph, arg1: Dict[str, IValue], arg2: bool) -> Dict[str, IValue]

TorchScript is in maintenance mode, so remove scripting or tracing from your model and use torch.compile instead with a recent PyTorch release.

Hello, I also encountered the same problem. Have you solved it?

TypeError: _jit_pass_onnx_unpack_quantized_weights(): incompatible function arguments. The following argument types are supported:
    1. (arg0: torch::jit::Graph, arg1: Dict[str, IValue], arg2: bool) -> Dict[str, IValue]