Exporting a YOLOv10 .pt file to a .onnx file

When I was trying to export the YOLOv10 model to ONNX format, I encountered an error. Here are the detailed steps and error information:

Environment Information

I’m using Ultralytics YOLOv8.1.34, with Python 3.8.5 and PyTorch 1.10.0, running on an Intel Core™ i5-9300H @ 2.40GHz CPU.

Operation Steps and Error Messages

First, I executed the following command to start the export process:

```bash
yolo mode=export model=yolov10-main/yolov10n.pt format=onnx simplify=True opset=12
```

At first, the command printed some basic information about the YOLOv10n model (285 layers, 2,762,608 parameters, etc.), but when it reached the ONNX export step, the following error occurred:

```plaintext
ONNX: starting export with onnx 1.17.0 opset 12...
ONNX: export failure ❌ 2.6s: Exporting the operator amax to ONNX opset version 12 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
Traceback (most recent call last):
  File "D:\Anaconda3\envs\yolov10\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "D:\Anaconda3\envs\yolov10\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\Anaconda3\envs\yolov10\Scripts\yolo.exe\__main__.py", line 7, in <module>
  File "D:\yolo\yolov10-main\ultralytics\cfg\__init__.py", line 587, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "D:\yolo\yolov10-main\ultralytics\engine\model.py", line 590, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "D:\yolo\yolov10-main\ultralytics\engine\exporter.py", line 290, in __call__
    f[2], _ = self.export_onnx()
  File "D:\yolo\yolov10-main\ultralytics\engine\exporter.py", line 138, in outer_func
    raise e
  File "D:\yolo\yolov10-main\ultralytics\engine\exporter.py", line 133, in outer_func
    f, model = inner_func(*args, **kwargs)
  File "D:\yolo\yolov10-main\ultralytics\engine\exporter.py", line 378, in export_onnx
    torch.onnx.export(
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\onnx\__init__.py", line 316, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\onnx\utils.py", line 107, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\onnx\utils.py", line 724, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\onnx\utils.py", line 497, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type,
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\onnx\utils.py", line 216, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\onnx\__init__.py", line 373, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\onnx\utils.py", line 1028, in _run_symbolic_function
    symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\onnx\utils.py", line 982, in _find_symbolic_in_registry
    return sym_registry.get_registered_op(op_name, domain, opset_version)
  File "D:\Anaconda3\envs\yolov10\lib\site-packages\torch\onnx\symbolic_registry.py", line 125, in get_registered_op
    raise RuntimeError(msg)
RuntimeError: Exporting the operator amax to ONNX opset version 12 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
```

In summary, when I try to export the YOLOv10 model to ONNX with opset version 12, the exporter reports that the ‘amax’ operator is not supported for ONNX opset version 12, and it suggests requesting support or submitting a pull request on the PyTorch GitHub.
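For reference, here is a minimal repro sketch of what I believe is happening (this is not the actual YOLOv10 export code; the module below is just a hypothetical stand-in that calls torch.amax, the operator named in the error):

```python
import torch

# Hypothetical stand-in module: it exists only to show that exporting
# torch.amax with opset_version=12 fails on this PyTorch version.
class AmaxModule(torch.nn.Module):
    def forward(self, x):
        return x.amax(dim=1, keepdim=True)  # the operator named in the error

model = AmaxModule().eval()
dummy = torch.randn(1, 3, 640, 640)

# On PyTorch 1.10.0 this raises the same error as above:
# RuntimeError: Exporting the operator amax to ONNX opset version 12 is not supported.
torch.onnx.export(model, dummy, "amax_test.onnx", opset_version=12)
```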

I don’t know which PyTorch version you are using, but opset version 12 sounds quite old. E.g., based on this PR, opset 21 was implemented, so you might need to update your stack.
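If it helps, one way to retry after updating is via the Ultralytics Python API instead of the CLI. A minimal sketch, assuming the model path from your command and a newer PyTorch whose exporter can map amax (the exact opset you can use depends on the installed versions):

```python
from ultralytics import YOLO

# Assumes the model path from the original command and an upgraded
# PyTorch/onnx install whose exporter supports the amax operator.
model = YOLO("yolov10-main/yolov10n.pt")

# opset=13 is only an example of going above 12; you can also omit it
# and let the exporter pick a supported default.
model.export(format="onnx", simplify=True, opset=13)
```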