Hi!
I am trying to use YOLOv8 on my Jetson Orin. When I run my Python export script, the following error occurs:
WARNING TensorRT requires GPU export, automatically assigning device=0
Ultralytics YOLOv8.0.199 Python-3.8.10 torch-2.1.0
Traceback (most recent call last):
File "export.py", line 29, in <module>
main()
File "export.py", line 25, in main
model.export(format = 'engine')
File "/media/jetson_orin/orin_ssd/yolov8env/lib/python3.8/site-packages/ultralytics/engine/model.py", line 313, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "/media/jetson_orin/orin_ssd/yolov8env/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/media/jetson_orin/orin_ssd/yolov8env/lib/python3.8/site-packages/ultralytics/engine/exporter.py", line 172, in __call__
self.device = select_device('cpu' if self.args.device is None else self.args.device)
File "/media/jetson_orin/orin_ssd/yolov8env/lib/python3.8/site-packages/ultralytics/utils/torch_utils.py", line 119, in select_device
raise ValueError(f"Invalid CUDA 'device={device}' requested."
ValueError: Invalid CUDA 'device=0' requested. Use 'device=cpu' or pass valid CUDA device(s) if available, i.e. 'device=0' or 'device=0,1,2,3' for Multi-GPU.
torch.cuda.is_available(): False
torch.cuda.device_count(): 0
os.environ['CUDA_VISIBLE_DEVICES']: *
See https://pytorch.org/get-started/locally/ for up-to-date torch install instructions if no CUDA devices are seen by torch.
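For reference, export.py is essentially the minimal script below, reconstructed from the traceback. The weights file "yolov8n.pt" is just a placeholder; my actual model path differs:

```python
# export.py - minimal sketch of what I am running
# NOTE: "yolov8n.pt" is a placeholder weights file, not my actual model
from ultralytics import YOLO

def main():
    model = YOLO("yolov8n.pt")     # load a YOLOv8 PyTorch model
    model.export(format="engine")  # export to TensorRT; this is where the error is raised

if __name__ == "__main__":
    main()
```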
I have tried all the suggestions I found on this forum, but nothing has changed.
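For completeness, this is the quick check I run inside the yolov8env virtualenv; the output matches the values printed in the error above:

```python
# Quick CUDA visibility check inside the yolov8env virtualenv
import torch

print(torch.__version__)          # 2.1.0
print(torch.cuda.is_available())  # False
print(torch.cuda.device_count())  # 0
```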
Thanks in advance for your help!