YOLOX continuous issues without CUDA

I want to train a YOLOX model using the CPU only.

I keep getting errors like this:

lib/python3.14/site-packages/torch/cuda/__init__.py", line 417, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")

AssertionError: Torch not compiled with CUDA enabled

when running:

CUDA_VISIBLE_DEVICES="" python tools/train.py -f ../yolox_custom.py -b 

I want to train the model on the CPU only; is there a way I can do so?

You would need to make sure your PyTorch script is written in a device-agnostic way, as it currently seems to explicitly call into the torch.cuda namespace and tries to initialize an unavailable device.

Hi mate:

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.


import os
import torch

torch.cuda.is_available = lambda: False  # ← ADD THIS FIRST LINE!

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.50
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

        
        self.num_classes = 1                    # Your "mc" class
        self.data_dir = "../images"             # Path to your images folder
        self.train_ann = "train_coco.json"      # Your train JSON
        self.val_ann = "val_coco.json"          # Your val JSON  
        self.exp_name = 'mc_detector'           # Output folder name

        
        # Force CPU training (CPU-only PyTorch fix)
        self.no_aug_epochs = 0  
        self.fp16 = False
        self.amp_training = False

I have this code that I run, but I don't know what exactly I need to do.

Instead of trying to override methods, you could define the device, e.g.:

device = "cuda" if torch.cuda.is_available() else "cpu"

and later only use the device argument in your code, e.g. via:

x = torch.randn(size, device=device)
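Putting those two pieces together, a minimal self-contained sketch of the device-agnostic pattern might look like this (the model and tensor here are illustrative placeholders, not YOLOX code — in your own script you would apply `.to(device)` / `device=device` wherever tensors and modules are created):

```python
import torch

# Pick CUDA when it is available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Move the model and create inputs on the chosen device.
model = torch.nn.Linear(10, 2).to(device)
x = torch.randn(4, 10, device=device)

out = model(x)
print(out.shape)  # torch.Size([4, 2]) on either device
```

Because nothing in the script hard-codes `"cuda"`, the same code runs unchanged on a CPU-only PyTorch build.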

Where am I supposed to put this, and in which file?