Load a trained YOLOv5 model outside of detect.py

I’m developing a Flask web application that uses a trained YOLOv5 model. As described in the docs, it works fine when run from the command line. What I tried was to apply an OOP approach and create a model object once, so it can be reused for every single frame.

import os
import shutil

import torch

from utils.torch_utils import select_device  # helper from the yolov5 repo


class Model(object):

    def __init__(self, weights, save_img=False):
        self.view_img = True
        self.save_txt = False
        self.imgsz = 640
        self.device = select_device()
        print(self.device)  # confirm which device was selected

        self.output = "output"
        if os.path.exists(self.output):
            shutil.rmtree(self.output)  # delete output folder
        os.makedirs(self.output)  # make new output folder

        self.half = self.device.type != 'cpu'  # half precision only supported on CUDA

        # Load model
        self.model = torch.load(weights, map_location=self.device)['model'].float()  # load to FP32
        self.model.to(self.device).eval()
        if self.half:
            self.model.half()  # to FP16

Inside camera.py, I tried to create the model using the line below:
model = Model('weights/best.pt')

but now I’m getting the error below in the stack trace:

 File "C:\Users\D.ShaN\Documents\projects\python\fyp\helmet_covers_yolo\flask_app\app.py", line 4, in <module>
    from camera import Camera
  File "C:\Users\D.ShaN\Documents\projects\python\fyp\helmet_covers_yolo\flask_app\camera.py", line 8, in <module>
    from detect_image import detect_image
  File "C:\Users\D.ShaN\Documents\projects\python\fyp\helmet_covers_yolo\flask_app\detect_image.py", line 12, in <module>
    model = Model("weights/best.pt")
  File "C:\Users\D.ShaN\Documents\projects\python\fyp\helmet_covers_yolo\flask_app\model.py", line 27, in __init__
    self.model = torch.load(weights, map_location=self.device)['model'].float()  # load to FP32
  File "C:\Users\D.ShaN\AppData\Local\conda\conda\envs\fyp\lib\site-packages\torch\serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\D.ShaN\AppData\Local\conda\conda\envs\fyp\lib\site-packages\torch\serialization.py", line 853, in _load
    result = unpickler.load()
ModuleNotFoundError: No module named 'models'

As described in the YOLOv5 docs, I executed the line below in the same manner, with the same weights file in both cases (command line and model = Model("weights/best.pt")):

self.model = torch.load(weights, map_location=self.device)['model'].float()
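My guess is that detect.py works because it runs from inside the yolov5 repository, where the pickled references to the models package can be resolved, while my flask_app folder lives outside it. A rough sketch of the workaround I’m considering is to put the repo on sys.path before loading; the path below is only a placeholder for my local checkout:

import sys
from pathlib import Path

import torch

# Placeholder: wherever the yolov5 repository is checked out locally
YOLOV5_ROOT = Path(r"C:\path\to\yolov5")
sys.path.insert(0, str(YOLOV5_ROOT))  # make the 'models' and 'utils' packages importable

device = torch.device('cpu')
model = torch.load("weights/best.pt", map_location=device)['model'].float()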

Any suggestions or solutions?

Thank you.

It seems you are trying to load the model directly, which is not the recommended way, since it may break in various ways if you don’t keep the file structure equal (the unpickling step has to import the same models package that was available when the checkpoint was saved, which is why you see the ModuleNotFoundError).
The recommended way would be to save and load the state_dict as described here, which would avoid these import errors.
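A minimal sketch of that pattern, using generic names (the exact YOLOv5 model class and its constructor arguments depend on the repo version, so ModelClass(...) below is a placeholder for rebuilding the same architecture):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# After training: save only the parameters, not the pickled module objects
torch.save(model.state_dict(), 'weights/best_state_dict.pt')

# At inference time: rebuild the architecture explicitly, then load the weights
model = ModelClass(...)  # placeholder: construct the same architecture via normal imports
state_dict = torch.load('weights/best_state_dict.pt', map_location=device)
model.load_state_dict(state_dict)
model.to(device).eval()

Because the model class is imported explicitly rather than resolved by the unpickler, loading no longer depends on the yolov5 directory layout.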
