Sequence inference

Hello guys, I’ve trained two YOLOv7 models for different purposes, and I want to chain them into a single combined pipeline (they can only run in sequence, since the input of the second model is a crop taken from the input of the first one). So my question is: can I load both models on the same GPU and call one or the other depending on which image I have (the original input, or the cropped image after the first detection)?
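
For context, two independent PyTorch modules can sit on the same GPU at the same time and be called selectively; nothing about this is YOLOv7-specific. A minimal sketch with stand-in modules (not the actual YOLOv7 loaders), just to illustrate the dispatch pattern:

    import torch
    import torch.nn as nn

    # Both detectors (stand-in Conv2d modules here) live on the same device at once.
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model_full = nn.Conv2d(3, 16, 3).to(device).eval()  # runs on the original frame
    model_crop = nn.Conv2d(3, 16, 3).to(device).eval()  # runs on the cropped region

    @torch.no_grad()
    def run(img, cropped=False):
        # Dispatch to one model or the other depending on which image we have.
        model = model_crop if cropped else model_full
        return model(img.to(device))

    out_full = run(torch.randn(1, 3, 64, 64))        # original input -> first detector
    out_crop = run(torch.randn(1, 3, 64, 64), True)  # cropped image -> second detector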

Thank you, mister Wolf, I’ll try the approach today! I’m using the yolov7 repository from WongKinYiu (GitHub - WongKinYiu/yolov7: Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors), and in that code they trace the model, so after tracing both of them they will both be allocated on the same GPU, right?

    # Imports from the yolov7 repo (models/ and utils/ packages)
    from models.experimental import attempt_load
    from utils.general import set_logging, check_img_size
    from utils.torch_utils import select_device, TracedModel

    # Initialize
    set_logging()
    device = select_device(opt.device)
    half = device.type != 'cpu'  # half precision only supported on CUDA

    # Load model
    model = attempt_load(weights, map_location=device)  # load FP32 model
    stride = int(model.stride.max())  # model stride
    imgsz = check_img_size(imgsz, s=stride)  # check img_size

    if trace:
        model = TracedModel(model, device, opt.img_size)  # trace with torch.jit.trace for faster inference

    if half:
        model.half()  # to FP16

So I just do it for both models and let it shine? :smiley:
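
Roughly, yes. A sketch of what repeating that initialization for two checkpoints could look like, assuming it runs from inside the yolov7 repo (so `attempt_load`, `select_device` and `TracedModel` import as in detect.py), with hypothetical weight file names `stage1.pt` / `stage2.pt`:

    from models.experimental import attempt_load
    from utils.torch_utils import select_device, TracedModel

    device = select_device('0')
    half = device.type != 'cpu'  # half precision only supported on CUDA

    def load_yolov7(weights, img_size=640, trace=True):
        # Same initialization as detect.py, wrapped so it can be reused per checkpoint.
        model = attempt_load(weights, map_location=device)  # load FP32 model
        if trace:
            model = TracedModel(model, device, img_size)
        if half:
            model.half()  # to FP16
        return model

    model_a = load_yolov7('stage1.pt')  # detector for the original frame (hypothetical path)
    model_b = load_yolov7('stage2.pt')  # detector for the cropped region (hypothetical path)

    # At inference time, call them in sequence on preprocessed tensors, e.g.:
    #   pred_a = model_a(img_full)[0]   # detect on the full frame
    #   ...crop and re-letterbox the region around the box from pred_a...
    #   pred_b = model_b(img_crop)[0]   # detect again on the crop

Both models simply stay resident on the same device; which one you call per frame is up to your own control flow.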