CPU usage too high when running multiple inference processes with torch.multiprocessing

I have a problem using the torch.multiprocessing module.

In my code,

import cv2
import torch.multiprocessing as mp

class MultiProcessing:
    def run_inference(self):
        # Load the detection model
        # (the model is YOLOR, a detection model)
        model = MyModel()
        ...
        p1 = mp.Process(target=self.do_detection, args=(model,))
        p2 = mp.Process(target=self.do_detection, args=(model,))
        p1.start()
        p2.start()
        p1.join()
        p2.join()

    def do_detection(self, model):
        model = model.cuda()
        vid_cap = cv2.VideoCapture("<VideoFile>")
        while True:
            # Read a video frame; stop when the video ends,
            # otherwise read() keeps returning False in a busy loop
            ret, img = vid_cap.read()
            if not ret:
                break
            # Feed the frame to the detection model
            pred = model(img)
            # Visualize the detection result
            ...

if __name__ == '__main__':
    inference = MultiProcessing()
    inference.run_inference()
My CPU / GPU spec:
CPU: i7-11700K, GPU: RTX 3070

When I run this code with just two processes, CPU usage hits 100% while GPU usage stays below roughly 50%. Because of the CPU bottleneck, detection runs too slowly.

Why does this happen, and how can I solve it?
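One mitigation I plan to try (I'm not sure it is the right fix) is capping the thread pools each worker process creates: by default OpenMP/MKL/OpenBLAS may each start one thread per core in every process, which alone can saturate the CPU. A sketch of what I mean, run before the heavy libraries are imported in each process:

```python
import os

# Cap the math-library thread pools each process would otherwise create.
# These variables must be set before numpy/cv2/torch are imported
# in that process; torch.set_num_threads(1) is the runtime equivalent.
for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS", "OPENBLAS_NUM_THREADS"):
    os.environ[var] = "1"
```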