<JIT Trace> Redundant prim::ListUnpack and prim::ListConstruct

I have a custom C++ operator batch_score_nms that returns a vector<at::Tensor> of size (Batchsize * 3). When I trace the model, I get the output below:

  %470 : Tensor[] = torch_ipex::batch_score_nms(%bboxes, %probs, %3, %2) # /home/lesliefang/pytorch_1_7_1/ssd-rn34/frameworks.ai.pytorch.cpu-models/inference/others/cloud/single_stage_detector/pytorch/utils.py:169:0
  %471 : Tensor, %472 : Tensor, %473 : Tensor = prim::ListUnpack(%470)
  %477 : Tensor[] = prim::ListConstruct(%471, %472, %473)
  return (%477)

I am a little confused: why does the tracer need to ListUnpack the tensor list and then pack it again for the output? It seems redundant.
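For reference, the same unpack/repack pattern can be reproduced without a custom op. This is a minimal sketch (not the original batch_score_nms model): torch.split is traced as an op returning Tensor[], the Python-level tuple unpacking is recorded as prim::ListUnpack, and returning the tensors as a list emits prim::ListConstruct:

```python
import torch

def f(x):
    # aten::split returns a Tensor[] in the traced graph; unpacking it
    # in Python is recorded as prim::ListUnpack ...
    a, b, c = torch.split(x, 2)
    # ... and returning the pieces as a list is recorded as
    # prim::ListConstruct, mirroring the graph in the question.
    return [a, b, c]

traced = torch.jit.trace(f, torch.randn(6, 4))
print(traced.graph)
```

Passes such as torch._C._jit_pass_dce or custom graph rewrites can often fold an unpack immediately followed by a matching repack, which is presumably what one would hope the tracer did here.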