ModuleNotFoundError: No module named 'torch._prims_common'

Hello. I am new to PyTorch. I am setting up YOLO-NAS for DeepStream, following the YOLO-NAS instructions in marcoslucianops' DeepStream-Yolo repo.
While generating the ONNX model (python3 export_yolonas.py -m yolo_nas_s -w yolo_nas_s_coco.pth --dynamic), I get the following error.

$ python3 export_yolonas.py -m yolo_nas_s -w yolo_nas_s_coco.pth --dynamic
Traceback (most recent call last):
  File "export_yolonas.py", line 7, in <module>
    import torch.nn as nn
  File "/opt/nvidia/deepstream/deepstream-6.2/.venv/lib/python3.8/site-packages/torch/nn/__init__.py", line 1, in <module>
    from .modules import *  # noqa: F403
  File "/opt/nvidia/deepstream/deepstream-6.2/.venv/lib/python3.8/site-packages/torch/nn/modules/__init__.py", line 1, in <module>
    from .module import Module
  File "/opt/nvidia/deepstream/deepstream-6.2/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 8, in <module>
    from torch._prims_common import DeviceLikeType
ModuleNotFoundError: No module named 'torch._prims_common'
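
From the traceback, the failure is triggered by the plain import torch.nn at line 7 of the script, before any of the export logic runs, so it should be reproducible with a bare import inside the same venv (a minimal check, independent of the DeepStream-Yolo script):

python3 -c "import torch.nn"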

The details of the installed torch package are as follows:
Name: torch
Version: 2.3.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /opt/nvidia/deepstream/deepstream-6.2/.venv/lib/python3.8/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-nccl-cu12, nvidia-nvtx-cu12, sympy, triton, typing-extensions
Required-by: data-gradients, imagededup, super-gradients, torchaudio, torchmetrics, torchvision

I cannot reproduce the issue using torch==2.3.1:

python -c "import torch; print(torch.__version__); from torch._prims_common import DeviceLikeType; print(DeviceLikeType)"
2.3.1+cu121
typing.Union[str, torch.device, int]
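
Could you check that the failing command runs the same interpreter and the same torch installation that pip show reports? A quick check (a sketch; run it with the exact python3 that produces the error) would be:

python3 -c "import sys, importlib.util, pathlib; print(sys.executable); spec = importlib.util.find_spec('torch'); print(spec.origin); print((pathlib.Path(spec.origin).parent / '_prims_common' / '__init__.py').exists())"

This prints the interpreter path, the torch/__init__.py being resolved, and whether the _prims_common package actually exists on disk in that installation. If the last line is False, the torch tree in that venv is incomplete, and a clean reinstall of torch==2.3.1 inside the venv might fix it.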

I am still getting the same error.