Problems encountered in cross compiling PyTorch

Hello,
I am trying to cross compile PyTorch with CUDA and cuDNN support, and I have run into a problem: even though I set CMAKE_CROSSCOMPILING=TRUE, the build still tries to execute some freshly built (aarch64) binaries on the build host during configuration, which fails. The build log is below. Can I get some information or help?

root@19447bd751c2:/pytorch# USE_CUDA=ON USE_CUDNN=1 USE_CUSPARSELT=OFF USE_EIGEN_FOR_BLAS=ON USE_GFLAGS=OFF USE_GLOG=OFF USE_GLOO=1 USE_MKL=OFF USE_MKLDNN=OFF USE_MPI=0 USE_NCCL=1 USE_NNPACK=1 USE_OPENMP=ON USE_ROCM=OFF USE_ROCM_KERNEL_ASSERT=OFF BLAS=Eigen TORCH_CUDA_ARCH_LIST="8.7" CC=aarch64-linux-gnu-gcc CXX=aarch64-linux-gnu-g++ CUDA_HOME=/cuda/jetson_lib/cuda-12.6 CUDA_NVCC_EXECUTABLE=/cudatool/bin/nvcc CUDAHOSTCXX=aarch64-linux-gnu-g++ CUDNN_LIB_DIR=/cuda/jetson_lib/cudnn/lib CUDNN_INCLUDE_DIR=/cuda/jetson_lib/cudnn/include CMAKE_CUDA_ARCHITECTURES=87 CMAKE_CUDA_COMPILER=/cudatool/bin/nvcc CMAKE_CROSSCOMPILING=TRUE python3 setup.py bdist_wheel   
fatal: detected dubious ownership in repository at '/pytorch'
To add an exception for this directory, call:

        git config --global --add safe.directory /pytorch
Building wheel torch-2.7.0a0+gitUnknown
-- Building version 2.7.0a0+gitUnknown
-- Checkout nccl release tag: v2.26.2-1
cmake -GNinja -DBLAS=Eigen -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_CROSSCOMPILING=TRUE -DCMAKE_CUDA_ARCHITECTURES=87 -DCMAKE_CUDA_COMPILER=/cudatool/bin/nvcc -DCMAKE_INSTALL_PREFIX=/pytorch/torch -DCMAKE_PREFIX_PATH=/usr/local/python3.12/lib/python3.12/site-packages -DCUDA_HOST_COMPILER=aarch64-linux-gnu-g++ -DCUDA_NVCC_EXECUTABLE=/cudatool/bin/nvcc -DCUDNN_INCLUDE_DIR=/cuda/jetson_lib/cudnn/include -DCUDNN_LIBRARY=/cuda/jetson_lib/cudnn/lib -DPython_EXECUTABLE=/usr/local/python3.12/bin/python3 -DTORCH_BUILD_VERSION=2.7.0a0+gitUnknown -DTORCH_CUDA_ARCH_LIST=8.7 -DUSE_CUDA=ON -DUSE_CUDNN=1 -DUSE_CUSPARSELT=OFF -DUSE_EIGEN_FOR_BLAS=ON -DUSE_GFLAGS=OFF -DUSE_GLOG=OFF -DUSE_GLOO=1 -DUSE_MKL=OFF -DUSE_MKLDNN=OFF -DUSE_MPI=0 -DUSE_NCCL=1 -DUSE_NNPACK=1 -DUSE_NUMPY=True -DUSE_OPENMP=ON -DUSE_ROCM=OFF -DUSE_ROCM_KERNEL_ASSERT=OFF /pytorch
-- /usr/bin/aarch64-linux-gnu-g++ /pytorch/torch/abi-check.cpp -o /pytorch/build/abi-check
/pytorch/build/abi-check: 10: Syntax error: EOF in backquote substitution
CMake Warning at cmake/CheckAbi.cmake:24 (message):
  Could not run ABI Check: 2
Call Stack (most recent call first):
  CMakeLists.txt:72 (include)


-- Determined _GLIBCXX_USE_CXX11_ABI=0
-- Could not find ccache. Consider installing ccache to speed up compilation.
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- SVE support detected.
-- Compiler supports SVE extension. Will build perfkernels.
CMake Warning at cmake/Modules/FindCUDAToolkit.cmake:957 (message):
  Could not find librt library, needed by CUDA::cudart_static
Call Stack (most recent call first):
  cmake/public/cuda.cmake:59 (find_package)
  cmake/Dependencies.cmake:44 (include)
  CMakeLists.txt:868 (include)


-- PyTorch: CUDA detected: 12.6
-- PyTorch: CUDA nvcc is: /cudatool/bin/nvcc
-- PyTorch: CUDA toolkit directory: /cuda/jetson_lib/cuda-12.6
-- PyTorch: Header version is: /pytorch/build/CMakeFiles/CMakeTmp/cmTC_02323: 1: ELF�@�e@8: not found
/pytorch/build/CMakeFiles/CMakeTmp/cmTC_02323: 2: �: not found
/pytorch/build/CMakeFiles/CMakeTmp/cmTC_02323: 11: Syntax error: Unterminated quoted string

CMake Error at cmake/public/cuda.cmake:118 (message):
  FindCUDA says CUDA version is (usually determined by nvcc), but the CUDA
  headers say the version is /pytorch/build/CMakeFiles/CMakeTmp/cmTC_02323:
  1: ELF�@�e@8: not found

  /pytorch/build/CMakeFiles/CMakeTmp/cmTC_02323: 2: �: not found

  /pytorch/build/CMakeFiles/CMakeTmp/cmTC_02323: 11: Syntax error:
  Unterminated quoted string

  .  This often occurs when you set both CUDA_HOME and CUDA_NVCC_EXECUTABLE
  to non-standard locations, without also setting PATH to point to the
  correct nvcc.  Perhaps, try re-running this command again with
  PATH=/cuda/jetson_lib/cuda-12.6/bin:$PATH.  See above log messages for more
  diagnostics, and see https://github.com/pytorch/pytorch/issues/8092 for
  more details.
Call Stack (most recent call first):
  cmake/Dependencies.cmake:44 (include)
  CMakeLists.txt:868 (include)


-- Configuring incomplete, errors occurred!

The problem was solved by setting CMAKE_SYSTEM_NAME=Linux. Simply setting CMAKE_CROSSCOMPILING=TRUE without also setting CMAKE_SYSTEM_NAME does not take effect: CMake only treats a build as a cross build when CMAKE_SYSTEM_NAME is set explicitly (CMAKE_CROSSCOMPILING is normally derived from that), which is why the configure step kept trying to run the aarch64 ABI-check and CUDA version-check binaries on the x86_64 host.
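For reference, a minimal sketch of the corrected invocation: it is the same command as in the log above with CMAKE_SYSTEM_NAME added (the other USE_*/CUDA* variables are elided here for brevity and should be kept as in the original command). CMAKE_SYSTEM_PROCESSOR=aarch64 is my own assumption — CMake's cross-compiling documentation recommends setting it alongside CMAKE_SYSTEM_NAME, but it was not in my original command. All paths are from my environment and will differ on other setups:

```shell
# The key change: tell CMake explicitly that the target system is Linux,
# so it enters cross-compiling mode and stops running target binaries on the host.
# PyTorch's setup.py forwards CMAKE_*-prefixed environment variables to cmake
# as -D options (visible in the cmake command line printed in the log above).
CMAKE_SYSTEM_NAME=Linux \
CMAKE_SYSTEM_PROCESSOR=aarch64 \
CMAKE_CROSSCOMPILING=TRUE \
CC=aarch64-linux-gnu-gcc CXX=aarch64-linux-gnu-g++ \
CUDAHOSTCXX=aarch64-linux-gnu-g++ \
CUDA_HOME=/cuda/jetson_lib/cuda-12.6 \
CUDA_NVCC_EXECUTABLE=/cudatool/bin/nvcc \
CMAKE_CUDA_COMPILER=/cudatool/bin/nvcc \
CMAKE_CUDA_ARCHITECTURES=87 TORCH_CUDA_ARCH_LIST="8.7" \
python3 setup.py bdist_wheel
```

The canonical CMake approach would be to put CMAKE_SYSTEM_NAME, CMAKE_SYSTEM_PROCESSOR, and the compiler paths into a toolchain file passed via CMAKE_TOOLCHAIN_FILE, but for PyTorch's setup.py-driven build, setting them as environment variables as above was sufficient in my case.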