It seems that you are trying to build libtorch using CMake; the error means that a .cpp file needed by that function cannot be found. One way to work around this is to download the prebuilt binary: go to https://pytorch.org/ and pick the C++ language option.
Could you use an online translator to translate your post into English, so that other users can help you?
From Google Translate:
Hello,
From /usr/lib/aarch64-linux-gnu/, plus CUDA and cuDNN, I can see that you are not compiling on a standard x86_64 machine; you are most likely compiling from source on an NVIDIA Jetson TX2. (It was definitely not done with hardware virtualization such as KVM, since hardware virtualization cannot expose a specific graphics card.)
The reason given by the person above is correct: there is indeed a .cpp file that cannot be found. But the solution is definitely not to download the official libtorch (the official libtorch is compiled for the x86_64 architecture…).
So… if you are not in a hurry, a very simple workaround is to remove cuDNN and compile with CUDA only. (If I remember correctly, cuDNN is not required; it is optional.)
Once the CUDA-only build works (so you at least have CUDA as a basis), it is not too late to look into the rest afterwards.
emmm… sorry about that, I just wanted to reply to the answer… (I noticed that the question by @smartadpole is written in Chinese. I will use English next time.)
Here is my reply to @smartadpole and @Lin_Jia, translated by myself into English.
Hello @smartadpole,
From /usr/lib/aarch64-linux-gnu and CUDA/cuDNN, I can tell that you are not compiling from source on an x86_64 machine; you are probably compiling on an NVIDIA Jetson TX2. (Probably not using KVM to virtualize the hardware, since NVIDIA graphics cards cannot be virtualized that way.)
Regarding the reply from @Lin_Jia:
The reasoning is right: a .cpp file cannot be found. BUT, the solution is not to download the libtorch officially provided at pytorch.org, since it is also precompiled for x86_64 operating systems and does not fit ARM-based devices.
So… one solution is to compile again with cuDNN excluded (using just CUDA is fine). At a minimum, PyTorch with CUDA can still be GPU-accelerated.
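As a sketch of that CUDA-only rebuild: PyTorch's `setup.py` reads environment variables such as `USE_CUDA` and `USE_CUDNN` to toggle backends. The snippet below shows how one might set them before building from a source checkout; the `MAX_JOBS` value is just an example for a memory-constrained Jetson board, and the actual build command is left as a comment since it takes hours.

```shell
# Sketch: build PyTorch from source with cuDNN disabled, CUDA kept.
# USE_CUDA / USE_CUDNN are environment variables honored by PyTorch's setup.py.
export USE_CUDA=1    # keep CUDA acceleration
export USE_CUDNN=0   # skip cuDNN, which is optional
export MAX_JOBS=4    # example: limit parallelism on a memory-constrained board

# Then, from a recursive clone of the pytorch source tree, one would run:
#   python3 setup.py install

echo "USE_CUDA=$USE_CUDA USE_CUDNN=$USE_CUDNN"
# prints: USE_CUDA=1 USE_CUDNN=0
```

This avoids the x86_64 mismatch entirely, because everything is compiled natively on the aarch64 device.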