Error: nvcc fatal : Unknown option '-generate-dependencies-with-compile'; ninja: build stopped: subcommand failed

When I follow the README.md and try `python setup.py build develop`, I get errors that I have not been able to overcome for a few weeks. The main error is the one in the issue title. Note that all of these attempts are on Windows, not Linux.
The full log is shown below:

(clrnet) D:\Li_Yong_Hang\CLRNet-main\CLRNet-main>python setup.py build develop
running build
running build_py
running egg_info
writing clrnet.egg-info\PKG-INFO
writing dependency_links to clrnet.egg-info\dependency_links.txt
writing requirements to clrnet.egg-info\requires.txt
writing top-level names to clrnet.egg-info\top_level.txt
reading manifest file 'clrnet.egg-info\SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'clrnet.egg-info\SOURCES.txt'
running build_ext
C:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\utils\cpp_extension.py:311: UserWarning:

                           !! WARNING !!

!!!
Your compiler (cl 19.00.24210) may be ABI-incompatible with PyTorch!
Please use a compiler that is ABI-compatible with GCC 5.0 and above.
See ABI Policy and Guidelines.

See Instructions for installing GCC >= 4.9 for PyTorch Extensions · GitHub
for instructions on how to install GCC 5 or higher.
!!!

                          !! WARNING !!

warnings.warn(ABI_INCOMPATIBILITY_WARNING.format(compiler))
building 'clrnet.ops.nms_impl' extension
C:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\cuda\__init__.py:104: UserWarning:
NVIDIA GeForce RTX 3080 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the NVIDIA GeForce RTX 3080 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Emitting ninja build file D:\Li_Yong_Hang\CLRNet-main\CLRNet-main\build\temp.win-amd64-3.8\Release\build.ninja…
Compiling objects…
Allowing ninja to set a default number of workers… (overridable by setting the environment variable MAX_JOBS=N)
[1/1] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\nvcc --generate-dependencies-with-compile --dependency-output D:\Li_Yong_Hang\CLRNet-main\CLRNet-main\build\temp.win-amd64-3.8\Release\clrnet/ops/csrc\nms_kernel.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\include -IC:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\include\TH -IC:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -IC:\Users\ps.conda\envs\clrnet\include -IC:\Users\ps.conda\envs\clrnet\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c D:\Li_Yong_Hang\CLRNet-main\CLRNet-main\clrnet\ops\csrc\nms_kernel.cu -o D:\Li_Yong_Hang\CLRNet-main\CLRNet-main\build\temp.win-amd64-3.8\Release\clrnet/ops/csrc\nms_kernel.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=nms_impl -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75
FAILED: D:/Li_Yong_Hang/CLRNet-main/CLRNet-main/build/temp.win-amd64-3.8/Release/clrnet/ops/csrc/nms_kernel.obj
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\nvcc --generate-dependencies-with-compile --dependency-output D:\Li_Yong_Hang\CLRNet-main\CLRNet-main\build\temp.win-amd64-3.8\Release\clrnet/ops/csrc\nms_kernel.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\include -IC:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\include\TH -IC:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -IC:\Users\ps.conda\envs\clrnet\include -IC:\Users\ps.conda\envs\clrnet\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c D:\Li_Yong_Hang\CLRNet-main\CLRNet-main\clrnet\ops\csrc\nms_kernel.cu -o D:\Li_Yong_Hang\CLRNet-main\CLRNet-main\build\temp.win-amd64-3.8\Release\clrnet/ops/csrc\nms_kernel.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=nms_impl -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75
nvcc fatal : Unknown option '-generate-dependencies-with-compile'
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "C:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\utils\cpp_extension.py", line 1667, in _run_ninja_build
subprocess.run(
File "C:\Users\ps.conda\envs\clrnet\lib\subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "setup.py", line 100, in <module>
setup(name='clrnet',
File "C:\Users\ps.conda\envs\clrnet\lib\site-packages\setuptools\__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\command\build.py", line 135, in run
self.run_command(cmd_name)
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\ps.conda\envs\clrnet\lib\site-packages\setuptools\command\build_ext.py", line 79, in run
_build_ext.run(self)
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\command\build_ext.py", line 340, in run
self.build_extensions()
File "C:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\utils\cpp_extension.py", line 708, in build_extensions
build_ext.build_extensions(self)
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\command\build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\command\build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "C:\Users\ps.conda\envs\clrnet\lib\site-packages\setuptools\command\build_ext.py", line 202, in build_extension
_build_ext.build_extension(self, ext)
File "C:\Users\ps.conda\envs\clrnet\lib\distutils\command\build_ext.py", line 528, in build_extension
objects = self.compiler.compile(sources,
File "C:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\utils\cpp_extension.py", line 681, in win_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "C:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\utils\cpp_extension.py", line 1354, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "C:\Users\ps.conda\envs\clrnet\lib\site-packages\torch\utils\cpp_extension.py", line 1683, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

It looks like you installed a version of PyTorch that was built with CUDA < 11.0, which is not compatible with an RTX 3080: it is an sm_86 GPU that requires CUDA >= 11.0. The CUDA extension is also being compiled with an nvcc packaged with CUDA 10.1, which is likewise too old (hence the "Unknown option" error; that dependency-generation flag was only added to nvcc in CUDA 10.2).
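To make the two failure modes concrete, here is a small standalone sketch; the helper names are mine, not PyTorch's, and the checks are a simplification of what PyTorch and nvcc actually do. The assumed facts are that nvcc gained `--generate-dependencies-with-compile` in CUDA 10.2, and that the warning above fires because the wheel ships no sm_86 binary.

```python
def nvcc_supports_dep_with_compile(cuda_version: str) -> bool:
    """True if this nvcc accepts --generate-dependencies-with-compile
    (the flag first appeared in the CUDA 10.2 toolkit)."""
    major, minor = (int(x) for x in cuda_version.split(".")[:2])
    return (major, minor) >= (10, 2)

def wheel_has_binary_for(gpu_sm: int, arch_list: list) -> bool:
    """Simplified form of PyTorch's compatibility warning: does the
    wheel ship a compiled (sm_XX) binary matching the GPU's arch?"""
    supported = {int(a.split("_")[1]) for a in arch_list if a.startswith("sm_")}
    return gpu_sm in supported

# The environment from the log: CUDA 10.1 toolkit, pre-CUDA-11 PyTorch wheel
arch_list = ["sm_37", "sm_50", "sm_60", "sm_61", "sm_70", "sm_75", "compute_37"]
print(nvcc_supports_dep_with_compile("10.1"))  # False -> the nvcc fatal error
print(wheel_has_binary_for(86, arch_list))     # False -> the sm_86 warning
```

Both checks fail for this setup, which is why upgrading either component alone is not enough: the toolkit and the PyTorch build both have to move to a version that knows about sm_86.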

I would recommend upgrading both your version of PyTorch (e.g., via `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`) and the CUDA toolkit on your system (e.g., to CUDA 11.8).
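In case it helps, the upgrade plus a quick sanity check might look like the following. cu118 is just an example index; pick the wheel matching whichever CUDA 11+ toolkit you install:

```shell
# Install a CUDA 11.8 build of PyTorch (adjust the index URL to your toolkit)
pip3 install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# PyTorch's bundled CUDA version and arch list should now include sm_86
python -c "import torch; print(torch.version.cuda, torch.cuda.get_arch_list())"

# The system nvcc used to build the extension must also be 10.2+ (11.x here)
nvcc --version
```

If `torch.cuda.get_arch_list()` lists `sm_86` and `nvcc --version` reports 11.x, both the arch warning and the unknown-option error should go away when you rerun `python setup.py build develop`.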

Hi eqy, thank you for your reply and professional recommendations. Indeed, I had noticed that my CUDA version is too old, and I would like to update CUDA and PyTorch as you suggest.
But the project on GitHub requires CUDA 10.2 (Turoad/CLRNet: PyTorch implementation of the paper "CLRNet: Cross Layer Refinement Network for Lane Detection", CVPR 2022, on github.com).
So in this situation I am stuck: if I install newer versions of CUDA and PyTorch, I am afraid the project cannot be tested correctly; on the other hand, if I follow the README.md and keep CUDA 10.2, it will not be compatible with an RTX 3080.
I am quite confused. Could you give me some advice on how to satisfy both constraints?
I am looking forward to your reply.
Best wishes

The README of the repository only states that it was tested with CUDA 10.2; it does not say that this is a strict requirement. I would not expect any real incompatibility between the repo you linked and CUDA 11+, since the only CUDA code appears to be some custom kernels implementing NMS (non-maximum suppression). However, CUDA 10.x will definitely not work with an RTX 3080, so from my perspective the only option is to upgrade to CUDA 11+ and check for any compatibility issues afterward (which should be minor, if any).

Thank you eqy for your useful advice. I followed your steps and the trouble is solved.