Errors with Compiling and Importing cudnn_extension

I am trying to compile cudnn_extension.cpp.

The provided setup.py file does not refer to this extension, so I created my own setup.py, copying from the Custom C++ and CUDA Extensions tutorial:

setup.py

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
  name='cudnn_extension',
  ext_modules=[
    CppExtension(
      'cudnn_extension', 
      ['cudnn_extension.cpp'], 
      )
    ],
  cmdclass={
      'build_ext': BuildExtension
  })

Problem

I am using Google Colab, and I get the following error:

/content/scripts/pytorch_extensions/cpp/cudnn_extension# python setup.py install
running install
running bdist_egg
running egg_info
writing cudnn_extension.egg-info/PKG-INFO
writing dependency_links to cudnn_extension.egg-info/dependency_links.txt
writing top-level names to cudnn_extension.egg-info/top_level.txt
/usr/local/lib/python3.7/dist-packages/torch/utils/cpp_extension.py:370: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
  warnings.warn(msg.format('we could not find ninja.'))
reading manifest file 'cudnn_extension.egg-info/SOURCES.txt'
writing manifest file 'cudnn_extension.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'cudnn_extension' extension
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-Y7dWVB/python3.7-3.7.12=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-Y7dWVB/python3.7-3.7.12=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.7/dist-packages/torch/include -I/usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.7/dist-packages/torch/include/TH -I/usr/local/lib/python3.7/dist-packages/torch/include/THC -I/usr/include/python3.7m -c cudnn_extension.cpp -o build/temp.linux-x86_64-3.7/cudnn_extension.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=cudnn_extension -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /usr/local/lib/python3.7/dist-packages/torch/include/ATen/Parallel.h:140:0,
                 from /usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                 from /usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                 from /usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                 from /usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
                 from /usr/local/lib/python3.7/dist-packages/torch/include/torch/extension.h:4,
                 from cudnn_extension.cpp:10:
/usr/local/lib/python3.7/dist-packages/torch/include/ATen/ParallelOpenMP.h:87:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
 #pragma omp parallel for if ((end - begin) >= grain_size)
 
In file included from /usr/include/cublas_v2.h:65:0,
                 from /usr/local/lib/python3.7/dist-packages/torch/include/ATen/cuda/Exceptions.h:3,
                 from cudnn_extension.cpp:12:
/usr/include/cublas_api.h:72:10: fatal error: driver_types.h: No such file or directory
 #include "driver_types.h"
          ^~~~~~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
/content/scripts/pytorch_extensions/cpp/cudnn_extension# 
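Since driver_types.h ships with the CUDA toolkit, it looks like the compiler simply cannot see the toolkit's include directory. A quick way to search for the header and discover which directory is missing (the /usr/local/cuda* location is an assumption about the Colab image):

```python
import glob

# driver_types.h ships with the CUDA toolkit; search the usual install
# locations recursively to find the include directory gcc is missing.
matches = glob.glob("/usr/local/cuda*/**/driver_types.h", recursive=True)
print(matches)
```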

Am I missing an environment variable?

These are the ones defined in my session.

/content/scripts/pytorch_extensions/cpp/cudnn_extension# echo $
$_                             $GCS_READ_CACHE_BLOCK_SIZE_MB  $NVIDIA_REQUIRE_CUDA
$BASH                          $GLIBCPP_FORCE_NEW             $NVIDIA_VISIBLE_DEVICES
$BASH_ALIASES                  $GLIBCXX_FORCE_NEW             $OLDPWD
$BASH_ARGC                     $GROUPS                        $OPTERR
$BASH_ARGV                     $HISTCMD                       $OPTIND
$BASH_CMDS                     $HISTCONTROL                   $OSTYPE
$BASH_COMMAND                  $HISTFILE                      $PATH
$BASH_LINENO                   $HISTFILESIZE                  $PIPESTATUS
$BASHOPTS                      $HISTSIZE                      $PPID
$BASHPID                       $HOME                          $PS1
$BASH_SOURCE                   $HOSTNAME                      $PS2
$BASH_SUBSHELL                 $HOSTTYPE                      $PS4
$BASH_VERSINFO                 $IFS                           $PWD
$BASH_VERSION                  $LANG                          $PYTHONPATH
$CLOUDSDK_CONFIG               $LAST_FORCED_REBUILD           $PYTHONWARNINGS
$CLOUDSDK_PYTHON               $LD_LIBRARY_PATH               $RANDOM
$COLAB_GPU                     $LD_PRELOAD                    $SECONDS
$COLUMNS                       $LESSCLOSE                     $SHELL
$COMP_WORDBREAKS               $LESSOPEN                      $SHELLOPTS
$CUDA_VERSION                  $LIBRARY_PATH                  $SHLVL
$CUDNN_VERSION                 $LINENO                        $TBE_CREDS_ADDR
$DATALAB_SETTINGS_OVERRIDES    $LINES                         $TBE_EPHEM_CREDS_ADDR
$DEBIAN_FRONTEND               $LS_COLORS                     $TERM
$DIRSTACK                      $MACHTYPE                      $TF_FORCE_GPU_ALLOW_GROWTH
$__EGL_VENDOR_LIBRARY_DIRS     $MAILCHECK                     $TMUX
$ENV                           $NCCL_VERSION                  $TMUX_PANE
$EUID                          $NO_GCE_CHECK                  $UID
$GCE_METADATA_TIMEOUT          $NVIDIA_DRIVER_CAPABILITIES
/content/scripts/pytorch_extensions/cpp/cudnn_extension# echo $

Based on this solution, I added the CUDA include path:

setup(
  name='cudnn_extension',
  ext_modules=[
    CppExtension(
      'cudnn_extension',
      ['cudnn_extension.cpp'],
      )
    ],
  include_dirs=["/usr/local/cuda/targets/x86_64-linux/include"],
  cmdclass={
      'build_ext': BuildExtension
  })

I tried a fancier way to avoid hard-coding the location, but not even CUDA_PATH was defined.
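A sketch of that idea, falling back to the stock install location when neither CUDA_HOME nor CUDA_PATH is set (the fallback path is an assumption about the Colab image):

```python
import os

# Prefer an explicit CUDA_HOME/CUDA_PATH if the environment defines one;
# otherwise assume the conventional /usr/local/cuda symlink, whose include/
# directory typically points at targets/x86_64-linux/include.
cuda_home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH") or "/usr/local/cuda"
cuda_include = os.path.join(cuda_home, "include")
```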

With that change, compilation succeeds. However, there is still an error when importing the module:

>>> import torch
>>> import cudnn_extension
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/python3.7/dist-packages/cudnn_extension-0.0.0-py3.7-linux-x86_64.egg/cudnn_extension.cpython-37m-x86_64-linux-gnu.so: undefined symbol: cudnnGetErrorString
>>>
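My suspicion is that an undefined cudnn* symbol at import time means the .so was compiled but never linked against libcudnn. A sketch of what adding the link flag to my setup.py might look like (the library directory is a guess about where libcudnn.so lives on the Colab image):

```python
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='cudnn_extension',
    ext_modules=[
        CppExtension(
            'cudnn_extension',
            ['cudnn_extension.cpp'],
            include_dirs=['/usr/local/cuda/targets/x86_64-linux/include'],
            libraries=['cudnn'],  # link libcudnn so cudnnGetErrorString resolves
            library_dirs=['/usr/lib/x86_64-linux-gnu'],  # assumed libcudnn location
        )
    ],
    cmdclass={'build_ext': BuildExtension},
)
```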