Building quick-and-dirty bindings to the cuDNN benchmark cache

I am trying to write an extension to get/set the cuDNN benchmark cache in ATen/native/cudnn/Conv_v7.cpp (pytorch/pytorch at commit 42e098323011eb8464b6238ffb011d0bb1b9ac2c). The idea is that, for a homogeneous fleet (same GPU, same system), we can skip the benchmarking time entirely by setting the cache to settings that are already known to be optimal.

To do so I need access to the cache, which is what I am trying to patch into PyTorch manually. At first I tried simply adding a header file to expose ATen/native/cudnn/Conv_v7.cpp, but it seems that nothing from the cuDNN directory makes it into the final include directory. Why?

To keep the changes to PyTorch itself to a minimum, I am using a cpp_extension to call into my newly created header. So far this works for anything in ATen/native, but not for its subdirectories.

For context, the TL;DR is:

  • How do I export the header files in ATen/native/cudnn when building PyTorch from source?
  • Is there a better way to get/set the BenchmarkCache that I haven't found yet?

Thanks