How to reach the exact source code in PyTorch?

I wish to implement Fourier-domain convolution, for which I wanted to compare the complexity of my approach against that of the built-in conv2d layer.
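To make the comparison concrete, this is roughly the Fourier-domain version I have in mind (a minimal sketch of my own; the name fft_conv2d is mine, and it only handles the no-padding, stride-1, no-bias case):

    import torch

    def fft_conv2d(x, w):
        # x: (N, C_in, H, W), w: (C_out, C_in, kH, kW); no padding, stride 1, no bias.
        N, C_in, H, W = x.shape
        C_out, _, kH, kW = w.shape
        # Flip the kernel so that pointwise multiplication in the frequency domain
        # gives cross-correlation (what Conv2d computes) rather than convolution.
        w_flipped = torch.flip(w, dims=(2, 3))
        X = torch.fft.rfft2(x, s=(H, W))          # (N, C_in, H, W//2 + 1)
        K = torch.fft.rfft2(w_flipped, s=(H, W))  # (C_out, C_in, H, W//2 + 1)
        # Multiply in the frequency domain and sum over the input channels.
        Y = torch.einsum('nihw,oihw->nohw', X, K)
        y = torch.fft.irfft2(Y, s=(H, W))
        # Keep only the alias-free "valid" region, matching Conv2d with padding=0.
        return y[..., kH - 1:, kW - 1:]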
In the docs, the following formula is given for computing a convolutional layer's output -

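For reference, the formula given there (in the torch.nn.Conv2d docs, ignoring stride and padding details) is, roughly:

    \text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k=0}^{C_{\text{in}}-1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)

where \star is the 2D cross-correlation operator.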
To get to the source code, I followed the 'source' button to the right of the formula, which led to the following page -

This code uses functional.conv2d, but the definition on the functional conv2d page does not have a link to the source code.
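(If it helps, this is how I convinced myself that there is no Python-level body behind it; on my setup conv2d is a C-implemented builtin, which is presumably why the docs page has no 'source' link. The exact printed type may differ across versions:)

    import torch.nn.functional as F

    # F.conv2d is a builtin implemented in C++, not a Python function,
    # so there is no Python source for a 'source' link to point at.
    print(type(F.conv2d))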

Some answers online suggested this is the entry point to the code for conv2d -

But this link further points to the mkldnn and cudnn implementations, which may be implemented in the following files -


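(As an aside, this is how I checked which of those backends my build can actually dispatch to; both calls are part of the public torch.backends API:)

    import torch

    # Report which low-level convolution backends are available in this build.
    print(torch.backends.mkldnn.is_available())
    print(torch.backends.cudnn.is_available())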
I tried to probe the mkldnn file further, but it points to an algorithm called ideep::algorithm::convolution_direct, which seems to be defined in an Intel GitHub repository.

So I could not get to any code that shows the implementation of the convolution function directly, i.e., in terms of plain multiplications and additions.

Could anyone kindly suggest whether, and how, we can get to the base code in PyTorch?
Or can we assume that the implementation directly uses the formula given in the torch.nn.Conv2d docs to compute the convolution?
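To make the question concrete: is it, at the lowest level, something like this naive reference (my own sketch of the docs formula, not PyTorch's actual code; no padding, stride 1)?

    import torch

    def naive_conv2d(x, w, b):
        # x: (N, C_in, H, W), w: (C_out, C_in, kH, kW), b: (C_out,)
        # Plain multiply-and-add version of the Conv2d formula; no padding, stride 1.
        N, C_in, H, W = x.shape
        C_out, _, kH, kW = w.shape
        out = torch.zeros(N, C_out, H - kH + 1, W - kW + 1, dtype=x.dtype)
        for n in range(N):
            for o in range(C_out):
                for i in range(H - kH + 1):
                    for j in range(W - kW + 1):
                        # Slide the kernel, multiply elementwise, and accumulate.
                        patch = x[n, :, i:i + kH, j:j + kW]
                        out[n, o, i, j] = (patch * w[o]).sum() + b[o]
        return out

This should match torch.nn.functional.conv2d(x, w, b) up to floating-point tolerance, but I do not know whether the shipped kernels are actually organised this way.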
Thanks in advance.


Like most open-source libraries that PyTorch uses, this one is "vendored" as a git submodule (cloning with --recursive gets you these too, or use git submodule update --init --recursive and git submodule sync, or some combination of the two). The code you are most likely interested in is then under third_party/ideep/mkl-dnn/src/cpu/ in your PyTorch checkout.

Best regards



I should perhaps open a new post, but I just want to ask whether you can help me find where the cuDNN version of at::cudnn_convolution is defined. I searched the whole project but found no definition. I know it will eventually call into the cuDNN library's APIs, but I need to find the missing link. I guess it might be auto-generated code?
Here is where it gets called:

    } else {
      // Hand off to the cuDNN-backed implementation.
      output = at::cudnn_convolution(
          input.contiguous(backend_memory_format), weight,
          params.padding, params.stride, params.dilation, params.groups, params.benchmark, params.deterministic, params.allow_tf32);
      if (bias.defined()) {
        output.add_(reshape_bias(input.dim(), bias));
      }
    }

Thank you very much.

The cuDNN calls for the currently used v7 API can be found here.

Thanks a lot. Now I can pull them all together.