Does torch.ops.aten.convolution_backward use cudnn if available?

I understand this is a generated function, so I’m having a hard time seeing where it is implemented and confirming whether it would use cuDNN. In general, what is a good way to navigate the source code, and how can I find out whether this op uses cuDNN?

A plain GitHub search would point you to this code, which is used to dispatch to different backends.
Running grep -r convolution_backward could also be a good approach to find the function definition.
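If you want to experiment with the op directly, here is a hedged sketch of calling it from Python. The argument order follows the op's schema (grad_output, input, weight, bias_sizes, stride, padding, dilation, transposed, output_padding, groups, output_mask); the shapes and the CUDA device are just example assumptions.

```python
import torch

# Example sketch: call torch.ops.aten.convolution_backward directly.
# Shapes/device are arbitrary; bias is omitted, so bias_sizes is None and
# the last output_mask entry (grad_bias) is disabled.
x = torch.randn(8, 3, 32, 32, device="cuda")
w = torch.randn(16, 3, 3, 3, device="cuda")
out = torch.nn.functional.conv2d(x, w, stride=1, padding=1)
grad_out = torch.randn_like(out)

grad_input, grad_weight, _ = torch.ops.aten.convolution_backward(
    grad_out, x, w,
    None,                  # bias_sizes (no bias used here)
    [1, 1],                # stride
    [1, 1],                # padding
    [1, 1],                # dilation
    False,                 # transposed
    [0, 0],                # output_padding
    1,                     # groups
    [True, True, False],   # output_mask: (grad_input, grad_weight, grad_bias)
)
print(grad_input.shape, grad_weight.shape)
```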

A proper way to verify whether cuDNN is used is to profile the code and check which kernels are launched.
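A minimal sketch of this approach, assuming a CUDA build of PyTorch with cuDNN available (layer sizes are arbitrary example values): profile a convolution forward/backward pass and look at the kernel names, since names containing "cudnn" indicate that the cuDNN backend was picked for that call.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Sketch: run a conv forward/backward pass under the profiler and inspect
# the launched kernels. Kernels with "cudnn" in their names come from cuDNN.
conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1).cuda()
x = torch.randn(8, 3, 224, 224, device="cuda", requires_grad=True)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    out = conv(x)
    out.sum().backward()

# Sort by CUDA time and look for cudnn kernels in the backward pass.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```

To compare against the native fallback kernels, you can wrap the same code in the torch.backends.cudnn.flags(enabled=False) context manager (or set torch.backends.cudnn.enabled = False) and profile again.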


Thanks, this is very helpful! Are there any similar implementations for pooling gradients that use cuDNN?

Edit: Using grep, it seems the only mention of cudnnPoolingBackward is in the Caffe2 code. Is there any reason it isn’t used (i.e., is PyTorch’s native CUDA implementation just as performant)?

cuDNN does provide pooling layers, but if I’m not mistaken PyTorch only uses them in the quantization backend; PyTorch eager mode uses a native pooling implementation.
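As above, a quick way to double-check this on your own build is to profile a pooling backward pass and inspect the kernel names. This is only a sketch, assuming a CUDA build, with arbitrary example shapes.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Sketch: profile a max-pool forward/backward pass and inspect kernel names.
# Native pooling kernels will not contain "cudnn" in their names, whereas
# cuDNN kernels typically do.
pool = torch.nn.MaxPool2d(kernel_size=2)
x = torch.randn(8, 16, 112, 112, device="cuda", requires_grad=True)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    y = pool(x)
    y.sum().backward()

print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```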