Handle bias when doing convolution with cudnn backend

Hi there,
I want to write a custom convolution_backward_overrideable function with the cudnn backend.
It should return a tuple via std::make_tuple(grad_input, grad_weight, grad_bias).

The input and weight gradients can be handled with cudnn_convolution_backward_input and cudnn_convolution_backward_weight, roughly as sketched below.
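Here is roughly what I have in mind (just a sketch; the exact signatures of convolution_backward_overrideable and the at::cudnn_convolution_backward_* helpers differ between PyTorch versions, so the argument lists below may need adjusting for your build):

```cpp
#include <ATen/ATen.h>
#include <tuple>

// Sketch only: the flags/argument order of the cudnn backward helpers vary across
// PyTorch releases, so check ATen/NativeFunctions.h in your build.
std::tuple<at::Tensor, at::Tensor, at::Tensor> convolution_backward_overrideable(
    const at::Tensor& grad_output, const at::Tensor& input, const at::Tensor& weight,
    at::IntArrayRef stride, at::IntArrayRef padding, at::IntArrayRef dilation,
    bool transposed, at::IntArrayRef output_padding, int64_t groups,
    std::array<bool, 3> output_mask) {
  at::Tensor grad_input, grad_weight, grad_bias;
  const bool benchmark = false;
  const bool deterministic = false;
  // (A transposed convolution would need the corresponding transpose helpers.)

  if (output_mask[0]) {
    // Note: the cudnn helpers expect (padding, stride, dilation), while the
    // overrideable signature passes (stride, padding, dilation).
    grad_input = at::cudnn_convolution_backward_input(
        input.sizes(), grad_output, weight,
        padding, stride, dilation, groups, benchmark, deterministic);
  }
  if (output_mask[1]) {
    grad_weight = at::cudnn_convolution_backward_weight(
        weight.sizes(), grad_output, input,
        padding, stride, dilation, groups, benchmark, deterministic);
  }
  // grad_bias is the open question below.
  return std::make_tuple(grad_input, grad_weight, grad_bias);
}
```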

But according to #31524, handling the bias inside the convolution is deprecated. However, I could not find any examples or docs on how to handle the bias in this case.

Is there any example of how to handle the bias? Or is simply returning a zero bias gradient acceptable?

You can add the bias directly to the output tensor as seen here.
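In other words, run the convolution without a bias and broadcast-add the bias afterwards. A minimal sketch, assuming an NCHW output and that `bias` holds one value per output channel (the exact at::cudnn_convolution signature again depends on your PyTorch version):

```cpp
// Forward pass: cudnn convolution without bias, then a broadcasted add over the
// channel dimension.
at::Tensor output = at::cudnn_convolution(
    input, weight, padding, stride, dilation, groups, benchmark, deterministic);
if (bias.defined()) {
  output.add_(bias.reshape({1, bias.size(0), 1, 1}));
}
```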

Pardon me, I don’t quite understand this.
I want to return a tuple with three variables (grad_input, grad_weight, grad_bias), but adding the bias to the output seems to mean handling the bias outside of this custom convolution_backward_overrideable, which is a little different from what I expected.
Could you please explain a little more about how to handle this? Thank you very much!

The bias is handled outside of the cudnn call, as it’s faster to let PyTorch add it explicitly and calculate its gradients.
If you want to use the cudnn backward call, you can still use cudnnConvolutionBackwardFilter and cudnnConvolutionBackwardData manually.
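The bias gradient itself does not need cudnn at all: it is just a reduction of grad_output over every dimension except the channel dimension. A minimal sketch, assuming the usual batch-first, channels-second layout (the helper name conv_bias_backward is only for illustration):

```cpp
#include <ATen/ATen.h>
#include <vector>

// grad_bias is the sum of grad_output over the batch and all spatial dimensions,
// leaving one value per output channel (same shape as the bias).
at::Tensor conv_bias_backward(const at::Tensor& grad_output) {
  std::vector<int64_t> reduce_dims{0};               // batch dimension
  for (int64_t d = 2; d < grad_output.dim(); ++d) {  // spatial dimensions
    reduce_dims.push_back(d);
  }
  return grad_output.sum(reduce_dims);
}
```

In your convolution_backward_overrideable this would fill the grad_bias slot of the returned tuple, while grad_input and grad_weight still come from the cudnn routines.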