How to handle convolution backward bias in cuDNN backend

From the source code, I found that PyTorch removed bias handling inside cudnn_convolution_backward because of #31524. The PR just states that the bias should be handled outside cudnn_convolution_backward.

The thing is, I could not find anywhere that explains how to handle the bias (or should I handle it manually?).
Could someone please tell me how to handle it? If I just handle it the same way as the previous implementation, what is the point of moving it out of the cuDNN implementation?

The advantages are mentioned in the PR:

If you want to copy the current approach used in PyTorch, you could add the bias manually as seen here.
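As a rough sketch (not PyTorch's actual internals), the manual bias addition is just a broadcast add over the channel dimension of the convolution output. The idea, illustrated with NumPy on an NCHW tensor:

```python
import numpy as np

# conv_out stands in for the output of a bias-free convolution, NCHW layout
N, C, H, W = 2, 3, 4, 4
conv_out = np.arange(N * C * H * W, dtype=np.float64).reshape(N, C, H, W)
bias = np.array([1.0, 2.0, 3.0])  # one scalar per output channel

# Manual bias addition: reshape to (1, C, 1, 1) so the bias broadcasts
# across the batch and spatial dimensions.
out = conv_out + bias.reshape(1, C, 1, 1)

# Every element of channel c is shifted by bias[c]
print(out[0, 1, 0, 0] - conv_out[0, 1, 0, 0])  # 2.0
```

In PyTorch itself the same shape trick applies to the tensor returned by the bias-free cuDNN convolution call.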

Thanks for your help! I want to ask one more thing: what about the backward pass for the bias?

The bias should also be updated in the backward pass. In the former implementation, PyTorch used cudnn_convolution_backward_bias, but it has since been removed.
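For context, the bias backward is just a reduction of the output gradient over every dimension except the channel dimension, because the bias was broadcast over those dimensions in the forward pass. A minimal NumPy sketch of that reduction (assuming NCHW layout):

```python
import numpy as np

# grad_output stands in for the gradient w.r.t. the convolution output, NCHW
N, C, H, W = 2, 3, 4, 4
rng = np.random.default_rng(0)
grad_output = rng.standard_normal((N, C, H, W))

# Since bias[c] was added to every (n, h, w) position of channel c,
# its gradient is the sum over the batch and spatial dimensions.
grad_bias = grad_output.sum(axis=(0, 2, 3))

print(grad_bias.shape)  # (3,): one gradient entry per output channel
```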


The manual bias addition will be tracked by Autograd, and the backward pass will therefore call into the captured bias backward function automatically.
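To see why autograd recovers this gradient for free: the manual bias add is an ordinary broadcasted op, so the chain rule yields exactly the sum-reduction described earlier. A hedged NumPy check (the squared-sum loss here is an arbitrary example, not anything from the thread) comparing the analytic bias gradient against finite differences:

```python
import numpy as np

N, C, H, W = 2, 3, 4, 4
rng = np.random.default_rng(1)
conv_out = rng.standard_normal((N, C, H, W))
bias = rng.standard_normal(C)

def loss(b):
    # Forward pass: manual bias add, then a toy scalar loss
    out = conv_out + b.reshape(1, C, 1, 1)
    return (out ** 2).sum()

# Chain rule by hand: dL/d(out) = 2 * out, then the bias gradient is
# the sum of that over the batch and spatial dimensions.
grad_output = 2.0 * (conv_out + bias.reshape(1, C, 1, 1))
grad_bias = grad_output.sum(axis=(0, 2, 3))

# Central finite-difference check, one bias entry at a time
eps = 1e-6
fd = np.array([
    (loss(bias + eps * np.eye(C)[i]) - loss(bias - eps * np.eye(C)[i])) / (2 * eps)
    for i in range(C)
])
assert np.allclose(grad_bias, fd, atol=1e-4)
```

This is the same computation an autograd engine performs once the bias add is recorded on the tape.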