Will changing the bias sign in Conv2d from positive to negative in the source code affect the backward pass?

# torch/nn/modules/conv.py (PyTorch 1.x)
import torch.nn.functional as F
from torch.nn.modules.conv import _ConvNd
from torch.nn.modules.utils import _pair


class Conv2d(_ConvNd):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1,
                 bias=True, padding_mode='zeros'):
        kernel_size = _pair(kernel_size)
        stride = _pair(stride)
        padding = _pair(padding)
        dilation = _pair(dilation)
        super(Conv2d, self).__init__(
            in_channels, out_channels, kernel_size, stride, padding, dilation,
            False, _pair(0), groups, bias, padding_mode)

    def conv2d_forward(self, input, weight):
        if self.padding_mode == 'circular':
            # Circular padding is applied explicitly via F.pad, so the
            # convolution itself runs with zero padding.
            expanded_padding = ((self.padding[1] + 1) // 2, self.padding[1] // 2,
                                (self.padding[0] + 1) // 2, self.padding[0] // 2)
            return F.conv2d(F.pad(input, expanded_padding, mode='circular'),
                            weight, self.bias, self.stride,
                            _pair(0), self.dilation, self.groups)
        return F.conv2d(input, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

    def forward(self, input):
        return self.conv2d_forward(input, self.weight)

This is the official Conv2d module (from torch/nn/modules/conv.py).

If we change the sign of “self.bias” in the final return statement of conv2d_forward, i.e.
from

return F.conv2d(input, weight, self.bias, self.stride,
                self.padding, self.dilation, self.groups)

to

return F.conv2d(input, weight, (-1 * self.bias), self.stride,
                self.padding, self.dilation, self.groups)

will this affect the backward function?

Given that the formula is conv_out = input *correl* weight + bias (*correl* being the correlation operator), this will not change ∂conv_out/∂weight or ∂conv_out/∂input. It will flip the sign of ∂conv_out/∂bias. It will also almost certainly change ∂loss/∂conv_out, since that gradient is now evaluated at a completely different output.
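
You can check this directly with autograd. A minimal sketch (the shapes and the sum loss are arbitrary choices for illustration, not from the thread):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 3, 8, 8)
weight = torch.randn(4, 3, 3, 3, requires_grad=True)
bias = torch.randn(4, requires_grad=True)

# Loss = sum of outputs, so d(loss)/d(conv_out) is all ones
# regardless of the output values.
F.conv2d(x, weight, bias).sum().backward()
grad_pos = bias.grad.clone()

bias.grad = None
F.conv2d(x, weight, -bias).sum().backward()
grad_neg = bias.grad.clone()

# With this particular loss the bias gradient flips sign exactly;
# with a loss that depends on the output values, d(loss)/d(conv_out)
# itself changes too, as noted above.
print(torch.allclose(grad_pos, -grad_neg))  # True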

Best regards

Thomas

Hey Thomas
Thanks for your quick response.
I realised that the sign of ∂conv_out/∂bias SHOULD change, according to the equation conv_out = input *correl* weight + bias.

But what I want to know is: if I make a new module Conv2d_modified with the changes above and then call backward() on the loss, will the REQUIRED sign change of ∂conv_out/∂bias actually take place?

Yeah, so PyTorch will track your computation and then backpropagate through it; you don't need to change anything in the backward yourself. The idea of the functional interface is that you can do wild computation around the standard functions.
For example, @ptrblck and I did this when we implemented StyleGAN's weight scaling for convolutions (and linear layers; the linear-layer code has commentary around it).
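
For the concrete case here, a Conv2d_modified could look like the sketch below (the class name comes from the question above; it assumes the default 'zeros' padding mode, so the circular branch is dropped):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Conv2d_modified(nn.Conv2d):
    # Identical to nn.Conv2d except that the bias enters the forward
    # with a flipped sign; autograd differentiates through the negation.
    def forward(self, input):
        bias = -self.bias if self.bias is not None else None
        return F.conv2d(input, self.weight, bias, self.stride,
                        self.padding, self.dilation, self.groups)

conv = Conv2d_modified(3, 4, kernel_size=3)
loss = conv(torch.randn(1, 3, 8, 8)).sum()
loss.backward()
print(conv.bias.grad)  # already includes the sign flip from the forward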

Best regards

Thomas

Thanks Thomas, I appreciate your quick responses. My doubt is cleared now.