I am now a bit confused. I have explained my use case in the Appendix, but at this point I am more worried about my understanding of PyTorch than about this particular case. So kindly guide me on the following general question; if I get this point right, I shall manage the exact implementation of the paper listed in the Appendix myself.
In the forward definition of a network model, say there is some operation on an activation ‘x’:
```python
def forward(self, x):
    ...
    x = x + 1    # or, say, torch.log(x) or 5 * x
    ...
```
I am sure backpropagation will happen easily through x in this case.
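For example, here is a minimal standalone sketch of what I mean (nothing to do with the paper; the shape and the values are arbitrary):

```python
import torch

# leaf tensor with values in (0, 1), so x + 1 stays positive and log is defined
x = torch.rand(4, requires_grad=True)
y = 5 * torch.log(x + 1)    # the same kind of elementwise ops as above
y.sum().backward()
print(x.grad)               # populated, so backprop went through x
```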
Then why should it not happen through the weights and bias in the following case?
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        conv = nn.Conv2d(1, 6, 3)
        # register the conv's weight and bias as parameters of this module
        self.weight = nn.Parameter(conv.weight)
        self.bias = nn.Parameter(conv.bias)

    def modify_conv_weights(self, weights, bias):
        weights = weights + 1
        bias = 15 + torch.log(bias)
        return weights, bias

    def forward(self, x):
        x = F.conv2d(x, self.weight, self.bias)
        # modify the stored weights and bias after the convolution
        self.weight, self.bias = self.modify_conv_weights(self.weight, self.bias)
        return x
```
My understanding says that PyTorch should be able to backpropagate easily in this case as well.
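To make my question concrete, here is a self-contained toy variant (the ToyConv name, the random input, and the abs() guard inside the log are my own additions, not from the paper). As far as I understand, when the modified weights and bias are the ones actually used in the convolution, gradients should reach self.weight and self.bias:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyConv(nn.Module):
    def __init__(self):
        super().__init__()
        conv = nn.Conv2d(1, 6, 3)
        self.weight = nn.Parameter(conv.weight.detach().clone())
        self.bias = nn.Parameter(conv.bias.detach().clone())

    def forward(self, x):
        # the same kind of ops as modify_conv_weights, applied to the parameters
        w = self.weight + 1
        b = 15 + torch.log(self.bias.abs() + 1e-6)   # abs() just keeps log defined
        return F.conv2d(x, w, b)

model = ToyConv()
out = model(torch.randn(1, 1, 8, 8))
out.sum().backward()
print(model.weight.grad is not None, model.bias.grad is not None)   # True True
```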
Yours sincerely
APPENDIX
Kindly ignore the exact details mentioned in this paper; I am confident I shall implement the new layer defined in it once I understand the above issue correctly. In this paper the authors propose a new conv layer such that, after each forward pass, the weights are modified so that the central weight is always positive, all other weights are negative, and they all sum to 0.
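For what it is worth, here is a rough sketch of the kind of constraint I have in mind, based only on my one-sentence description above and not on the paper's exact equations (the layer name, the mask trick, and the padding choice are all my own assumptions). I realise the paper modifies the weights after the forward pass; this sketch instead computes the constrained kernel from a raw parameter inside forward, which is just one way I can imagine keeping everything differentiable:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterConstrainedConv(nn.Module):
    """Rough sketch (my own naming and formulation, not the paper's): centre tap
    positive, all other taps non-positive, every kernel sums to zero."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.c = k // 2   # index of the central tap

    def forward(self, x):
        center_mask = torch.zeros_like(self.raw_weight)
        center_mask[:, :, self.c, self.c] = 1.0
        off = -self.raw_weight.abs() * (1.0 - center_mask)   # off-centre taps <= 0
        center = -off.sum(dim=(2, 3), keepdim=True)          # >= 0, balances each kernel
        w = off + center * center_mask                       # every kernel sums to 0
        return F.conv2d(x, w, self.bias, padding=self.c)

layer = CenterConstrainedConv(1, 6)
out = layer(torch.randn(1, 1, 8, 8))
out.sum().backward()
print(layer.raw_weight.grad is not None)   # True: gradients flow through the constraint
```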