Can a kernel get involved in the gradient graph and updated this way?

This question might be naive, but it has puzzled me for a while. If we wrap a convolution function in a custom function and pass the parameters in, like this:

import torch
import torch.nn as nn
import torch.nn.functional as F

my_w = nn.Parameter(torch.randn(1, 1, 3, 3))
my_b = nn.Parameter(torch.randn(1))  # bias must be 1-D with shape (out_channels,)

def conv1(x, my_w, my_b):
    x = F.conv2d(x, weight=my_w, bias=my_b, stride=1, padding=1)
    return x

Can my_w and my_b get updated? Since they are global parameters, they do not really live inside this function.

If they can, why? If they can't, how can I fix it (without using the `global` keyword)?

Thank you guys!

Hi @SupremePROC

The tensor x returned by the function points to the recorded backward graph, which has my_w and my_b as leaf nodes. They are recorded in x's graph independently of their scope (global or local).
So their .grad will get populated when you call backward() (as they are leaf nodes).
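To illustrate, here is a minimal runnable check; the input shape, stride, and padding values are assumptions, since the original snippet left them unspecified:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

my_w = nn.Parameter(torch.randn(1, 1, 3, 3))
my_b = nn.Parameter(torch.randn(1))  # shape (out_channels,)

def conv1(x, my_w, my_b):
    # F.conv2d records the op in the autograd graph; my_w and my_b
    # become leaf nodes of that graph regardless of where they are defined.
    return F.conv2d(x, weight=my_w, bias=my_b, stride=1, padding=1)

x = torch.randn(1, 1, 5, 5)
out = conv1(x, my_w, my_b)
out.sum().backward()

print(my_w.grad is not None)  # True: gradient reached the weight
print(my_b.grad is not None)  # True: gradient reached the bias

# An optimizer step then updates them in place:
opt = torch.optim.SGD([my_w, my_b], lr=0.1)
opt.step()
```

The key point is that what matters is whether the parameters appear in the graph recorded by the forward pass, not their Python scope. Passing them to an optimizer (or registering them on an nn.Module) is what makes them get *updated*, as opposed to merely receiving gradients.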

Find more details about autograd here.