Lambda inline function

There’s an inline function f = lambda x: 1 + n(x), where n is a neural net defined by n = nn.Sequential(nn.Linear(1, 10), nn.Sigmoid()).
n needs to be trained, and f is called constantly during training. Does f evolve as n is trained, or is f stuck with a frozen, untrained copy of n?

Yes, it evolves: the lambda closes over the name n, not a frozen copy, so every call to f uses the current parameters. You can verify this by watching the weights and gradients update:

import torch
import torch.nn as nn

n = nn.Sequential(nn.Linear(1, 10), nn.Sigmoid())
optimizer = torch.optim.SGD(n.parameters(), lr=1.)

f = lambda x: 1+n(x)

for _ in range(3):
    # Weight magnitude changes each iteration, so n is being trained
    print(n[0].weight.abs().sum())
    optimizer.zero_grad()

    x = torch.randn(1, 1)
    out = f(x)  # f sees the current state of n
    out.mean().backward()

    # Non-zero gradients flow through the lambda into n
    print(n[0].weight.grad.abs().sum())
    optimizer.step()
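Conversely, if you actually wanted f to hold a frozen snapshot of n, you would have to copy the module explicitly. A minimal sketch (the names frozen and f_frozen are my own, not from the thread):

```python
import copy

import torch
import torch.nn as nn

n = nn.Sequential(nn.Linear(1, 10), nn.Sigmoid())

# Snapshot the current parameters; later training of n won't touch this copy
frozen = copy.deepcopy(n)
f_frozen = lambda x: 1 + frozen(x)
```

After this, optimizer steps on n.parameters() leave frozen (and hence f_frozen) unchanged.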

Thank you, that’s a really nice proof that a lambda function is dynamic.
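For completeness, this is plain Python closure semantics, not anything PyTorch-specific: a lambda looks up the names it closes over at call time, so reassigning or mutating them changes what the lambda computes. A minimal sketch (example values are my own):

```python
scale = 2
g = lambda x: scale * x  # closes over the *name* scale, not its current value
print(g(3))  # 6

scale = 10               # rebinding the name is visible to g
print(g(3))  # 30
```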