Backpropagation in neural style transfer

I’m trying to understand how optimization and backpropagation work.

In this code from a neural style transfer example:

optimizer = optim.LBFGS([opt_img])
n_iter = [0]

while n_iter[0] <= max_iter:

    def closure():
        optimizer.zero_grad()
        out = vgg(opt_img, loss_layers)
        layer_losses = [weights[a] * loss_fns[a](A, targets[a]) for a, A in enumerate(out)]
        loss = sum(layer_losses)
        loss.backward()
        n_iter[0] += 1
        return loss

    optimizer.step(closure)

Do the pretrained weights of VGG get optimized?
(Or do both opt_img and the VGG weights get optimized by LBFGS?)
(Do the VGG weights stay the same across iterations, or do they get optimized and change from one iteration to the next?)

The weights of the VGG are not modified. The only things your optimizer is allowed to change are the parameters you give it as its first argument: in this case all it can change is opt_img.
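
To make that concrete, here is a minimal sketch of the usual setup (the variable names parallel the snippet above, but loading vgg19 from torchvision and explicitly freezing its parameters are assumptions on my part, not necessarily what the original code does):

import torch
from torchvision import models

# Pretrained VGG used only as a fixed feature extractor (assumed setup).
vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad = False  # freeze: no gradients stored for VGG weights

# The image being optimized is the only tensor with requires_grad=True
# that the optimizer ever sees.
opt_img = torch.randn(1, 3, 224, 224, requires_grad=True)

# LBFGS receives only opt_img, so optimizer.step() can never touch VGG's weights.
optimizer = torch.optim.LBFGS([opt_img])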


Great, thanks.
So does that mean loss.backward() computes gradients through the VGG layers all the way down to opt_img, and then optimizer.step() modifies only opt_img?

Yep. As far as I know, if the VGG parameters still have requires_grad=True, their gradients are calculated and accumulated as well, but for sure the only tensor modified is opt_img, because the optimizer only knows about that tensor. (If the VGG parameters are frozen with requires_grad=False, no gradients are stored for them at all.)
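
A quick way to convince yourself of this (a hypothetical check, reusing vgg, opt_img, optimizer, and closure from the snippets above) is to compare a VGG weight and the image before and after one step:

# Snapshot one VGG weight tensor and the image before an optimization step.
w_before = next(vgg.parameters()).detach().clone()
img_before = opt_img.detach().clone()

optimizer.step(closure)

# The VGG weight is bit-for-bit unchanged; only opt_img has been updated.
print(torch.equal(w_before, next(vgg.parameters())))  # True
print(torch.equal(img_before, opt_img.detach()))      # False, the image moved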


Now I understand. Thanks!