How to do a manual update on parameters after the optimizer step?

I am new to PyTorch and have little experience with this library.

So I was trying to replicate the results of Deep Dream, and I found an implementation on GitHub that works beautifully. However, I read some sources saying I should also apply a Gaussian filter during training (as a "regularization" method), and things started to fall apart after that.

def getFilter():
    ### some code here to generate the filter kernel (gaussian_kernel)
    gaussian_filter = nn.Conv2d(in_channels=channels, out_channels=channels,
                                kernel_size=kernel_size, groups=channels,
                                bias=False, padding=kernel_size // 2)

    gaussian_filter.weight.data = gaussian_kernel
    gaussian_filter.weight.requires_grad = False  # the filter itself is not trained
    return gaussian_filter

processed_image = getImage(input_path)
optimizer = SGD([processed_image], lr=12, weight_decay=1e-4)
filter_ = getFilter().cuda()
for i in range(1, steps + 1):
    optimizer.zero_grad()
    out = model(processed_image)
    loss = ...  # skipped the calculation
    loss.backward()
    optimizer.step()
    processed_image = filter_(processed_image)
    print(loss)

The print(loss) call is quite important here: the program executes that line exactly twice, and then loss.backward() raises "RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time."

The program runs perfectly fine without the processed_image = filter_(processed_image) line.

Does anyone know how to avoid the error?
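For reference, here is a minimal, self-contained sketch of the pattern I believe avoids the error: apply the filter under torch.no_grad() and copy the result back into the same leaf tensor instead of rebinding the name (rebinding turns processed_image into a non-leaf output attached to the previous iteration's graph). The model, filter, and image below are toy stand-ins, not the original code:

```python
import torch
import torch.nn as nn
from torch.optim import SGD

torch.manual_seed(0)

# Toy stand-ins (assumptions): a frozen 3x3 conv "model" and a uniform
# 3x3 blur playing the role of the Gaussian filter.
model = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))
for p in model.parameters():
    p.requires_grad_(False)

filter_ = nn.Conv2d(3, 3, kernel_size=3, padding=1, groups=3, bias=False)
filter_.weight.data.fill_(1.0 / 9.0)      # uniform blur as a stand-in kernel
filter_.weight.requires_grad_(False)

# The image being optimized must stay a leaf tensor.
processed_image = torch.rand(1, 3, 8, 8, requires_grad=True)
optimizer = SGD([processed_image], lr=0.1)

for i in range(3):
    optimizer.zero_grad()
    out = model(processed_image)
    loss = -out.norm()                    # stand-in for the real loss
    loss.backward()
    optimizer.step()
    # Key change: filter without building a graph, and write the result
    # back into the SAME leaf tensor in place instead of rebinding the name.
    with torch.no_grad():
        processed_image.copy_(filter_(processed_image))

print(processed_image.is_leaf)            # still a leaf across iterations
```

With the original rebinding, the second iteration's backward() walks back through the filter into the first iteration's already-freed graph, which is exactly the retain_graph error; copying in place keeps each iteration's graph independent.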