How to use torch.optim in my own problem

Here is my code:

```python
from scipy import stats
from scipy import signal
import torch
from skimage import io
import numpy as np
import matplotlib.pyplot as plt
import torch.nn.functional as F

def gkern(kernellen=21, nsig=3):
    """Return a normalized 2D Gaussian kernel."""
    x = np.linspace(-nsig, nsig, kernellen + 1)
    kern1d = np.diff(stats.norm.cdf(x))  # 1D Gaussian weights from CDF differences
    kern2d = np.outer(kern1d, kern1d)    # outer product -> 2D kernel
    return kern2d / kern2d.sum()         # normalize so weights sum to 1

path = 'C:/Users/zcw/Desktop/lena.jpg'
img = io.imread(path)  # assumed grayscale: convolve2d below expects a 2D array
kernel = gkern()
corrupt_img = signal.convolve2d(img, kernel, boundary='symm', mode='same')  # blur
noise = np.random.randn(img.shape[0], img.shape[1]) * img.max() * 0.02
corrupt_img = np.clip(corrupt_img + noise, 0, 255)  # add noise, clip to valid range

plt.imshow(corrupt_img)

## From here on, use PyTorch for the gradient updates
dtype = torch.float
device = torch.device("cpu")

corrupt_img = torch.tensor(corrupt_img, dtype=dtype, device=device)
corrected_im = torch.randn(1, 1, corrupt_img.shape[0], corrupt_img.shape[1], device=device, dtype=dtype, requires_grad=True)
gt = torch.tensor(img)  # ground truth, kept for reference (unused below)

#corrected_im = corrected_im[None,None,:,:]
corrupt_img = corrupt_img[None, None, :, :]  # add batch and channel dims: (H, W) -> (1, 1, H, W)


kernel = torch.tensor(kernel, dtype=dtype, device=device)
kernel = kernel[None, None, :, :]  # (out_channels, in_channels, kH, kW), as F.conv2d expects

lr = 1e-5
epoch = 1000

for i in range(epoch):
    conv_result = F.conv2d(corrected_im, kernel, padding=10)  # padding=10 keeps 'same' size for the 21x21 kernel
    loss = (conv_result - corrupt_img).pow(2).sum() + torch.abs(corrected_im).sum()  # data term + L1 regularizer

    if i % 10 == 0:
        print(i, loss.item())
        
    loss.backward()
    with torch.no_grad():  # manual SGD step, outside autograd
        corrected_im -= lr * corrected_im.grad
        corrected_im.grad.zero_()  # reset gradients for the next iteration
```

I am trying to solve an L1-regularized deconvolution problem using PyTorch. Apart from implementing my own Adam (Adagrad, momentum) optimizer, how should I change my code so that I can use torch.optim in it?

You would have to define kernel as an nn.Parameter:

```python
import torch.nn as nn

kernel = torch.tensor(kernel, dtype=dtype, device=device)
kernel = kernel[None, None, :, :]
kernel = nn.Parameter(kernel)
```

and pass it to the optimizer:

```python
optimizer = torch.optim.SGD([kernel], lr=1e-3)
```

After that’s done, remove the manual update step and call optimizer.step() instead.
Also, don’t forget to zero out the gradients afterwards via optimizer.zero_grad().
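
Putting it together, a minimal sketch of such a loop, with placeholder shapes and hyperparameters (not your exact code), could look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder tensors; shapes and values are illustrative only.
corrupt_img = torch.randn(1, 1, 128, 128)         # observed (blurred) image
corrected_im = torch.randn(1, 1, 128, 128)        # current estimate
kernel = nn.Parameter(torch.randn(1, 1, 21, 21))  # trainable tensor

optimizer = torch.optim.SGD([kernel], lr=1e-3)

for i in range(100):
    optimizer.zero_grad()                          # clear stale gradients
    conv_result = F.conv2d(corrected_im, kernel, padding=10)
    loss = (conv_result - corrupt_img).pow(2).sum()
    loss.backward()                                # populate kernel.grad
    optimizer.step()                               # apply the SGD update
```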

PS: I’ve formatted your code; you can post code snippets by wrapping them in three backticks ```, which makes debugging easier :wink:

Sorry for replying so late. There is too much work for me to do as a PhD candidate. Your suggestion was really helpful for me. But since the variable is corrected_im, should the code be as follows:
```python
corrected_im = torch.nn.Parameter(corrected_im)
optimizer = optim.SGD([corrected_im], lr=1e-6, momentum=0.9)
optimizer.step()
```
By the way, I still wonder: what does nn.Parameter actually change?

Wrapping the tensor in nn.Parameter makes sure it’s trainable, i.e. it sets requires_grad=True. You could also set that manually, but I would generally recommend using nn.Parameter, as it will also make sure the parameter gets registered, e.g. inside an nn.Module.
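
As a small illustration (a hypothetical module, not from this thread):

```python
import torch
import torch.nn as nn

t = torch.randn(3)
p = nn.Parameter(torch.randn(3))
print(t.requires_grad)  # False
print(p.requires_grad)  # True by default

# Assigning an nn.Parameter as a module attribute registers it automatically,
# so it shows up in parameters() and gets picked up by optimizers.
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(3))  # registered
        self.plain = torch.randn(3)                 # plain tensor: not registered

m = MyModule()
print([name for name, _ in m.named_parameters()])  # ['weight']
```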

It helps a lot! Thanks!