F.conv2d does not autograd

import torch
import torch.nn.functional as F

dtype = torch.float
device = torch.device("cpu")

# corrupt_img, img and kernel are assumed to be defined earlier (e.g. loaded as numpy arrays)
corrupt_img = torch.tensor(corrupt_img, dtype=dtype, device=device)
corrected_im = torch.randn(1, 1, corrupt_img.shape[0], corrupt_img.shape[1], device=device, dtype=dtype, requires_grad=True)
gt = torch.tensor(img)

#corrected_im = corrected_im[None,None,:,:]
corrupt_img = corrupt_img[None, None, :, :]

kernel = torch.tensor(kernel, dtype=dtype, device=device)
kernel = kernel[None, None, :, :]

lr = 1e-5
epoch = 1000

for i in range(epoch):
    conv_result = F.conv2d(corrected_im, kernel, padding=10)
    loss = (conv_result - corrupt_img).pow(2).sum() + torch.abs(corrected_im).sum()

    if i % 10 == 0:
        print(i, loss.item())

    loss.backward()
    with torch.no_grad():
        corrected_im = corrected_im - lr*corrected_im.grad
        corrected_im.grad.zeros_()

I am trying to solve a compressed sensing problem using PyTorch, i.e. the optimization problem

argmin_u ||kernel * u - b||^2 + ||u||_1

I expressed the formula directly using F.conv2d and torch.tensor, but the backward pass doesn't work for u: corrected_im.grad is a NoneType object in this code. Theoretically autograd should work here, or is there a mistake in my convolution expression?

I think the problem is this line:

corrected_im = corrected_im - lr*corrected_im.grad

It rebinds the variable corrected_im to the result of the subtraction, and since this subtraction is done in a no_grad block, the result does not require gradients. I think you want to change it in place instead, no?

corrected_im -= lr * corrected_im.grad

Also I don’t think .zeros_() is a thing, you want .zero_() no?
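
For reference, a minimal sketch of the end of the loop with both fixes applied (the rest of your code stays the same):

    loss.backward()
    with torch.no_grad():
        # update in place so corrected_im stays the same leaf tensor that requires grad
        corrected_im -= lr * corrected_im.grad
        # reset the accumulated gradient before the next iteration
        corrected_im.grad.zero_()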

Thx a lot! It really works.
I still have one question: how could I use the built-in Adam optimizer in PyTorch to solve my problem?
Looking forward to your answer!
Thanks again.

Hi,

You can use the provided optimizer by giving it the Tensors to optimize, here just (corrected_im,) since there is only one.
Then you can just call opt.step() and it will use the content of corrected_im.grad to update corrected_im.
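
Something like this should work (a minimal sketch; the Adam learning rate is just a placeholder you will want to tune):

import torch.optim as optim

opt = optim.Adam([corrected_im], lr=1e-2)  # placeholder learning rate, tune for your data

for i in range(epoch):
    conv_result = F.conv2d(corrected_im, kernel, padding=10)
    loss = (conv_result - corrupt_img).pow(2).sum() + torch.abs(corrected_im).sum()

    opt.zero_grad()   # clear the gradients from the previous iteration
    loss.backward()   # populate corrected_im.grad
    opt.step()        # Adam update of corrected_im using its .grad

    if i % 10 == 0:
        print(i, loss.item())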

Sorry for replying so late. Thank you for responding so patiently!