Multiple Parameter Image Regression

Hi

I’m kind of stuck on my work, and I’m looking for some advice.

I’m an image processing engineer, and I have a conventional image filter G with a number of parameters w = (w1, w2, …, wn).
So y_out = G(y_in, w), where y_in and y_out are 2D grayscale images.

G is not a black box, but it is quite a complicated filter, so it takes a lot of trial and error, time, and effort to find the right or optimal parameters by hand.

What I’m trying to do is have a deep learning model learn to generate the parameter vector w from the input image y_in and the output image y_out. I think this can be framed as an image regression problem: the input is a pair of images (y_in and y_out), and the output is the parameter vector w. Since G is not a black box, I can randomly generate w and compute the corresponding y_out, so I can generate as much training/validation data as I want.

Before building a huge dataset and a huge model, I made a very small setup with only a few parameters, e.g. 10 parameters, w10 = (w1, w2, …, w10), to check whether the model works at all. I generated 200 y_in/y_out/w10 triplets for training and 30 for validation.
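The generation loop looks roughly like this (a simplified sketch; make_input is a hypothetical placeholder for whatever produces my clean input images, and G stands for my actual filter):

import numpy as np

n_params = 10
n_train = 200

samples = []
for _ in range(n_train):
    y_in = make_input()                        # 2D grayscale image (placeholder)
    w = np.random.uniform(0.0, 1.0, n_params)  # random parameters, scaled to [0, 1]
    y_out = G(y_in, w)                         # apply the conventional filter
    samples.append((y_in, y_out, w))

Scaling the parameters to [0, 1] also matches the sigmoid output range mentioned below.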

I tried giving both y_in and y_out to a deep learning model, for example a ResNet, and having it predict the parameter vector w, so I put a sigmoid activation on the output neurons and used an MSE loss. It didn’t work well.
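The basic setup looked roughly like this (a simplified sketch, not my exact code; here the two images are fed as a two-channel input):

import torch
import torch.nn as nn
from torchvision import models

n_params = 10

model = models.resnet18()
# accept a 2-channel input (y_in and y_out stacked along the channel dimension)
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
# regress n_params values, squashed into (0, 1) by the sigmoid
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, n_params), nn.Sigmoid())
criterion = nn.MSELoss()

x = torch.randn(8, 2, 224, 224)  # batch of stacked (y_in, y_out) pairs
w_pred = model(x)                # shape [8, n_params]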

I tried a few variations: stacking y_in and y_out into channels and feeding that to the network (as in the sketch above), or feeding the difference y_out - y_in as input so the network can focus on the change. I also tried using two networks, one per image, and concatenating their features into a fully connected layer (sketched below). I tried different loss functions and different pre-trained models of various depths. But none of them gives good results. For example, the loss goes down, but most of the predicted values are just zeros. Sometimes the loss doesn’t go down at all, and in some cases the output doesn’t seem close to the desired values.
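The two-network variant was roughly the following (again a simplified, hypothetical sketch):

import torch
import torch.nn as nn

class TwoBranchRegressor(nn.Module):
    def __init__(self, n_params=10):
        super().__init__()
        def make_encoder():
            # small conv encoder producing a 32-dim feature vector
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.enc_in = make_encoder()   # encodes y_in
        self.enc_out = make_encoder()  # encodes y_out
        self.head = nn.Sequential(nn.Linear(64, n_params), nn.Sigmoid())

    def forward(self, y_in, y_out):
        feats = torch.cat([self.enc_in(y_in), self.enc_out(y_out)], dim=1)
        return self.head(feats)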

I’m still trying to find similar tasks I can learn from, but all I could find is image regression with a very limited number of outputs, like age prediction or head pose prediction, which are just one- or two-element vector regressions.

Is there any example similar to the problem I’m trying to solve? Is there anything I should look into? Is there anything I missed? Any advice would be much appreciated.

Thank you so much.

I don’t know how complicated your filter is, but “standard” image processing filters can be learned.
E.g. this example shows how a sobel filter is trained using a single input-output sample:

from scipy import ndimage, misc
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn


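# create a single input/target pair: the target is the sobel-filtered input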
image = misc.ascent()
target = np.array(image)
target = ndimage.sobel(target, mode="constant", cval=0)
plt.imshow(image)
plt.imshow(target)

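# a single 3x3 conv layer should be able to recover the sobel kernel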
device = "cuda"
conv = nn.Conv2d(1, 1, 3, 1, 1, bias=False).to(device)
optimizer = torch.optim.Adam(conv.parameters(), lr=1e-3)
criterion = nn.MSELoss()

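# add batch and channel dimensions: [1, 1, H, W]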
image = torch.from_numpy(image).to(device).float()[None, None, :, :]
target = torch.from_numpy(target).to(device).float()[None, None, :, :]

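# overfit the kernel weights on the single sample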
nb_epochs = 10000
for epoch in range(nb_epochs):
    optimizer.zero_grad()
    out = conv(image)
    loss = criterion(out, target)
    loss.backward()
    optimizer.step()
    print("epoch {}, loss {:.3f}".format(epoch, loss.item()))
# ...
# epoch 9997, loss 0.002
# epoch 9998, loss 0.002
# epoch 9999, loss 0.002

plt.imshow(out.cpu().squeeze().detach().numpy())

print(conv.weight)
# Parameter containing:
# tensor([[[[-9.9979e-01, -1.4878e-03,  1.0026e+00],
#           [-2.0019e+00,  4.5433e-03,  1.9950e+00],
#           [-9.9821e-01, -3.3480e-03,  1.0026e+00]]]], device='cuda:0',
#        requires_grad=True)

As you can see, the final filter weights are close to the desired sobel kernel, and visualizing the out tensor also shows the edge detection output.
This is of course quite a simple example, and note that I have spent almost zero time trying to optimize the training by e.g. normalizing the inputs, but maybe this example could be helpful in starting your experiments.
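If you do normalize, something as simple as this would be a reasonable starting point (an addition on my end, not part of the run above):

# scale input and target to zero mean / unit std before training
image = (image - image.mean()) / image.std()
target = (target - target.mean()) / target.std()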

Thanks ptrblck,

After reading your comment, I rebuilt my project from scratch, and it turned out that my data augmentation had gone too far. After I removed the data augmentation, the model started to overfit. Previously it couldn’t even overfit; it was just giving me wrong results on both training and validation. Thank you.