Hessian of output image pixels w.r.t. input image

Hello,

I have inputs of size (Bx2xHxW) and outputs of size (Bx1xHxW). In my problem, the channel dimension represents spatial coordinates, e.g. (x, y) or (x, y, z). I'm trying to create Laplacian 'matrices' of shape (Bx1xHxW), where for each image, each pixel holds the Laplacian of the output at that point in the original image. This means that for every pixel (i, j), I basically need the trace of the Hessian of output[:, :, i, j] w.r.t. input[:, :, i, j].
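(Concretely, the target is lap[b, 0, i, j] = sum_c d^2 output[b, 0, i, j] / d input[b, c, i, j]^2, i.e. only the diagonal second derivatives of each per-pixel Hessian, not the full matrix.)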

For now I'm trying to get this to work for a single image (B=1), where the input 'image' is a 2x100x100 tensor whose first channel is the x-coordinate and whose second channel is the y-coordinate (the snippet below uses a smaller 32x32 grid, but the idea is the same). The model for the toy example is X^2 + Y^2 + XY, so the Laplacian 'matrix' for this example should be a (1x100x100) tensor with the value 4 everywhere. However, I'm having trouble producing the expected result. Is there a nice way to do this that is not too expensive?
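(For f(x, y) = x^2 + y^2 + xy, the diagonal second derivatives are d^2f/dx^2 = 2 and d^2f/dy^2 = 2, so the trace of the Hessian is 2 + 2 = 4 at every pixel; the xy term only shows up in the mixed partials, which the trace ignores.)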

Here’s what I’ve tried so far:

import torch
import torch.nn as nn

size = 32
x = torch.linspace(-1, 1, size)
y = torch.linspace(-1, 1, size)
X, Y = torch.meshgrid(x, y, indexing='ij')

grid = torch.stack([X, Y], dim=0)  # (2, H, W)

class func2d(nn.Module):
    def __init__(self):
        super().__init__()
    def forward(self, x):
        # x has shape (2, H, W): f(x, y) = x^2 + y^2 + x*y, applied pointwise
        return x[0]**2 + x[1]**2 + x[0]*x[1]
    
grid.requires_grad = True
model = func2d()
out = model(grid)

d1 = torch.autograd.grad(
    outputs=[out],
    inputs=[grid],
    grad_outputs=torch.ones_like(out),
    create_graph=True,
)[0]  # (2, H, W): d1[0] = df/dx, d1[1] = df/dy at every pixel
d2 = torch.autograd.grad(
    outputs=[d1],
    inputs=[grid],
    grad_outputs=torch.ones_like(d1),
)[0]  # (2, H, W)
print(d2)
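For the toy function this prints a (2, H, W) tensor full of 3s rather than the Laplacian: with grad_outputs=torch.ones_like(d1), the second pass sums entire rows of each per-pixel Hessian, so the mixed partial from the x*y term gets folded in instead of only the diagonal entries being kept. What I think I actually need is one second-derivative pass per input channel, keeping only the matching diagonal entry each time, roughly like the sketch below (self-contained; it assumes the model acts pointwise, i.e. output pixel (i, j) only depends on input pixel (i, j), as in this toy example):

import torch

size = 32
x = torch.linspace(-1, 1, size)
y = torch.linspace(-1, 1, size)
X, Y = torch.meshgrid(x, y, indexing='ij')
grid = torch.stack([X, Y], dim=0).requires_grad_(True)  # (2, H, W)

out = grid[0]**2 + grid[1]**2 + grid[0]*grid[1]  # same toy model, (H, W)

# First derivatives at every pixel: d1[c] = df/dx_c
d1 = torch.autograd.grad(
    out, grid,
    grad_outputs=torch.ones_like(out),
    create_graph=True,
)[0]  # (2, H, W)

# One extra backward pass per channel: back-propagate only d1[c] and keep
# only channel c of the result, which is the diagonal entry d^2 f / dx_c^2.
laplacian = torch.zeros_like(out)
for c in range(grid.shape[0]):
    mask = torch.zeros_like(d1)
    mask[c] = 1.0
    d2 = torch.autograd.grad(
        d1, grid,
        grad_outputs=mask,
        retain_graph=True,
    )[0]  # (2, H, W)
    laplacian = laplacian + d2[c]

laplacian = laplacian.unsqueeze(0)  # (1, H, W)
print(laplacian)  # 4.0 everywhere for x^2 + y^2 + x*y

This does give 4 everywhere here, but it costs one extra backward pass per spatial dimension, so I'd still be interested in a cheaper approach (perhaps something from torch.func, e.g. vmap + hessian, on newer PyTorch versions?).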