Hi All,

I have a quick question about how to efficiently compute the Laplacian of the output of a network. I've managed to implement a method for calculating it, but I'm fairly sure the way I'm doing it is inefficient.

What I mean by the Laplacian of the output of the network is this: say I have a simple feed-forward network, `y = model(x)`,

and I wish to calculate `sum_i d²y/dx_i²`

for all input samples.

My current code for calculating the Laplacian for an N-dimensional input `x` is

```
N = x.shape[1]  # input dimension
y = model(x)  # model is an R^N to R^1 function
laplacian = torch.zeros(x.shape[0])  # per-sample Laplacian values
for i, xi in enumerate(x):
    hess = torch.autograd.functional.hessian(model, xi.unsqueeze(0), create_graph=True)
    laplacian[i] = torch.diagonal(hess.view(N, N), offset=0).sum()
```

I've noticed that for large numbers of input samples (which I need) the memory usage increases drastically. I assume this is because a graph is created for each call of `torch.autograd.functional.hessian`. I need `create_graph=True` because I have to differentiate the Laplacian with respect to the network parameters in my loss function.

My question is: would it be possible to reduce the memory usage of the Hessian calculation, or perhaps to re-use the same graph for different inputs (if that's the correct way of phrasing it)?
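For context, here's a sketch of the kind of alternative I've been experimenting with: computing only the diagonal second derivatives with repeated `torch.autograd.grad` calls, one backward pass per input dimension for the whole batch, instead of forming the full per-sample Hessian. The small MLP here is just a stand-in for my actual model, and I'm not sure this is actually the most memory-efficient approach:

```python
import torch

# Stand-in for my model: a small R^N -> R^1 MLP (hypothetical).
N = 3
model = torch.nn.Sequential(
    torch.nn.Linear(N, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)

x = torch.randn(8, N, requires_grad=True)  # batch of 8 samples
y = model(x)  # shape (8, 1)

# First derivatives dy/dx for the whole batch in one backward pass.
# Summing over samples is fine here because each sample's output
# depends only on its own input row.
grads = torch.autograd.grad(y.sum(), x, create_graph=True)[0]  # (8, N)

# Second derivatives: one backward pass per input dimension,
# keeping only the diagonal term d²y/dx_i² for each sample.
lap_terms = []
for i in range(N):
    grad2_i = torch.autograd.grad(grads[:, i].sum(), x, create_graph=True)[0][:, i]
    lap_terms.append(grad2_i)

laplacian = torch.stack(lap_terms, dim=1).sum(dim=1)  # (8,), differentiable
```

Since `create_graph=True` is kept throughout, `laplacian` can still be used inside a loss and differentiated with respect to the network parameters, which is what I need.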

Many thanks in advance!