Using `autograd.functional.jacobian`/`hessian` with respect to `nn.Module` parameters

I’m also running into this issue while computing influence functions. My current solution is the following workaround (sample code):

import torch
import torch.nn as nn

_input = torch.randn(32, 3)
layer = nn.Linear(3, 4)
criterion = nn.CrossEntropyLoss()

weight = layer.weight

def func(weight):
    # Delete the registered nn.Parameter and rebind the attribute to the
    # plain tensor that autograd.functional passes in, so that the loss
    # actually depends on this function's argument.
    del layer.weight
    layer.weight = weight
    return criterion(layer(_input), torch.zeros(len(_input), dtype=torch.long))

torch.autograd.functional.hessian(func, weight)

The drawback of this workaround is that it is not safe: the `weight` attribute has to be deleted and rebound inside `func` on every call; it cannot be done just once before defining `func`. Otherwise `func` ignores the tensor that `hessian` passes in, the loss no longer depends on it, and the resulting Hessian is all zeros (see the sketch below).
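For concreteness, the failing variant looks roughly like this (a sketch of the broken pattern, not something to copy): the rebinding happens once, outside `func`, so `func` never uses the tensor that `hessian` passes in.

del layer.weight
layer.weight = weight            # rebound only once, before calling hessian

def func(w):
    # `w` (the copy that hessian passes in) is never used here, so the
    # output does not depend on it and, with strict=False (the default),
    # the returned Hessian is all zeros.
    return criterion(layer(_input), torch.zeros(len(_input), dtype=torch.long))

torch.autograd.functional.hessian(func, weight)  # -> all zeros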

I would appreciate it if there is a more elegant way. If not, maybe we need to think about a modification to torch.autograd.functional.hessian.
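For reference, here is a sketch of a less intrusive variant, assuming a PyTorch version that provides `torch.func.functional_call` (older releases expose the same thing as `torch.nn.utils.stateless.functional_call`). It runs the module statelessly with a substituted weight, so nothing has to be deleted inside `func`:

# Sketch: assumes torch.func.functional_call is available (PyTorch 2.x).
import torch
import torch.nn as nn

_input = torch.randn(32, 3)
target = torch.zeros(len(_input), dtype=torch.long)
layer = nn.Linear(3, 4)
criterion = nn.CrossEntropyLoss()

def func(weight):
    # Run `layer` with `weight` substituted for layer.weight; parameters
    # not listed in the dict (here: the bias) are taken from the module.
    out = torch.func.functional_call(layer, {"weight": weight}, (_input,))
    return criterion(out, target)

hess = torch.autograd.functional.hessian(func, layer.weight.detach().clone())
print(hess.shape)  # torch.Size([4, 3, 4, 3])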
