Autograd on a specific layer's parameters

Hi, I am trying to get the Jacobian matrix with respect to a specific layer's parameters. Below is how I wrap my network model and apply functional_call to it.

def fm(params, input):
    return functional_call(self.model, params, input.unsqueeze(0)).squeeze(0)

def floss(func_params, input):
    fx = fm(func_params, input)
    return fx
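
For reference, here is a minimal self-contained sketch of that setup. The toy two-layer network is only a stand-in for my real model; it wraps an nn.Sequential in an attribute named model so that the parameter names come out as 'model.0.weight', 'model.0.bias', ..., matching the key I use further down.

import torch
import torch.nn as nn
from torch.func import functional_call, vmap, jacrev

class Net(nn.Module):
    # toy stand-in for my real network
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(nn.Linear(3, 4), nn.Tanh(), nn.Linear(4, 2))

    def forward(self, x):
        return self.model(x)

model = Net()               # plays the role of self.model in the snippets below
input = torch.randn(8, 3)   # a batch of 8 samples

def fm(params, input):
    # run the network as a pure function of the parameter dict, one sample at a time
    return functional_call(model, params, input.unsqueeze(0)).squeeze(0)

def floss(func_params, input):
    fx = fm(func_params, input)
    return fx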

So far I have computed the Jacobian with respect to all of the parameters this way:

func_params = dict(self.model.named_parameters())
per_sample_grads = vmap(jacrev(floss, 0), (None, 0))(func_params, input)
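
Continuing the toy sketch above, the result is a dict keyed by parameter name, with one leading batch dimension per sample:

# per_sample_grads has the same keys as func_params; each entry has shape
# (batch, *output_shape, *param_shape)
func_params = dict(model.named_parameters())
per_sample_grads = vmap(jacrev(floss, 0), (None, 0))(func_params, input)

for name, jac in per_sample_grads.items():
    print(name, tuple(jac.shape))
# e.g. 'model.0.weight' -> (8, 2, 4, 3) for the toy network above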

Right now, I just need the gradient with respect to one specific layer's parameters. Here is my approach:

def grad(f, param):
    return torch.autograd.grad(f, param)

out = vmap(floss, (None, 0))(func_params, input)
gradf = vmap(grad, (0, None))(out, func_params['model.0.weight'])
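
Spelled out with the toy sketch above, the failing attempt looks like this (as far as I can tell, this minimal version runs into the same error):

def grad(f, param):
    # f is a single sample's output, param is the first layer's weight
    return torch.autograd.grad(f, param)

out = vmap(floss, (None, 0))(func_params, input)   # shape (8, 2) in the toy sketch
gradf = vmap(grad, (0, None))(out, func_params['model.0.weight'])
# -> fails with: element 0 of tensors does not require grad and does not have a grad_fn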

However, this fails with the error “element 0 of tensors does not require grad and does not have a grad_fn”, which confuses me, because I have tried

grad = self.grad(out[0], func_params['model.0.weight'])

and it works. I don’t really know how to fix this problem.
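
The only workaround I have come up with so far is to pass just that one layer's weight to functional_call (with the default strict=False, parameters not included in the dict fall back to the module's own ones) and let jacrev differentiate with respect to it alone, roughly as below in the toy sketch. Is this the intended way, or is there a cleaner one?

def floss_w(w, input):
    # only 'model.0.weight' is supplied functionally; all other parameters
    # are taken from the module itself
    return functional_call(model, {'model.0.weight': w}, input.unsqueeze(0)).squeeze(0)

jac_w = vmap(jacrev(floss_w, 0), (None, 0))(func_params['model.0.weight'], input)
# shape (8, 2, 4, 3) in the toy sketch above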