Use `register_hook` to save gradients/gradient computation artifacts

I am currently analyzing a module in which `register_hook` is used to compute a custom gradient for some intermediate variable. If you want to take a look, it's here.

I would like to save the resulting gradient, as well as computation artifacts that are not the gradient itself.
I have tried assigning them inside the hook with things like `self.my_grad = new_grad` and `self.artifacts = artifacts`, but it did not work: accessing `model.my_grad` afterwards raised `AttributeError: 'MDEQClsNet' object has no attribute 'my_grad'`.

How could I go about saving what's happening inside `register_hook`?

I think my question is more involved than what is explained in this discussion, since I really want to use `register_hook` on a variable rather than on a module, and I want to save some computation artifacts, not just a single gradient.
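To make that distinction concrete, here is a minimal sketch of my own (not taken from the linked discussion) contrasting a module-level backward hook with a tensor-level hook; the tensor-level one is the variant I need, since its body is where the extra artifacts get computed:

import torch

lin = torch.nn.Linear(3, 2)

def module_hook(module, grad_input, grad_output):
    # Module-level hook: fires once per backward pass through the module,
    # and only hands you the gradient tuples at its boundary.
    print("module hook:", [g.shape for g in grad_output])

def tensor_hook(grad):
    # Tensor-level hook: fires for this one intermediate tensor, so anything
    # computed here (artifacts included) could be stashed away.
    print("tensor hook:", grad.shape)

lin.register_full_backward_hook(module_hook)

x = torch.randn(4, 3, requires_grad=True)
y = lin(x)                    # intermediate variable
y.register_hook(tensor_hook)  # hook on the variable itself, not on the module
y.sum().backward()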

Apparently, it does work in a minimal example:

import torch

class MyModule(torch.nn.Module):
    def forward(self, x):
        def my_hook(grad):
            # Stash a detached copy of the gradient and any extra artifacts
            # on the module instance itself.
            self.my_grad = grad.detach().clone().cpu()
            self.artifacts = 'artifacts'
            return grad
        x.register_hook(my_hook)
        return x

model = MyModule()
x = torch.tensor(1.0).requires_grad_(True)
output = model(x)
output.backward()

print(model.my_grad)    # tensor(1.)
print(model.artifacts)  # 'artifacts'

So I need to understand where exactly the problem happens in my more complex case.
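In case it is useful, this is the kind of check I plan to run on the bigger model; everything here (the `Toy` module, the `storage` dict, the variable `z`) is a hypothetical stand-in for the real MDEQ code, not part of it:

import torch

storage = {}  # plain dict outside the module, so nothing is lost if the model gets wrapped or replicated

class Toy(torch.nn.Module):
    def forward(self, x):
        z = x * 2  # stand-in for the intermediate variable

        def my_hook(grad):
            print("hook fired")  # 1) does the hook run at all?
            storage['grad'] = grad.detach().clone().cpu()
            storage['artifacts'] = 'whatever else the hook computes'
            return grad

        print("z.requires_grad =", z.requires_grad)  # 2) is z actually part of the graph?
        if z.requires_grad:
            z.register_hook(my_hook)
        print("id(self) in forward:", id(self))  # 3) same object as the one I query afterwards?
        return z

model = Toy()
print("id(model) outside:  ", id(model))
out = model(torch.tensor(1.0, requires_grad=True))
out.backward()
print(storage)

If the two ids differ (for example because the model is wrapped in `DataParallel`, which sets the attributes on a throwaway replica), that alone would explain the missing attribute.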