Gradient of activation array

I'm trying to compute the gradient of a model's output with respect to an activation vector, so that the result has the same shape as the activation array rather than the input image.
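For reference, the same-shape behaviour I mean is just what `torch.autograd.grad` gives when you differentiate with respect to the activations themselves (toy numbers, nothing here is from a real model):

```python
import torch

# Fake "activations" standing in for a bottleneck layer's output
acts = torch.randn(4, 8, requires_grad=True)

# Any scalar function of the activations
out = (acts ** 2).sum()

# The gradient comes back with the shape of `acts`, not of any image
grads = torch.autograd.grad(out, acts)[0]
assert grads.shape == acts.shape  # (4, 8)
```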

I found the snippet below online, and it seems the most promising, but I'm not sure how to do this without the 'cutted model' method (a copy of the network truncated so that it runs from the bottleneck layer onward).

def get_gradient(self, acts, y, bottleneck_name):
    # Re-enter the autograd graph at the activations, not at the image.
    # (torch.autograd.Variable is deprecated; a tensor with
    # requires_grad=True does the same job.)
    inputs = torch.tensor(acts, dtype=torch.float32, device=device, requires_grad=True)

    cutted_model = self.get_cutted_model(bottleneck_name).to(device)
    outputs = cutted_model(inputs)

    # Sum the target-class logits over the batch: samples are independent,
    # so this still yields per-sample gradients, and autograd.grad needs a
    # scalar output. The result has the same shape as `inputs`.
    grads = -torch.autograd.grad(outputs[:, y[0]].sum(), inputs)[0]
    grads = grads.detach().cpu().numpy()

    cutted_model = None  # drop the reference so the truncated copy can be freed

    return grads
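To show the shape I'm after without a cutted model, here is a toy sketch using a forward hook on the bottleneck layer of the *full* model (the model, layer index, and target class here are all made up):

```python
import torch
import torch.nn as nn

# Toy stand-in for a real network; pretend model[1] is the bottleneck.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(16, 8),
    nn.ReLU(),
    nn.Linear(8, 3),
)
bottleneck = model[1]

captured = {}

def hook(module, inputs, output):
    output.retain_grad()        # keep .grad on this non-leaf tensor
    captured["acts"] = output

handle = bottleneck.register_forward_hook(hook)

x = torch.randn(4, 1, 4, 4)     # batch of 4 fake "images"
logits = model(x)
logits[:, 1].sum().backward()   # samples are independent -> per-sample grads
handle.remove()

grads = captured["acts"].grad   # same shape as the activations, (4, 8)
assert grads.shape == captured["acts"].shape
```

The hook captures the bottleneck activations during a normal forward pass, `retain_grad()` makes autograd keep their gradient, and `backward()` on the target-class logits fills it in, so no truncated copy of the model is needed.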

Does anyone have any ideas?