Help with: "RuntimeError: One of the differentiated Tensors appears to not have been used in the graph" when trying to calculate attributions

I have been trying to recreate a TensorFlow function in PyTorch, and the code below should in theory replicate it, though I appear to be missing something because I get an error when testing it. The logit_activ variable holds the captured activations from the model's final layer (before softmax), and the layer_activ variable holds the captured activations from the target layer.
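
For context, both activations are captured with forward hooks. Here is a minimal sketch of that setup with a hypothetical stand-in model (the real network and its layer names differ):

import torch
import torch.nn as nn

# Hypothetical stand-in model -- the real network and layer names differ
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # target layer
    nn.Flatten(),
    nn.Linear(8 * 4 * 4, 10),                   # final logits (pre-softmax)
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output
    return hook

model[0].register_forward_hook(save_activation("layer_activ"))
model[2].register_forward_hook(save_activation("logit_activ"))

model(torch.randn(1, 3, 4, 4))
layer_activ = activations["layer_activ"]
logit_activ = activations["logit_activ"]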

import torch
import torch.nn as nn

# x and y are spatial indices into the target layer's activation map
def fwd_gradients(logit_activ, layer_activ, x, y):
    # Dummy cotangent for the double-VJP trick
    zeros = nn.Parameter(torch.zeros_like(logit_activ))
    # First VJP: gradient of the logits w.r.t. the target layer, seeded with zeros
    grad_one = torch.autograd.grad(
        outputs=[logit_activ],
        inputs=[layer_activ],
        grad_outputs=[zeros],
        create_graph=True,
    )[0]
    # Second VJP: differentiate w.r.t. the dummy cotangent to recover a JVP
    grad_two = torch.autograd.grad(
        outputs=[grad_one],
        inputs=[zeros],
        grad_outputs=[torch.ones_like(grad_one)],
        create_graph=True,
    )[0]

    # Issue occurs somewhere after here.
    # Equivalent TensorFlow code is:
    # logit_attr = grad_two.eval({layer_name: layer_activ, d_previous: layer_zeros})

    # Keep only the activation at spatial position (x, y)
    layer_zeros = nn.Parameter(torch.zeros_like(layer_activ))
    with torch.no_grad():  # in-place edit of a leaf parameter requires no_grad
        layer_zeros[..., x, y] = layer_activ[..., x, y]

    # Calculate the logit attribution -- this is the call that fails
    logit_attr = torch.autograd.grad(
        outputs=[layer_activ],
        inputs=[grad_two],
        grad_outputs=[layer_zeros],
        create_graph=True,
    )[0]
    return logit_attr
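
With the hooked activations from the sketch above, the function is then called roughly like this (the spatial indices are placeholders):

logit_attr = fwd_gradients(logit_activ, layer_activ, x=0, y=0)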

The error message when running the above function looks like this:

<ipython-input-48-326f99eba6c0> in fwd_gradients(logit, activ)
     21             inputs=out,
     22             grad_outputs=[layer_zeros],
---> 23             create_graph=True,
     24         )[0]
     25     return out

/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
    202     return Variable._execution_engine.run_backward(
    203         outputs, grad_outputs_, retain_graph, create_graph,
--> 204         inputs, allow_unused)
    205 
    206 

RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
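
In case it helps: my understanding is that the first two grad calls implement the standard double-VJP trick for computing a Jacobian-vector product, which is what TensorFlow's fwd_gradients does. A self-contained toy version of that pattern (with a toy function standing in for the model) runs without error for me, which makes me suspect the problem is specific to the last grad call:

import torch

x = torch.randn(5, requires_grad=True)
y = x ** 3                                   # toy elementwise function
v = torch.ones_like(x)                       # direction for the JVP

u = torch.zeros_like(y, requires_grad=True)  # dummy cotangent
# First VJP: g = (dy/dx)^T u, which is linear in u
g = torch.autograd.grad(y, x, grad_outputs=u, create_graph=True)[0]
# Second VJP: differentiating g w.r.t. u in direction v yields (dy/dx) v
jvp = torch.autograd.grad(g, u, grad_outputs=v)[0]

print(torch.allclose(jvp, 3 * x.detach() ** 2 * v))  # True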

Any help resolving the issue would be appreciated!