How to calculate gradient with respect to input

Hi,

I want to find the gradient of a network's output with respect to one of its inputs, not with respect to the weights, and I am not sure how to do it. Here is a snippet of my code:

    manager.train()
    worker.eval()
    goal1 = manager(ds)
    goal2 = manager(dq)
    # Variable is deprecated in recent PyTorch; goal2 already carries grad from manager
    goal2 = Variable(goal2, requires_grad=True).float().to('cuda')
    e1, CO1, mask1, width, height = initialize_the_state(num_masks, ds, size=84)
    e2, CO2, mask2, width, height = initialize_the_state(num_masks, dq, size=84)
    for st in range(2):
        mask1, e1, _, CO1 = take_one_step(worker, goal1, ds, CO1.to('cuda'), e1, mask1,
                                          width, height, num_masks)
        mask2, e2, pr, CO2 = take_one_step(worker, goal2, dq, CO2.to('cuda'), e2, mask2,
                                           width, height, num_masks)
        data_shot = e1
        data_query = e2
        with torch.no_grad():
            proto = model(data_shot)
        logits = euclidean_metric(model(data_query), proto)
        pred = torch.argmax(logits, dim=1)
        acc = count_acc(logits, label)
        reward = torch.tensor(acc)  # accuracy used as a (non-differentiable) reward
        m_loss = torch.sum(pr.reshape([1]).to('cuda') * reward.mul(-1).to('cuda'))
        optimizer_m.zero_grad()
        optimizer_w.zero_grad()
        m_loss.backward()
        optimizer_m.step()

manager and worker are both CNNs. The manager's output is given to the worker as input, and the worker's output is used to calculate the manager's loss. The manager's loss gradient should be computed from the gradient of pr (the worker's output) with respect to goal2, which is the worker's input (the worker receives goal2, dq, and CO2 as inputs). Would you please help me with how to do that?
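For reference, this is the general pattern I think I need, reduced to a toy setup (the network and tensor shapes here are placeholders, not my actual models): make the input require grad, then ask autograd for the gradient of the output with respect to that input instead of the weights.

```python
import torch
import torch.nn as nn

# Toy network standing in for the worker (hypothetical, just for illustration)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# Input we want the gradient with respect to
x = torch.randn(2, 4, requires_grad=True)
out = net(x).sum()

# Gradient of the output w.r.t. the input (not the weights);
# autograd.grad returns a tuple, one entry per input tensor
grad_x, = torch.autograd.grad(out, x)

print(grad_x.shape)  # same shape as x
```

I am just not sure how this pattern carries over to my case, where goal2 is itself produced by the manager rather than being a leaf tensor.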
Many thanks for your help