I have a huge network (e.g. a ResNet).

I would like to modify every gradient that is computed during back-propagation.

Let's say I want to add a constant `C` to each of them.

Can someone help me understand how I can do this via `torch.autograd.Function` and `apply`?

I think my function should look something like this:

```
class Change(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, C):
        ctx.save_for_backward(input)
        ctx.C = C  # C is a plain constant, not a tensor, so stash it on ctx directly
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors  # why `input,` and not just `input`?
        grad_input = grad_output.clone()
        # forward took two arguments (input, C), so backward must return two values;
        # None for C because no gradient flows to it
        return grad_input + ctx.C, None
```

I am not sure where I should put/apply it, though.
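For context, here is a minimal self-contained sketch of how I imagine wiring it in. `Net`, the layer sizes, and `C = 0.5` are just placeholders (my real network is a ResNet), and I return `None` for `C` since it needs no gradient:

```python
import torch
import torch.nn as nn

class Change(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, C):
        ctx.C = C  # stash the constant for the backward pass
        return input

    @staticmethod
    def backward(ctx, grad_output):
        # one return value per forward argument: a gradient for input, None for C
        return grad_output + ctx.C, None

# placeholder network standing in for the real ResNet
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 1)

    def forward(self, x):
        x = self.fc(x)
        # call via .apply so autograd uses the custom backward;
        # every gradient flowing back through this point gets C added
        return Change.apply(x, 0.5)

net = Net()
out = net(torch.ones(2, 4)).sum()
out.backward()
print(net.fc.bias.grad)  # gradient into fc is (1 + 0.5) per sample instead of 1
```

Is calling `Change.apply` inside `forward` like this the right approach, or is there a better place to hook it in for a large network?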