Manipulating gradients in backward

Thanks for the reply.

I’m still in a bind because the weighting of the gradients depends on the result of the multinomial sample (that’s what self.output holds below).
Specifically, my backward is:

self.gradInput.resize_as_(input).zero_()
self.gradInput.copy_(self.output)   # start from the sampled (multinomial) output
self.gradInput.div_(input)          # divide elementwise by the input
self.gradInput.mul_(gradOutput)     # chain rule: scale by the incoming gradient
return self.gradInput
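
For concreteness, here's a rough self-contained sketch of the dependence I mean, written as a torch.autograd.Function so the sampled output is saved alongside the graph node. The class name and the one-hot forward are just illustrative, not my actual module:

import torch

class SampledScale(torch.autograd.Function):
    # Illustrative only: forward samples with multinomial over rows of
    # probabilities, and backward reuses that sampled output, which is
    # exactly the dependence described above.
    @staticmethod
    def forward(ctx, input):
        # sample one index per row and one-hot encode it
        idx = torch.multinomial(input, 1)
        output = torch.zeros_like(input).scatter_(1, idx, 1.0)
        ctx.save_for_backward(input, output)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, output = ctx.saved_tensors
        # same arithmetic as the snippet above: output / input * gradOutput
        # (assumes input is strictly positive, as in that snippet)
        return output.div(input).mul(grad_output)

With ctx.save_for_backward the sampled output travels with the autograd node instead of having to live on the layer, though I'm not sure that structure fits the rest of my setup.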

And that means the hook would need access to the layer itself, which gets kind of messy.
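
To illustrate what I mean (the attribute names here are hypothetical, not my actual code): the hook only receives the gradient, so the layer has to come in through a closure:

def make_hook(layer):
    # the hook only gets grad; layer.output and layer.last_input are
    # hypothetical attributes standing in for the layer's stored tensors
    def hook(grad):
        return layer.output.div(layer.last_input).mul(grad)
    return hook

# attached to the tensor whose gradient should be reweighted, e.g.:
# t.register_hook(make_hook(layer))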

Do you have any suggestions as to how to solve this?