How can I optimise the Gradient? Something like grad.backward()


(XingChen) #1

I want to penalise the gradient as in the paper “Improved Training of Wasserstein GANs”. Something like: D(input), D.backward(), loss = D_loss + || input.grad - 1 ||. I found that input.grad doesn’t have a creator, so the gradient isn’t connected to the graph of the net, and something like input.grad.backward() won’t work. How can I do this? By the way, the authors of that paper use TensorFlow.


(Ruotian(RT) Luo) #2

Currently not supported. See here for more discussion: How to implement gradient penalty in PyTorch


(Adam Paszke) #3

It’s already implemented, but the PR is waiting for review. It will probably be merged next week.


(XingChen) #4

Thank you, I’ll take a look~


(Ruotian(RT) Luo) #5

@Sora It was already merged two days ago.
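
For reference, here is a minimal sketch of the gradient penalty term now that higher-order gradients are available, using torch.autograd.grad with create_graph=True so the penalty itself can be backpropagated. The names D, real_data, fake_data and lambda_gp are placeholders, not from the paper's code:

```python
import torch
from torch import autograd

def gradient_penalty(D, real_data, fake_data, lambda_gp=10.0):
    # Sample random interpolation points between real and fake batches
    batch_size = real_data.size(0)
    alpha = torch.rand(batch_size, 1, device=real_data.device)
    alpha = alpha.expand_as(real_data)
    interpolates = (alpha * real_data + (1 - alpha) * fake_data).requires_grad_(True)

    d_out = D(interpolates)

    # create_graph=True keeps the graph of this gradient computation,
    # so the penalty can flow gradients back into D's parameters
    grads = autograd.grad(
        outputs=d_out,
        inputs=interpolates,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True,
        retain_graph=True,
        only_inputs=True,
    )[0]

    # Two-sided penalty: push ||grad||_2 towards 1, as in WGAN-GP
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```

The penalty is then added to the discriminator loss before calling backward(), e.g. d_loss = d_loss_wgan + gradient_penalty(D, real_data, fake_data).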