Can I mark the gradient?

Hello, Pytorch forum.
My neural network (with weights w) takes some input, and its output consists of two scalar variables, x and y.
The loss function is simply L(x, y). What I want to do is compute the two gradient vectors dx/dw and dy/dw (it's a pity I cannot use LaTeX here; I mean \nabla_w x and \nabla_w y).
Doing L.backward() will calculate dL/dw, which is dL/dx * dx/dw + dL/dy * dy/dw by the chain rule. For my research, I only need dx/dw and dy/dw. Is there a convenient method or function I can use?
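
To make it concrete, here is a minimal sketch of what I have in mind, using torch.autograd.grad to back-propagate from each scalar output separately (the toy network, sizes, and names are just placeholders, not my actual model):

```python
import torch
import torch.nn as nn

# Placeholder network with two scalar outputs, x and y
net = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 2))
inp = torch.randn(1, 4)

out = net(inp)
x, y = out[0, 0], out[0, 1]

params = list(net.parameters())

# dx/dw: gradient of x alone w.r.t. every weight tensor.
# retain_graph=True keeps the graph alive for the second call.
grads_x = torch.autograd.grad(x, params, retain_graph=True)

# dy/dw: gradient of y alone w.r.t. every weight tensor
grads_y = torch.autograd.grad(y, params, retain_graph=True)

# grads_x[i] and grads_y[i] have the same shape as params[i]
```

Is this the idiomatic way to do it, or is there something better suited (especially if I need these per-output gradients for many outputs or many samples)?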

Also, if you know of any papers that deal with this topic, please let me know; it would be a great help. I am currently looking into natural gradient descent for VAEs.