How does PyTorch compute the gradient of softmax with respect to its input? I followed this link https://community.deeplearning.ai/t/calculating-gradient-of-softmax-function/1897/3 to implement the gradient calculation, but why are the resulting gradients all 0? How do I correctly propagate the gradient when using a softmax layer?
I haven’t looked at the details of your code, but softmax() has a property that
will cause your particular gradients to be zero. Namely, softmax() returns a
set of probabilities that sum to one, which, being a constant, has zero gradient.
Instead of calling backward on the sum of out, you might try out[0].backward().
An individual component of the result of softmax() is not a constant, so you will
get a non-trivial gradient.
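For illustration, here is a minimal sketch of both cases, assuming a small one-dimensional input tensor (the shape and values are just an example):

```python
import torch

# Toy input; the size is arbitrary.
x = torch.randn(5, requires_grad=True)
out = torch.softmax(x, dim=0)

# Case 1: backward through the sum. softmax() sums to 1.0 (a constant),
# so the gradient with respect to x is zero.
out.sum().backward(retain_graph=True)
print(x.grad)      # (essentially) all zeros

# Case 2: backward through a single component instead.
x.grad = None      # clear the accumulated gradient
out[0].backward()
print(x.grad)      # non-trivial gradient
```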
Thank you very much for your reply.
Yes, that is exactly what I meant. After I call backward through the softmax output, I find that the gradient of the softmax input is always 0. So I would like to ask: if softmax is an intermediate layer in a network, how is its gradient computed? Should the gradient be computed only through the maximum value, or is there another approach? And how should this part be implemented in PyTorch?
First note that applying softmax() to, say, a one-dimensional tensor returns a
one-dimensional tensor. You can’t compute a gradient of a tensor (of length greater
than one), so you have to take the gradient of some scalar function of the tensor.
If that scalar function happens to be sum(), then the result will be 1.0, a constant,
so the gradient of that constant will be zero.
However, if you use some other scalar function that doesn’t always return 1.0, you
will get a non-zero gradient, as you expect.
(Again, the point is that softmax() has the special property that the result of softmax() sums to one. It’s the fact that you are summing the result of softmax(),
instead of doing something else with it, that leads to the zero gradient.)
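To make that concrete, here is a minimal sketch, assuming a toy setup in which the softmax output feeds a scalar loss other than the plain sum (the target index 2 is arbitrary and only for illustration):

```python
import torch

x = torch.randn(5, requires_grad=True)
probs = torch.softmax(x, dim=0)

# A scalar function that is not constant: negative log-likelihood of an
# arbitrary "target" class (index 2 here, chosen only for illustration).
loss = -torch.log(probs[2])
loss.backward()
print(x.grad)      # non-zero gradient, as expected

# Note: in a real network you would usually pass the raw logits to
# nn.CrossEntropyLoss, which applies log-softmax internally, instead of
# adding an explicit softmax layer before the loss.
```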
Thank you very much for your answer. After your suggestion, I realized that it was because I called sum() every time, whose result is always 1, which is why the gradient of the input tensor was 0. I just reproduced your code and the gradient is indeed correct. Thank you very much, and I wish you a happy life!