Sorry for the question, but I'm still a rookie and I'm trying to understand how this works:

I have a tensor x with shape (64, 10), so 64 rows and 10 columns.
Checking the shape of the denominator below, I get shape (64,). Why is it not (64, 1), and why do I need to reshape the tensor when implementing the softmax function?

Softmax essentially normalises the values so that each one lies in the range [0, 1] and each row sums to 1. The input shape and output shape are the same when you apply a softmax function. For reference, you can check the documentation and implementation of softmax here.
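To illustrate the shape-preserving point above, here is a minimal softmax sketch in NumPy (PyTorch follows the same reduction and broadcasting rules; the function name `softmax` here is just an illustrative helper, not a library API):

```python
import numpy as np

def softmax(x):
    # Subtract the row-wise max for numerical stability (a common trick),
    # then divide by the row sums so each row sums to 1.
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = np.random.rand(64, 10)
out = softmax(x)
print(out.shape)  # (64, 10) -- same shape as the input
```

Note that the output shape matches the input shape exactly; only the values change.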

I understand what softmax is doing, but what I don't understand is the shape of the tensor in the denominator. Why is it (64,) and not (64, 1), so that I have to reshape it to (64, 1)?