Hi, I’m trying to use Softmax2d and I can’t see what I’m doing wrong.
I’ll illustrate my problem with a simpler example. I have this 2D matrix of values and I want to turn it into a probability matrix:
I can’t understand why not: my sanity check is that the elements of the entire matrix are supposed to sum to 1 (I tried increasing the precision, no luck). I know I’m missing something, but I can’t see what.
But I don’t want it across channels, because they aren’t really channels; I want each 16x16 slice to get its own softmax, meaning each 16x16 will sum to 1.
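One way to get that behavior (a sketch, assuming the tensor shape [1, 12, 16, 16] mentioned later in the thread) is to flatten each spatial map, apply softmax over its values, and reshape back:

```python
import torch
import torch.nn.functional as F

# Hypothetical input: batch of 1, 12 channels, each a 16x16 map
x = torch.randn(1, 12, 16, 16)

# Flatten each 16x16 map to 256 values, softmax over them, reshape back
y = F.softmax(x.view(1, 12, -1), dim=-1).view(1, 12, 16, 16)

# Now each 16x16 map sums to 1 (instead of normalizing across channels)
per_map_sums = y.sum(dim=(-2, -1))  # shape (1, 12), all values ~1.0
```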
I think it worked, but isn’t y[0, 1] indexing the first two dimensions of y, which are (1, 12), and not the (16, 16) ones I actually want?
Another weird thing about what I got: in the first image in the first message you can see the values before the softmax. Isn’t it weird that there is only one value bigger than 0? Maybe that’s the biggest value, but there were other big values like 109/104/101 etc… Isn’t that weird!?
No, y[0, 1] will return a 16x16 tensor (y[0, 1].shape is torch.Size([16, 16])), so this should be fine. Indexing with two values selects from the first two dimensions and leaves the remaining (16, 16) dimensions intact.
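A quick check (using the [1, 12, 16, 16] shape from the question):

```python
import torch

# Indexing the first two dims selects one channel of one sample,
# leaving the spatial (16, 16) dims
y = torch.randn(1, 12, 16, 16)
print(y[0, 1].shape)  # torch.Size([16, 16])
```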
Also, that’s not really weird but expected, as nn.Softmax()(torch.tensor([130., 109., 104.])) will give you almost a 1 for the logit of 130. The difference between the logits is just large.
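To see why, note that softmax only depends on the gaps between logits; a gap of 21 already means a factor of exp(-21) ≈ 7.6e-10:

```python
import torch
import torch.nn as nn

logits = torch.tensor([130., 109., 104.])
probs = nn.Softmax(dim=0)(logits)

# exp(109 - 130) = exp(-21) ~ 7.6e-10 and exp(-26) ~ 5.1e-12,
# so the 130 logit gets essentially all of the probability mass
```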
Have a look at the manual implementation:
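(The code the reply refers to wasn’t quoted in this excerpt; a standard manual softmax, with the usual max-subtraction for numerical stability, would look something like this:)

```python
import torch

def manual_softmax(x, dim=-1):
    # Subtracting the max doesn't change the result but avoids
    # overflow in exp() for large logits like 130
    x = x - x.max(dim=dim, keepdim=True).values
    e = torch.exp(x)
    return e / e.sum(dim=dim, keepdim=True)

logits = torch.tensor([130., 109., 104.])
probs = manual_softmax(logits, dim=0)
```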
Hey, I would like to apply Softmax2d in a different way. I have an image tensor of shape [batch_size, channels, width, height], which I transformed using an FFT into a tensor of shape (batch_size, num_channels, num_freq, num_time).
You might need to apply the softmax operation on separate parts of the tensor or calculate e.g. the magnitude of the tensor. Alternatively, you could also check if a manual softmax implementation would work.
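For instance, since softmax is defined on real values, one option (a sketch, with made-up shapes) is to take the magnitude of the complex FFT output first and then apply softmax over the frequency dimension:

```python
import torch

# Hypothetical FFT output: complex tensor of shape
# (batch_size, num_channels, num_freq, num_time)
spec = torch.randn(2, 3, 16, 10, dtype=torch.cfloat)

# Magnitude is real and non-negative, so softmax is well defined on it
mag = spec.abs()

# Normalize over num_freq: each time step becomes a distribution over freqs
probs = torch.softmax(mag, dim=2)
```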