I was calling
nonzero() on a tensor and then getting the mean values, but it turns out that I will need to keep the shape of the original tensor, but just ignore the values that are 0 for the mean calculation, is there a way to do this?
I don’t understand what you mean by “keep the shape…”. A mean is a scalar, so you can’t “keep the shape”.
You can just filter them out, like mean = tensor[tensor != 0].mean()
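For example, with toy values chosen just for illustration, the boolean mask drops the zeros before averaging:

```python
import torch

t = torch.tensor([0.0, 2.0, 4.0, 0.0])
mean = t[t != 0].mean()  # boolean mask keeps only the two nonzero entries
print(mean.item())       # 3.0
```

Note this collapses everything to a single scalar, which is where the shape question below comes in.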
right, but the original tensor has a shape and I need to preserve the shape which would exist if I were to do…
x = torch.ones((2, 3, 4))
x.mean(dim=0)
which would be a
(3,4) tensor of the means along dimension 0. I have found that I could do this by summing the elements along the dimension and then somehow dividing by the count of nonzero elements along the same dimension…
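That sum-then-divide-by-count idea can be sketched like this (toy tensor chosen so the result is easy to check; the clamp is only there to avoid dividing by zero where an entire slice along dim 0 is zero):

```python
import torch

x = torch.ones((2, 3, 4))
x[0, 0, 0] = 0.0  # this entry should be ignored in the mean
x[1, 0, 0] = 3.0

mask = x != 0                      # True where the value should count
sums = x.sum(dim=0)                # sum along dim 0, shape (3, 4)
counts = mask.sum(dim=0)           # nonzero count per position, shape (3, 4)
mean = sums / counts.clamp(min=1)  # keeps the (3, 4) shape
print(mean[0, 0].item())           # 3.0 (only the nonzero entry counted)
```

The `(3, 4)` shape is preserved, and positions with a zero simply don't contribute to their slot's average.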
But then I realized my problem is a little bit harder than that because I am dealing with one tensor of values and then one probability distribution tensor with the same shape, which gives the probabilities of those values…
x = torch.rand((2, 3, 4))
y = torch.distributions.Normal(torch.randn((2, 3, 4)), torch.randn((2, 3, 4)).exp())
pr = y.log_prob(x)
...
at this point I need to get the mean along dimension 0 of pr, ignoring the positions where x is 0, but those positions are not 0 in pr because of the log_prob call… very confusing how to go about it
So then use a mask.
If I got it right, you have a tensor (N, M, P):

mask = x != 0
y_mean = (pr * mask).sum(dim=0) / mask.sum(dim=0)

I think this would fit what you want, right?
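Putting the masking together with the log_prob setup above (a positive scale via .exp() and a fixed seed are assumed here just to make the sketch reproducible):

```python
import torch

torch.manual_seed(0)
x = torch.rand((2, 3, 4))
y = torch.distributions.Normal(torch.randn((2, 3, 4)), torch.randn((2, 3, 4)).exp())
pr = y.log_prob(x)

mask = x != 0                                      # the mask comes from x, not from pr
y_mean = (pr * mask).sum(dim=0) / mask.sum(dim=0)  # shape (3, 4)
```

The key point is that the mask is built from x (where the zeros live), then applied to pr, so positions that are zero in x are excluded from the per-slot average even though they are nonzero in pr.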
Yes, I believe that will work. Thanks for taking the time to show me the way.
You could also use .nonzero(), reshape back to the original dimensions, and then do torch.mean() as normal.