Layer-wise non-zero count of a tensor using torch.count_nonzero()

Let's say `a = torch.rand(1, 4, 2, 2)` and `count_a = torch.zeros(4)`.

I want the non-zero count of a[0, 0, :, :] stored in count_a[0], of a[0, 1, :, :] in count_a[1], and so on.

If I use the code below, it stores the total non-zero count of `a` into `count_a`:

import torch

a = torch.rand(1, 4, 2, 2)
count_a = torch.count_nonzero(a)
print("a value is ", a)
print("count is ", count_a)


a value is  tensor([[[[0.8222, 0.4329],
          [0.4672, 0.2243]],

         [[0.3086, 0.6668],
          [0.2699, 0.3848]],

         [[0.0251, 0.9675],
          [0.7624, 0.8663]],

         [[0.4905, 0.8086],
          [0.0784, 0.4926]]]])

count is  tensor(16)

I want the output to be tensor([4, 4, 4, 4]), since each layer has 4 non-zero values, instead of tensor(16).

I don’t quite understand the expected output shape.
Your input tensor a has 16 values (1*4*2*2 = 16), so count_nonzero can return a single value in [0, 16], while your expected output seems to contain more values than that.
Could you explain your use case a bit more?

If `a = torch.rand(1, 4, 2, 2)`, think of it as 4 layers, each holding 2x2 values. If `a = torch.rand(1, 8, 2, 2)`, think of it as 8 layers of 2x2 values.

I want the layer-wise non-zero count. So for `a = torch.rand(1, 4, 2, 2)`, if all 2x2 values in each of the 4 layers are non-zero, I want the output to be count_a = [4, 4, 4, 4]. Sorry, I forgot to mention that count_a has shape `torch.Size([4])` and I want the output to be tensor([4., 4., 4., 4.]).

If `a = torch.rand(1, 8, 2, 2)` and all 2x2 values in each of the 8 layers are non-zero, I want the output to be tensor([4., 4., 4., 4., 4., 4., 4., 4.]).

Thanks for the clarification!
I misunderstood the [4, 4, 4, 4] output as the shape, not the values.
In that case, you could flatten the last two dimensions and use the dim argument in count_nonzero:

a = torch.rand(1, 4, 2, 2)
count_a = torch.count_nonzero(a.view(1, 4, -1), dim=2)
print("a value is ", a)
print("count is ", count_a)
> count is  tensor([[4, 4, 4, 4]])
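As a side note, a possible alternative sketch, assuming a PyTorch version where `count_nonzero` accepts a tuple of dims (the example tensor below is made up so the per-layer counts differ):

```python
import torch

# Hypothetical input of shape (1, 4, 2, 2): four 2x2 "layers",
# with a few zeros placed so the per-layer counts are distinguishable.
a = torch.tensor([[[[1., 2.], [3., 4.]],
                   [[5., 0.], [6., 7.]],
                   [[8., 9.], [0., 0.]],
                   [[1., 1.], [1., 1.]]]])

# Reduce over every dim except the layer dim (dim 1), no view needed.
count_a = torch.count_nonzero(a, dim=(0, 2, 3))
print(count_a)  # tensor([4, 3, 2, 4])
```

This also removes the leading batch dim from the result, so the output shape is `torch.Size([4])` directly rather than `(1, 4)`.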

Thanks. That's exactly what I want.