How to maintain zeros for zero input vectors in a Linear layer

For example, my data is like this:

import torch

tensor_1 = torch.tensor([
    [1, 2, 3],
    [2, 3, 4],
    [0, 0, 0]
])

I defined a linear layer like:

layer = torch.nn.Linear(3, 10)
output = layer(tensor_1.float())

and the output would be like:

tensor([[-1.6636, -2.1729, -1.6064,  0.4921,  0.5485, -0.5628,  0.8863,  1.1453,
          0.6818, -0.2417],
        [-2.6192, -3.2155, -1.9747,  0.6968,  0.9965, -0.5925,  0.7474,  1.9600,
          0.8981, -0.3384],
        [ 0.4313, -0.0326, -0.2426,  0.1669, -0.4219, -0.1127,  0.3835, -0.2567,
          0.4994, -0.0233]], grad_fn=<AddmmBackward>)

Is there a way to keep zeros in the output for the last row?
I can imagine a “mask” approach that uses `masked_fill` on the output. Is there another way?
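
Something like this rough sketch is what I had in mind (assuming the all-zero rows are padding that should stay zero after the layer):

# Sketch: zero out output rows whose input row was all zeros.
zero_rows = (tensor_1 == 0).all(dim=1)                    # True for the all-zero input row
output = output.masked_fill(zero_rows.view(-1, 1), 0.0)   # broadcasts over the feature dim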

You could remove the bias from the linear layer or alternatively multiply the output with a mask. Note that with a bias, a zero input still produces `output = bias`, which is why your last row is non-zero.
What is your use case where you pass an all-zero sample and expect an all-zero output?
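
As a minimal sketch of the bias-free option: without the bias term the layer computes just `x @ W.T`, so an all-zero row maps to an all-zero row.

import torch

layer = torch.nn.Linear(3, 10, bias=False)  # no additive bias term
x = torch.zeros(2, 3)                       # all-zero inputs
print(layer(x))                             # all zeros, since 0 @ W.T == 0

Keep in mind this only helps if the whole model is bias-free; any later layer with a bias (or an activation like a shifted sigmoid) will make those rows non-zero again.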

Forgive me if I didn’t understand you correctly, but here is one option if you’re trying to maintain an all-zero output.

# Rows with any non-zero entry keep their values; all-zero rows are zeroed out.
# (tensor_1 != 0).any(...) is safer than sum(dim=1) > 0, which would also
# zero out rows whose entries happen to sum to zero.
output *= (tensor_1 != 0).any(dim=1).view(-1, 1)
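
Applied to the example above, `output[2]` becomes all zeros while the other rows are unchanged. A side effect worth noting: the mask also blocks gradients from the padded rows during the backward pass, which is usually what you want for padding.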

Thanks for replying. I’m using bottom-up features, which sometimes means no bounding box is detected in an image; for those images I substitute all-zero features. So for the zero features, I want to maintain zeros throughout the whole model.

Thanks for replying. This is a workable solution.