What is the best way to append scalar to tensor


I need to know the best (i.e. most efficient) way to append a scalar value (i.e. a 0-dimensional tensor) to a tensor with a multidimensional shape. I tried torch.cat and torch.stack, but both require the dimensions to match. I could unsqueeze the scalar value, but I wonder if there is a better solution.


Unsqueezing / viewing seems to be the right thing to do.
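A minimal sketch of the unsqueeze approach (the tensor values here are just illustrative):

```python
import torch

t = torch.arange(4.0)   # shape (4,)
s = torch.tensor(5.0)   # 0-d scalar tensor

# Promote the scalar to shape (1,) so torch.cat can match dimensions.
appended = torch.cat([t, s.unsqueeze(0)])
print(appended.shape)   # torch.Size([5])
```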

You didn’t ask, but just in case: it is good to avoid loops of incremental cats; instead, collect the tensors to concatenate in a list and cat them in one go.
An alternative, if you know the final size in advance, is to allocate a torch.empty of the new size and assign into it.
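Both patterns can be sketched as follows (sizes and values are arbitrary):

```python
import torch

n = 100

# Pattern 1: collect into a Python list, then concatenate once.
chunks = [torch.full((1,), float(i)) for i in range(n)]
out = torch.cat(chunks)

# Pattern 2: when the final size is known, preallocate and assign.
pre = torch.empty(n)
for i in range(n):
    pre[i] = float(i)

print(torch.equal(out, pre))  # True
```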


Thanks! The problem is that I am trying to implement a recursive (auto-regressive) filter, and unfortunately PyTorch only supports FIR filtering (i.e. just multiply and add, without auto-regression) via conv1d. That’s why I need a loop: I filter the signal on a sample-by-sample basis and append each output sample so it can influence the next samples, and so on.
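For reference, such a sample-by-sample loop might look as follows. This is a minimal first-order IIR sketch, not your actual filter; the coefficients b0 and a1 are chosen arbitrarily, and it follows the collect-then-cat advice above:

```python
import torch

def iir_first_order(x, b0=0.5, a1=0.3):
    # y[n] = b0 * x[n] + a1 * y[n-1] -- the feedback term forces a loop.
    y_prev = torch.zeros(())   # 0-d state tensor
    outputs = []               # collect scalars; cat once at the end
    for xn in x:
        y_prev = b0 * xn + a1 * y_prev
        outputs.append(y_prev.unsqueeze(0))
    return torch.cat(outputs)

x = torch.ones(4)
print(iir_first_order(x))  # tensor([0.5000, 0.6500, 0.6950, 0.7085])
```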

Thank y’all for the discussion! I ended up taking the unsqueeze + squeeze route; I needed to understand a problem that a colleague ran into (and I’m sure I’ll run into it myself at some point :stuck_out_tongue:):

import torch

def scalars_to_tensor_good(*a):
    """Concatenate 0-d tensors into one tensor. Will preserve gradient!"""
    # https://discuss.pytorch.org/t/how-to-concatenate-to-a-tensor-with-a-0-dimension/6478
    # https://discuss.pytorch.org/t/what-is-the-best-way-to-append-scalar-to-tensor/54445/3
    a = [ai.unsqueeze(0) for ai in a]  # promote each 0-d tensor to shape (1,)
    # squeeze(0) only collapses the result back to 0-d when a single
    # scalar was passed; for multiple scalars it is a no-op.
    return torch.cat(a).squeeze(0)
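A quick, self-contained check that gradients flow through the result (the function is repeated here so the snippet runs on its own):

```python
import torch

def scalars_to_tensor_good(*a):
    a = [ai.unsqueeze(0) for ai in a]
    return torch.cat(a).squeeze(0)

x = torch.tensor(2.0, requires_grad=True)
y = torch.tensor(3.0, requires_grad=True)
t = scalars_to_tensor_good(x, y)   # shape (2,); squeeze(0) is a no-op here
t.sum().backward()
print(x.grad, y.grad)              # tensor(1.) tensor(1.)
```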