Question on broadcasting

I am trying to vectorize some code; however, it is proving to be a lot more difficult than I initially expected. Any help would be greatly appreciated.

My first objective is to create a tensor with the same dimensions as another tensor, but with one extra dimension. Right now I have this, and it seems to generalize quite well to different dimension sizes:

given some tensor a that has size = [s1, s2, … , sn] (n dimensions),

import numpy as np
import torch

b = torch.zeros_like(a.unsqueeze(0))  # a is the given tensor; prepend a size-1 dim
b = b.numpy()                         # hop into NumPy to use np.repeat
b = np.repeat(b, 3, axis=0)           # tile the new leading dim to size 3
b = torch.from_numpy(b.squeeze())     # back to torch (squeeze is a no-op here unless some s_i == 1)

Now I end up with a tensor b with size = [3, s1, s2, …, sn], an (n+1)-dimensional tensor.
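For example (just a quick shape check, not part of my real code), with a of size [4, 5], so n = 2:

a = torch.randn(4, 5)
b = torch.zeros_like(a.unsqueeze(0))
b = torch.from_numpy(np.repeat(b.numpy(), 3, axis=0))
print(b.shape)  # torch.Size([3, 4, 5])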

My issue arises with the following bit. If I have another tensor, say c = [0, 1, 2], how can I multiply c by b element-wise so that the multiplication results in something like this:

b[0,:,:, … ,:] will be multiplied by all 0’s,
b[1,:,:, … ,:] will be multiplied by all 1’s and so on?

i.e., I multiply each element of the 3 (initially identical) sub-tensors along the 0th dimension by 0, 1, and 2 respectively?
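Concretely, for a made-up b of all ones with shape [3, 2], the result I'm after would look like this:

b = torch.ones(3, 2)            # 3 identical slices along dim 0
c = torch.tensor([0., 1., 2.])
# desired result: slice i of b multiplied by c[i]
# tensor([[0., 0.],
#         [1., 1.],
#         [2., 2.]])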

I’ve tried using the expand function, but what I’m doing doesn’t work well.

Again, any help would be much appreciated!

This should do the trick:

b = torch.ones(3, 5, 5, 5)
c = torch.tensor([0., 1., 2.])
d = b * c.view(-1, 1, 1, 1)
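The view reshapes c from [3] to [3, 1, 1, 1]; broadcasting then stretches the size-1 dims to match b, so each slice along dim 0 is scaled by the corresponding entry of c. A quick check:

print(d.shape)        # torch.Size([3, 5, 5, 5])
print(d[0].unique())  # tensor([0.])
print(d[2].unique())  # tensor([2.])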

Hi @ptrblck,

Thanks for the quick response! Much appreciated!

Part of the reason I've gone about this in such a confusing way is that I won't know the number of dimensions of the initial tensor a in advance. That's where I run into trouble using view.

Pretty ugly, but should work:

d = b * c.view([-1] + [1] * (b.ndimension() - 1))  # reshape c to [3, 1, ..., 1], matching b's rank

I'll try to come up with a cleaner solution. :wink:
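For completeness, here is the same reshape trick wrapped in a small helper (just a sketch of the approach above; the helper name is made up):

import torch

def scale_slices(b, c):
    # Reshape c to [len(c), 1, ..., 1] with (b.dim() - 1) trailing ones;
    # broadcasting then multiplies slice i of b (along dim 0) by c[i].
    return b * c.reshape(-1, *([1] * (b.dim() - 1)))

b = torch.ones(3, 5, 5, 5)
c = torch.tensor([0., 1., 2.])
d = scale_slices(b, c)
print(d[1].unique())  # tensor([1.])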

@ptrblck Thanks so much!