# Torch expand error

```python
a = torch.zeros((1, 1, 100))
a.expand(1, 2, 100)
```

Works.

```python
a = torch.zeros((1, 2, 100))
a.expand(1, 4, 100)
```

gives an error:

```
RuntimeError: The expanded size of the tensor (4) must match the existing size (2) at non-singleton dimension 1
```

Why is this?

Also, I am trying to do batched beam search with batch size = 2 and beam size = 2. The hidden state coming out of the encoder has dimension 1x2x100 (I don't consider the beam there). Since the decoder has to be fed two initial states per sentence, I need to make it 1x4x100. Is this approach right?

Here are my answers (suggestions only):

```python
x = torch.tensor([[1], [2], [3]])
x.size()
# torch.Size([3, 1])
x.expand(3, 4)
# tensor([[1, 1, 1, 1],
#         [2, 2, 2, 2],
#         [3, 3, 3, 3]])
```

From the above example it is clear that `expand()` returns a new view of the existing tensor. In the first case it didn't throw an error because the tensor has shape [1, 1, 100]: the dimension being expanded is a singleton (size 1), and a singleton dimension can be broadcast to any size. Not so in the second case, where dimension 1 already has size 2, which is non-singleton and therefore cannot be expanded.
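If you do need to go from size 2 to size 4 along that dimension, `expand()` alone can't do it, but `repeat()`, which physically copies data instead of creating a view, can. A minimal sketch:

```python
import torch

a = torch.zeros(1, 2, 100)
# expand() fails here because dim 1 (size 2) is not a singleton,
# but repeat() tiles the data along that dimension instead:
b = a.repeat(1, 2, 1)  # tile dim 1 twice -> shape 1 x 4 x 100
print(b.shape)         # torch.Size([1, 4, 100])
```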

**Decoder and Encoder (my thinking)**

The hidden dimension is 1x2x100: the 2 indicates the batch size, I think, and you don't need to pass anything of shape 1x4x100, as the PyTorch net itself will handle this.

Hi, thanks Jaya!

For the decoder and encoder, I also thought this initially.

But going forward we would have 2 hidden states for each sentence (with beam size = 2 there are two hypotheses per sentence, each giving a new hidden state). Which dimension would PyTorch put the 4 hidden states into?

I would need to feed four hidden states (2 sentences × 2 beams); which axis's dimension should be increased?

I am not sure, as I can't answer until I see the net.
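Since the thread leaves the 1x2x100 → 1x4x100 question open, here is one common pattern, as a sketch only (my assumption about the intended layout, not something confirmed in this thread): widen the batch axis (dim 1) so that each sentence's encoder state appears `beam_size` times. `repeat_interleave` keeps the beams of each sentence adjacent, i.e. `[sent0_beam0, sent0_beam1, sent1_beam0, sent1_beam1]`, which makes it easy to reshape back to (batch, beam, hidden) later; plain `repeat` would instead give `[sent0, sent1, sent0, sent1]`.

```python
import torch

batch_size, beam_size, hidden_size = 2, 2, 100  # sizes from the question

# stand-in for the encoder's final hidden state: shape 1 x batch x hidden,
# with sentence index as the value so the layout is easy to inspect
h = (torch.arange(batch_size, dtype=torch.float)
     .view(1, batch_size, 1)
     .expand(1, batch_size, hidden_size))

# duplicate each sentence's state beam_size times along the batch axis
h_beam = h.repeat_interleave(beam_size, dim=1)  # shape 1 x 4 x 100
print(h_beam.shape)        # torch.Size([1, 4, 100])
print(h_beam[0, :, 0])     # tensor([0., 0., 1., 1.]) -> beams grouped per sentence
```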