```
x = torch.tensor([[1, 2, 3], [4, 5, 6]])   # x has shape (2, 3); broadcasts as (1, 2, 3)
c = torch.tensor([1, 10, 11, 100])         # c has shape (4,); reshape to (4, 1, 1)
y = c.view(-1, 1, 1)                       # y has shape (4, 1, 1)
# x * y will have shape (4, 2, 3)
```

so when we have (2, 3) for x and (4,) for the **c**onstant

first, we extend (4,) to (4, 1)

but

x shape (2, 3)

y shape (4, 1)

will not work because of broadcasting semantics: aligning from the trailing dimension, 3 vs 1 broadcasts fine, but then 2 vs 4 fails since neither size is 1.
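To check this concretely, the failing case can be reproduced (a minimal sketch, using the same tensor values as the snippet above):

```
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])        # shape (2, 3)
y = torch.tensor([1, 10, 11, 100]).view(-1, 1)  # shape (4, 1)

try:
    x * y
except RuntimeError as e:
    # the trailing 3 vs 1 is fine, but 2 vs 4 cannot broadcast
    print("broadcast failed:", e)
```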

https://pytorch.org/docs/stable/notes/broadcasting.html

hence, we need the shapes to be

(1, 2, 3)

(4, 1, 1)
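With those shapes every aligned pair of sizes is either equal or 1, so the product works; a quick check:

```
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])            # shape (2, 3), treated as (1, 2, 3)
y = torch.tensor([1, 10, 11, 100]).view(-1, 1, 1)   # shape (4, 1, 1)

out = x * y
print(out.shape)  # torch.Size([4, 2, 3])
```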

it can't be this

(is putting the 1 at the rear of x, instead of at the front, wrong because it fails the broadcast semantics?)

(2, 3, 1) and

(4, 1, 1)
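Indeed, with the 1 appended at the rear of x, the sizes 2 and 4 end up aligned in the leading dimension, and since neither is 1 it errors:

```
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]]).view(2, 3, 1)  # shape (2, 3, 1)
y = torch.tensor([1, 10, 11, 100]).view(-1, 1, 1)       # shape (4, 1, 1)

try:
    x * y
except RuntimeError as e:
    # leading dimensions 2 vs 4: neither is 1, so broadcasting fails
    print("broadcast failed:", e)
```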

Am I correct up to here?

Is there an easier way to reason about broadcasting?

Thanks.