# How to do elementwise multiplication of two vectors?

I have two vectors, each of length n, and I want the elementwise multiplication of the two vectors. The result will be a vector of length n.

2 Likes

You can simply use `a * b` or `torch.mul(a, b)`.
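For example, a minimal sketch showing that both forms give the same elementwise (Hadamard) product:

```python
import torch

a = torch.tensor([1., 2., 3.])
b = torch.tensor([4., 5., 6.])

# Both forms compute the elementwise (Hadamard) product
c1 = a * b
c2 = torch.mul(a, b)
print(c1)  # tensor([ 4., 10., 18.])
```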

18 Likes

Both give the dot product of the two vectors. I want elementwise multiplication.

1 Like

ex. a = (a1, a2, …, an) and b = (b1, b2, …, bn)
I want c = (a1*b1, a2*b2, …, an*bn)

Well this works in my case. Do you get a scalar running this?

```python
import torch

a = torch.randn(10)
b = torch.randn(10)
c = a * b
print(c.shape)  # torch.Size([10]), i.e. still a vector, not a scalar
```
7 Likes

It is giving a tensor of size 10. I must have made some other mistake. Thanks.

3 Likes

Hello everyone, I have a question. Does the `*` operator run on the GPU?

If both tensors are stored on the GPU, then yes.
The operations are written for CPU and GPU tensors, so as long as the data is pushed to the device, the GPU will be used.
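A minimal sketch of this, guarded so it also runs on machines without a GPU:

```python
import torch

# Fall back to CPU when no CUDA device is available
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(10, device=device)
b = torch.randn(10, device=device)

# When both tensors live on the GPU, the multiplication runs there
c = a * b
print(c.device)
```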

5 Likes

It's lucky for the PyTorch users to have you always here.

6 Likes

thank you very much!

Hi there,

Is there a rule of thumb regarding mixing CPU and GPU based variables? As in, if I have a tensor going through a function with a mix of CPU-stored (non-tensor) and GPU-stored variables, is it worth it to declare each variable as a torch tensor on the GPU at the beginning of the function call?

Do you mean plain Python variables by "CPU-stored (non-tensor) variables", e.g. like `x = torch.randn(1) * 1.0`?

Generally you should transfer the data to the same device, if you are working with tensors.
However, you won't see much difference if you are using scalars, as the wrapping will be done automatically.
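A small sketch of that automatic wrapping (again guarded so it runs with or without a GPU): multiplying by a plain Python scalar does not move the tensor off its device.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.randn(5, device=device)

# The Python float 2.0 is wrapped automatically; the result
# stays on the same device as the tensor
out = t * 2.0
print(out.device == t.device)  # True
```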

2 Likes

Yeah, exactly. I'm venturing a little off the path and wasn't sure if it was reasonable or if the scalars would pull the tensor from the GPU for computation (outside of a module).

Thanks for taking the time!

1 Like