How to do elementwise multiplication of two vectors?

I have two vectors, each of length n, and I want the element-wise multiplication of the two vectors. The result will be a vector of length n.

3 Likes

You can simply use a * b or torch.mul(a, b).
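For example (just a quick sketch with arbitrary shapes), both forms return a tensor of the same shape as the inputs, while torch.dot is the one that collapses to a scalar:

import torch

a = torch.randn(5)
b = torch.randn(5)

# Element-wise product: both forms give the same result
c1 = a * b
c2 = torch.mul(a, b)
print(torch.equal(c1, c2))    # True
print(c1.shape)               # torch.Size([5])

# For comparison, torch.dot returns a 0-dim tensor (the sum of the element-wise products)
print(torch.dot(a, b).shape)  # torch.Size([])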

23 Likes

Both give the dot product of the two vectors. I want element-wise multiplication.

1 Like

E.g. a = (a1, a2, …, an) and b = (b1, b2, …, bn).
I want c = (a1*b1, a2*b2, …, an*bn).

Well, this works in my case. Do you get a scalar when running this?

import torch

a = torch.randn(10)
b = torch.randn(10)
c = a * b          # element-wise product
print(c.shape)
7 Likes

It is giving a tensor of size 10. I must have made some other mistake. Thanks.

3 Likes

Hello, everyone. I have a question: does the * operator run on the GPU?

If both tensors are stored on the GPU, then yes.
The operations are written for CPU and GPU tensors, so as long as the data is pushed to the device, the GPU will be used.
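As a small sketch (assuming a CUDA device is available), you can check this by creating both tensors on the GPU and inspecting the result's device:

import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    a = torch.randn(10, device=device)
    b = torch.randn(10, device=device)
    c = a * b            # the multiplication runs on the GPU
    print(c.device)      # cuda:0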

7 Likes

It's lucky for PyTorch users to have you always here.

6 Likes

Thank you very much!

Hi there,

Is there a rule of thumb regarding mixing CPU- and GPU-based variables? As in, if I have a tensor going through a function with a mix of CPU-stored (non-tensor) variables and GPU-stored variables, is it worth declaring each variable as a torch tensor on the GPU at the beginning of the function call?

Do you mean plain Python variables by “CPU-stored (non-tensor) variables”, e.g. x = torch.randn(1) * 1.0?

Generally, you should transfer the data to the same device if you are working with tensors.
However, you won't see much of a difference if you are using Python scalars, as the wrapping will be done automatically.
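For instance (a minimal sketch, again assuming a CUDA device), multiplying a GPU tensor by a plain Python float does not move the tensor back to the CPU; the scalar is wrapped for you:

import torch

if torch.cuda.is_available():
    x = torch.randn(10, device="cuda")
    scale = 0.5          # plain Python float
    y = x * scale        # stays on the GPU; the scalar is wrapped automatically
    print(y.device)      # cuda:0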

2 Likes

Yeah, exactly. I’m venturing a little off the path and wasn’t sure if it was reasonable or if the scalars would pull the tensor from the GPU for computation (outside of a module).

Thanks for taking the time! 🙂

1 Like