Matrix operations with PyTorch

Hello.

I was previously using several for loops for the use case below, but it’s very slow because my real tensors are much larger. I looked at the documentation of torch.where, but the pattern in my tensors is very irregular, so I am stuck.
I would be thankful if someone could provide a dummy example. :pray:

Use case:
I have two 2D tensors say A and B, both of same shape.
Tensor-A, say torch.randn(5,5), is

[ 0.1923, 1.6150, -0.4331, 0.7061, 1.7127],
[-0.4912, 0.5317, -1.0820, -0.2575, 0.3446],
[ 0.3956, 0.7645, -0.7015, 0.0574, 0.6930],
[ 0.4492, 0.0215, -1.1855, -1.1453, 0.3912],
[-0.5674, -0.8794, -0.1316, 0.4391, -1.0830]

Tensor-B consists of only three unique integer values (say 1, 2, 3). Each row consists of some 1s, 2s and 3s, but in different proportions. Say Tensor-B, from torch.randint(1,4,(5,5)), is
[3, 1, 1, 1, 2],
[3, 1, 2, 3, 3],
[3, 2, 2, 1, 1],
[2, 2, 3, 3, 1],
[2, 3, 3, 1, 1]

What I want to do is,

For each row,

  1. Sum all the values of Tensor-A at the indices of all the 1s in Tensor-B.
  2. Divide each value of Tensor-A at the indices of all the 2s in Tensor-B by the sum from the previous step.
  3. Take the exponential of each of those divided values and sum them, again per row.

i.e. for the above tensors, if I take the 3rd rows of A and B:

0.0574+0.6930 = 0.7504
(0.7645/0.7504)= 1.019, (-0.7015/0.7504)= -0.9348
exp(1.019)+exp(-0.9348) = 3.163

Similarly, after doing the same for each row, the final result is Row 1’s value + Row 2’s value + 3.163 + …
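For reference, this is roughly the loop version I am using now (a sketch with my own variable names; it matches the steps above but becomes very slow at larger sizes):

import torch

A = torch.randn(5, 5)
B = torch.randint(1, 4, (5, 5))

total = 0.0
for i in range(A.shape[0]):
    # step 1: sum of A over the positions where B == 1 in this row
    s = A[i][B[i] == 1].sum()
    # steps 2 and 3: divide the B == 2 positions by that sum, exponentiate, sum
    total += torch.exp(A[i][B[i] == 2] / s).sum()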

Hi!

I don’t know of a fully compact way to achieve this; however, the function torch.mul might be useful for this task. Here’s a possible solution:

import torch

A = torch.randn(5, 5)
B = torch.randint(1, 4, (5, 5)).float()
Z = torch.zeros(5, 5)

# I_n is an indicator matrix: I_n(i,j) = 1 if B(i,j) = n, else 0
I1 = torch.where(B == 1, torch.ones(5, 5), Z)
I2 = torch.where(B == 2, torch.ones(5, 5), Z)
I3 = torch.where(B == 3, torch.ones(5, 5), Z)

# note that torch.mul is elementwise, unlike torch.mm
step_1 = torch.sum(torch.mul(I1, A), dim=1)

# turn the row sums into a column and repeat so each row of A
# is divided by its own row’s sum
pre_step_2 = step_1.reshape(-1, 1).repeat(1, 5)
step_2 = torch.div(torch.mul(I2, A), pre_step_2)

# this sum also covers the positions not corresponding to 2;
# those entries are 0, so each contributes exp(0) = 1
step_3 = torch.sum(torch.exp(step_2))

# subtract the number of non-2 entries to cancel those spurious 1s
result = step_3 - torch.sum(I1 + I3)
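
If you are on a recent PyTorch version, a more compact equivalent uses boolean masks and broadcasting (a sketch with the same A and B as above; masked_select drops the non-2 positions, so no correction term is needed):

mask1 = (B == 1)
mask2 = (B == 2)

# per-row sum over the B == 1 positions, kept as a column for broadcasting
row_sums = (A * mask1).sum(dim=1, keepdim=True)

# divide by the row sums, keep only the B == 2 positions, exponentiate, sum
result = torch.exp((A / row_sums).masked_select(mask2)).sum()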

Thanks @Emre_Yalcinoglu for the reply.

I have one more question.
Say we have a Tensor-B of shape 12x12 (which we can imagine as divided into 4x4 sub-matrices, both row- and column-wise), and initially B contains only 1s and 2s.

Now I want to replace all the entries of the main diagonal, plus the 5th, 6th, 7th, 9th, 10th and 11th upper and lower diagonals, with 3s, and I don’t know what a fast way to do this would be.

Have a look at torch.diag_embed and torch.diagonal. You can also do the replacement with torch.where.
I would suggest writing your own short helper functions (or lambdas) to handle such tasks.
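
For example, a minimal sketch using diagonal views (assuming the offsets you listed; Tensor.diagonal returns a view, so fill_ writes into B in place):

import torch

B = torch.randint(1, 3, (12, 12))  # initially only 1s and 2s

# main diagonal plus the 5th, 6th, 7th, 9th, 10th and 11th
# upper and lower diagonals
offsets = [0, 5, 6, 7, 9, 10, 11, -5, -6, -7, -9, -10, -11]

for k in offsets:
    B.diagonal(offset=k).fill_(3)

If you need an out-of-place version instead, you can build a boolean mask from torch.arange index differences and pass it to torch.where.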