Element-wise multiplication

How can I do this multiplication?

Let's assume two tensors:

x = torch.ones(9, 9)
y = torch.randn(3, 3)

x can be imagined as a tensor of 9 blocks (sub-matrices), each of size (3, 3).
I want to do element-wise multiplication of each (3, 3) block with y, so that the resulting tensor has the same size as x.

This task is analogous to a convolution, where x and y play the roles of input and filter with stride=3, except that the output size should be the same as the input!
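For concreteness, here is a naive loop over the blocks that produces what I want (just for illustration, not how I intend to implement it):

```python
import torch

x = torch.ones(9, 9)
y = torch.randn(3, 3)

# Multiply each (3, 3) block of x element-wise by y.
out = torch.empty_like(x)
for i in range(0, 9, 3):
    for j in range(0, 9, 3):
        out[i:i + 3, j:j + 3] = x[i:i + 3, j:j + 3] * y
```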

Hi Hdk!

Here’s a scheme that involves creating an auxiliary block matrix:

>>> x = torch.ones (9, 9)
>>> y = torch.randn (3, 3)
>>> block_prod = (x.unsqueeze (0).unsqueeze (0) * torch.nn.functional.interpolate (y.unsqueeze (0).unsqueeze (0), 9)).squeeze()
>>> print (y)
tensor([[ 0.6211, -0.8269,  0.1364],
        [-0.0908, -1.8126, -0.3683],
        [-1.2883, -0.7424, -1.0893]])
>>> print (block_prod)
tensor([[ 0.6211,  0.6211,  0.6211, -0.8269, -0.8269, -0.8269,  0.1364,  0.1364,
          0.1364],
        [ 0.6211,  0.6211,  0.6211, -0.8269, -0.8269, -0.8269,  0.1364,  0.1364,
          0.1364],
        [ 0.6211,  0.6211,  0.6211, -0.8269, -0.8269, -0.8269,  0.1364,  0.1364,
          0.1364],
        [-0.0908, -0.0908, -0.0908, -1.8126, -1.8126, -1.8126, -0.3683, -0.3683,
         -0.3683],
        [-0.0908, -0.0908, -0.0908, -1.8126, -1.8126, -1.8126, -0.3683, -0.3683,
         -0.3683],
        [-0.0908, -0.0908, -0.0908, -1.8126, -1.8126, -1.8126, -0.3683, -0.3683,
         -0.3683],
        [-1.2883, -1.2883, -1.2883, -0.7424, -0.7424, -0.7424, -1.0893, -1.0893,
         -1.0893],
        [-1.2883, -1.2883, -1.2883, -0.7424, -0.7424, -0.7424, -1.0893, -1.0893,
         -1.0893],
        [-1.2883, -1.2883, -1.2883, -0.7424, -0.7424, -0.7424, -1.0893, -1.0893,
         -1.0893]])

(You don’t actually need to unsqueeze() x, as the element-wise
multiplication (*) will broadcast. I just put that in to make it a little more
clear what is going on.)
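As an aside (a sketch of the same idea, so treat it as illustrative): with the default 'nearest' mode, interpolate() simply repeats each entry of y three times along each dimension, so the whole thing can be written without unsqueezing x at all:

```python
import torch

x = torch.ones(9, 9)
y = torch.randn(3, 3)

# interpolate() expects an (N, C, H, W) input, hence the two unsqueezes on y.
# In 'nearest' mode (the default), upsampling 3 -> 9 repeats each element of y
# into a 3x3 block of identical values.
y_big = torch.nn.functional.interpolate(y.unsqueeze(0).unsqueeze(0), 9).squeeze()
block_prod = x * y_big  # broadcasting handles the rest; no unsqueeze of x needed
```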

Best.

K. Frank

Hi @KFrank,

Thanks for replying. I think you misunderstood what I want to achieve; however, your illustration gave me another idea that might work.

Cheers!

Hi Hdk!

Yes, I do believe that I misunderstood your goal.

Am I right that you want each block of what I called the “auxiliary block
matrix” to be a copy of your matrix y?

If so, I think that a new feature (as of 1.8?), torch.kron(), might do what
you need. Here’s an illustrative script:

import torch
torch.__version__

x = torch.randn (9, 9)
print (x)
y = torch.arange (9).reshape (3, 3).float()
print (y)
y_block = torch.kron (torch.ones (3, 3), y)
print (y_block)
block_prod = x * y_block
print (block_prod)

And here is its output:

>>> import torch
>>> torch.__version__
'1.8.0.dev20201203'
>>>
>>> x = torch.randn (9, 9)
>>> print (x)
tensor([[ 0.7726, -0.5733,  1.6097,  1.5556,  0.6953, -0.3901, -0.2402,  0.1618,
         -0.6779],
        [ 1.2874, -2.0785, -0.1511, -0.4685, -0.4369,  0.1307, -0.5300, -0.2147,
         -0.1542],
        [-0.4025,  0.4115,  0.2223, -0.5108, -1.6646,  1.8222,  0.7601,  0.7101,
         -0.2345],
        [ 0.3181,  1.2334,  0.4891, -2.5712,  0.5123, -1.5413, -0.1983, -0.2145,
          1.3856],
        [ 1.1100, -0.3581, -0.2157,  0.4873, -0.8274, -0.4914,  0.4379,  0.6812,
         -0.8991],
        [-1.1600,  0.7685,  0.5095,  0.3122,  1.4661, -0.6632,  0.2715, -0.4530,
          0.4327],
        [-0.8347, -2.8993, -2.7608, -0.7906,  0.3208, -1.0284,  0.9308, -1.3254,
         -1.6779],
        [-0.3797,  0.7627,  0.6850,  0.5627,  0.4745, -0.9528, -1.2549,  0.4859,
         -0.0178],
        [-1.8941,  0.0079,  2.0205,  0.1333, -0.3190,  1.7404, -0.1067,  0.5844,
         -0.0513]])
>>> y = torch.arange (9).reshape (3, 3).float()
>>> print (y)
tensor([[0., 1., 2.],
        [3., 4., 5.],
        [6., 7., 8.]])
>>> y_block = torch.kron (torch.ones (3, 3), y)
>>> print (y_block)
tensor([[0., 1., 2., 0., 1., 2., 0., 1., 2.],
        [3., 4., 5., 3., 4., 5., 3., 4., 5.],
        [6., 7., 8., 6., 7., 8., 6., 7., 8.],
        [0., 1., 2., 0., 1., 2., 0., 1., 2.],
        [3., 4., 5., 3., 4., 5., 3., 4., 5.],
        [6., 7., 8., 6., 7., 8., 6., 7., 8.],
        [0., 1., 2., 0., 1., 2., 0., 1., 2.],
        [3., 4., 5., 3., 4., 5., 3., 4., 5.],
        [6., 7., 8., 6., 7., 8., 6., 7., 8.]])
>>> block_prod = x * y_block
>>> print (block_prod)
tensor([[  0.0000,  -0.5733,   3.2195,   0.0000,   0.6953,  -0.7802,  -0.0000,
           0.1618,  -1.3557],
        [  3.8623,  -8.3139,  -0.7554,  -1.4055,  -1.7474,   0.6533,  -1.5901,
          -0.8588,  -0.7709],
        [ -2.4150,   2.8806,   1.7786,  -3.0647, -11.6520,  14.5776,   4.5606,
           4.9710,  -1.8756],
        [  0.0000,   1.2334,   0.9782,  -0.0000,   0.5123,  -3.0827,  -0.0000,
          -0.2145,   2.7712],
        [  3.3300,  -1.4326,  -1.0786,   1.4618,  -3.3095,  -2.4572,   1.3138,
           2.7249,  -4.4954],
        [ -6.9600,   5.3794,   4.0758,   1.8730,  10.2626,  -5.3054,   1.6288,
          -3.1710,   3.4613],
        [ -0.0000,  -2.8993,  -5.5215,  -0.0000,   0.3208,  -2.0569,   0.0000,
          -1.3254,  -3.3558],
        [ -1.1390,   3.0509,   3.4252,   1.6880,   1.8981,  -4.7641,  -3.7648,
           1.9438,  -0.0889],
        [-11.3648,   0.0554,  16.1641,   0.7995,  -2.2332,  13.9231,  -0.6401,
           4.0910,  -0.4107]])

Best.

K. Frank

Thanks again, @KFrank.
I was able to solve it with torch.mul() and torch.repeat().
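Presumably something along these lines (a sketch, since the exact code wasn't posted): Tensor.repeat() tiles y into a (9, 9) block matrix, after which an ordinary element-wise multiply does the job.

```python
import torch

x = torch.randn(9, 9)
y = torch.randn(3, 3)

# Tile y 3x3 times so that each (3, 3) block of y_block is a copy of y,
# then multiply element-wise with x.
y_block = y.repeat(3, 3)
block_prod = torch.mul(x, y_block)  # equivalent to x * y_block
```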

Hi Hdk!

Yes, it looks like Tensor.repeat() is the better solution. It ought to be
faster than torch.kron() – at least a little bit – and it's an established
feature, so you don't need a nightly build to get it.

Best.

K. Frank