Suppose A is a sparse tensor and B is a dense tensor. A's size is (12, 250, 92, 768) and B's size is (250, 768, 768). How can I multiply the two tensors?

Hi Weikang!

Pytorch supports only certain operations between sparse and dense tensors. But you can always convert your sparse tensor to dense:

```
dense_result = my_multiply_op (A.to_dense(), B)
```

Further, from the shapes of your tensors, it’s not clear how you wish to “multiply” them together. Could you post explicit pytorch code that shows – for two dense tensors of your given shapes – how you would like them multiplied?

Best.

K. Frank

Hi! I posted the question to ask about sparse operations, so `to_dense` is not what I want.

The multiplication behavior is the same as `A @ B`, which multiplies over the 3rd and 4th dimensions and broadcasts over the 1st dimension.
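In explicit dense pytorch code (with the sizes scaled down here so it runs quickly – the real shapes are (12, 250, 92, 768) and (250, 768, 768)), that is:

```
import torch

# scaled-down stand-ins for A (12, 250, 92, 768) and B (250, 768, 768)
A = torch.randn(2, 5, 3, 4)
B = torch.randn(5, 4, 4)

# matmul over the last two dimensions; B is broadcast
# across A's leading dimension of size 2
C = A @ B
print(C.shape)   # torch.Size([2, 5, 3, 4])
```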

Hi Weikang!

Having ruled out using `to_dense()`, you will have to drill down into `A.indices()` and write your own sparse-reshape function.

At issue is that most of pytorch’s *tensor* (as opposed to matrix) multiplication functions do not support sparse tensors. `torch.bmm()` does, but it only supports 3-d tensors, so it doesn’t quite fit your use case.
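As a small illustration (shapes chosen arbitrarily), `bmm()` accepts a sparse 3-d tensor as its first argument:

```
import torch

M = torch.randn(4, 3, 5)
M[M.abs() < 1.0] = 0.0   # make it mostly zero
S = M.to_sparse()        # 3-d sparse COO tensor

D = torch.randn(4, 5, 6)
out = torch.bmm(S, D)    # sparse @ dense batch matmul; result is dense
print(out.shape)         # torch.Size([4, 3, 6])
```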

Furthermore, pytorch also does not support reshaping sparse tensors. In your use case, if you write a sparse-reshape function, you could use the following expression:

```
torch.bmm (
    my_sparse_reshape (A, (12 * 250, 92, 768)),
    B.expand (12, 250, 768, 768).reshape (12 * 250, 768, 768)
).reshape (12, 250, 92, 768)
```

(Note the use of `expand()` to perform the broadcasting that `bmm()` does not do automatically.)

This would preserve the benefits of sparsity in the tensor multiplication (though the final result would still be dense), but you would have to write some not-fully-trivial code to implement it.
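One possible sketch of such a sparse-reshape function (the name `my_sparse_reshape` and the flatten/unflatten approach are just one way to do it): convert each nonzero’s multi-dimensional index to a flat offset under the old shape, then unflatten that offset under the new shape. Shown here with the full expression applied to scaled-down shapes:

```
import torch

def my_sparse_reshape (t, shape):
    # row-major strides for a given shape
    def strides (shape):
        s = [1]
        for d in reversed (shape[1:]):
            s.append (s[-1] * d)
        return torch.tensor (list (reversed (s)), dtype = torch.long)

    t = t.coalesce()
    # flat offset of each nonzero under the old shape
    flat = (t.indices() * strides (t.shape).unsqueeze (1)).sum (dim = 0)
    # unflatten each offset under the new shape
    new_shape = torch.tensor (shape, dtype = torch.long)
    new_idx = flat.unsqueeze (0) // strides (shape).unsqueeze (1) % new_shape.unsqueeze (1)
    return torch.sparse_coo_tensor (new_idx, t.values(), shape).coalesce()

# quick check with scaled-down stand-ins for your shapes
A_dense = torch.randn (2, 5, 3, 4)
A_dense[A_dense.abs() < 1.0] = 0.0
A = A_dense.to_sparse()
B = torch.randn (5, 4, 4)

out = torch.bmm (
    my_sparse_reshape (A, (2 * 5, 3, 4)),
    B.expand (2, 5, 4, 4).reshape (2 * 5, 4, 4)
).reshape (2, 5, 3, 4)
```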

Best.

K. Frank