See https://www.tensorflow.org/api_docs/python/tf/keras/backend/batch_dot

Is there a way to translate an arbitrary batch_dot call into Pytorch?

Hi,

You can use matrix multiplication functions like `matmul()`, which is quite general and handles arbitrary batching (`mm` and `bmm` are available as well if you want more specific functions). You should use `.unsqueeze()` to add dimensions of size 1 so that your dot product is actually a matrix multiplication of `1xn @ nx1` matrices.
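A minimal sketch of that suggestion, assuming two batches of vectors of shape `(batch, n)`: unsqueezing turns each dot product into a `1xn @ nx1` matrix multiplication that `bmm` can batch.

```python
import torch

# Batched dot product via bmm, as suggested above:
# unsqueeze (batch, n) tensors to (batch, 1, n) and (batch, n, 1),
# so bmm yields (batch, 1, 1); then squeeze back to (batch,).
a = torch.randn(32, 10)
b = torch.randn(32, 10)
out = torch.bmm(a.unsqueeze(1), b.unsqueeze(2)).squeeze(-1).squeeze(-1)

# Sanity check against the explicit sum of elementwise products.
expected = (a * b).sum(dim=1)
assert torch.allclose(out, expected)
```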

Thanks – I understand that I can use `matmul` or `dot` to re-implement a specific use of `batch_dot`, given fixed argument dimensions and a fixed `axes=…` parameter. What I am looking for is either an equivalent function or code that automatically (and efficiently) handles anything the Keras `batch_dot` function accepts, so that Keras code can be converted to PyTorch without having to examine each specific use in detail.

`tensordot` lets you specify contraction axes.

`einsum` is foolproof in that you spell out the dimensions of each tensor explicitly.

I guess PyTorch is modelled more after NumPy than anything else, though.
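As a hedged illustration of the einsum route: for one common case, `K.batch_dot(x, y, axes=(2, 1))` with `x` of shape `(batch, m, n)` and `y` of shape `(batch, n, p)` contracts axis 2 of `x` with axis 1 of `y` (shapes and axes here are an assumed example, not the general `batch_dot` contract). In PyTorch the same contraction reads:

```python
import torch

# Assumed shapes: x is (batch, m, n), y is (batch, n, p).
# Contract axis 2 of x with axis 1 of y, keeping the batch axis.
x = torch.randn(8, 3, 4)
y = torch.randn(8, 4, 5)
out = torch.einsum('bij,bjk->bik', x, y)  # shape (8, 3, 5)

# For this particular axes choice the contraction coincides with
# a plain batched matrix multiplication.
assert torch.allclose(out, torch.bmm(x, y))
```

Other `axes=` choices change only the subscript string, which is why einsum is a convenient target when translating arbitrary `batch_dot` calls.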

Best regards

Thomas

Thanks. Rumour has it, though, that einsum is very slow in PyTorch – is this true? (I am using PyTorch 1.0.)

It reduces to `bmm`, so there are cases where it copies its inputs and you would notice that; but if there is enough interest, that isn't actually hard to fix. I never imagined it would become as much of a workhorse for some people as it has.

Best regards

Thomas