Convert tensorflow code to pytorch

Hi all, I am trying to convert TensorFlow code to PyTorch, but I don't have any prior knowledge of TensorFlow. I would be grateful if someone could help with this situation. Here is the code:

def trial_fun(xs, xt):
    xs = xs - tf.reduce_mean(xs, axis=0)
    xt = xt - tf.reduce_mean(xt, axis=0)
    xs = tf.expand_dims(xs, axis=-1)
    xs = tf.expand_dims(xs, axis=-1)
    xt = tf.expand_dims(xt, axis=-1)
    xt = tf.expand_dims(xt, axis=-1)
    xs_1 = tf.transpose(xs, [0, 2, 1, 3])
    xs_2 = tf.transpose(xs, [0, 2, 3, 1])
    xt_1 = tf.transpose(xt, [0, 2, 1, 3])
    xt_2 = tf.transpose(xt, [0, 2, 3, 1])
    HR_Xs = xs * xs_1 * xs_2                # dim: b x L x L x L
    HR_Xs = tf.reduce_mean(HR_Xs, axis=0)   # dim: L x L x L
    HR_Xt = xt * xt_1 * xt_2
    HR_Xt = tf.reduce_mean(HR_Xt, axis=0)
    return tf.reduce_mean(tf.square(tf.subtract(HR_Xs, HR_Xt)))

The mapping should be:

  • tf.reduce_mean -> tensor.mean
  • tf.expand_dims -> tensor.unsqueeze
  • tf.transpose -> tensor.permute

Let us know if you run into any trouble.
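Putting those mappings together, the PyTorch version could be sketched roughly as follows. This assumes xs and xt are 2-D (batch, L) float tensors, and that the missing xs_1 line in the snippet above mirrors xt_1:

```python
import torch

def trial_fun(xs: torch.Tensor, xt: torch.Tensor) -> torch.Tensor:
    # center both batches along the batch dimension
    xs = xs - xs.mean(dim=0)
    xt = xt - xt.mean(dim=0)
    # add two trailing singleton dims: (b, L) -> (b, L, 1, 1)
    xs = xs.unsqueeze(-1).unsqueeze(-1)
    xt = xt.unsqueeze(-1).unsqueeze(-1)
    # the three permutations broadcast against each other to (b, L, L, L)
    xs_1 = xs.permute(0, 2, 1, 3)
    xs_2 = xs.permute(0, 2, 3, 1)
    xt_1 = xt.permute(0, 2, 1, 3)
    xt_2 = xt.permute(0, 2, 3, 1)
    HR_Xs = (xs * xs_1 * xs_2).mean(dim=0)  # dim: L x L x L
    HR_Xt = (xt * xt_1 * xt_2).mean(dim=0)
    return ((HR_Xs - HR_Xt) ** 2).mean()
```

Since both branches perform identical operations, passing the same tensor for xs and xt should return exactly zero.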


Thank you so much, it works

Is there any mapping list from TensorFlow to PyTorch? (I couldn't find one on Google, but I guess one must exist :zipper_mouth_face:)

If the input tensor is empty, torch.max() will raise an error, whereas tf.reduce_max will return -inf.

Is there some way we can retain the same behavior as TF?

RuntimeError: max(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the ‘dim’ argument.

<tf.Tensor: shape=(), dtype=float32, numpy=-inf>
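One way to retain the TF behaviour is a small wrapper (a hypothetical helper, not a PyTorch API) that returns -inf for empty inputs instead of raising, assuming floating-point tensors:

```python
import torch

def reduce_max(t: torch.Tensor) -> torch.Tensor:
    # mimic tf.reduce_max: return -inf for an empty floating-point
    # tensor instead of raising; otherwise defer to torch.max
    if t.numel() == 0:
        return torch.tensor(float('-inf'), dtype=t.dtype)
    return t.max()
```

For integer dtypes there is no -inf, so the guard above would need a different sentinel (e.g. the dtype's minimum value).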

Could you explain why a -Inf return value makes sense for the max operation on an empty tensor?
I can see why raising an error makes sense, but I'm unsure how the -Inf is defined.

A couple of thoughts, IMHO; maybe they are not strong enough reasons.

Other operations on an empty tensor:

The behaviour of these reduction operations is not consistent with max.

a. For sum the output is zero, so if you feed the result into another reduction, or use the value further, everything stays consistent, without any side effects.

b. Similarly, for max → -Inf means the result is the smallest possible value, so if we use it in, say, another max(max(empty), [1, 2, 3]), the output stays consistent without any side effects. The same logic holds for min → +Inf.

c. I faced this issue while migrating a network/training pipeline from TensorFlow to PyTorch. In such cases, if the behaviour is not identical, there is a high chance the migrated code will have side effects, as we need to modify the code elsewhere to get similar behaviour.
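The identity-element argument above can be sketched in plain Python:

```python
# sum's identity is 0, max's is -inf, min's is +inf: folding the result
# of an empty reduction into a later reduction leaves the answer unchanged
assert sum([]) + sum([1, 2, 3]) == 6
assert max(max([], default=float('-inf')), *[1, 2, 3]) == 3
assert min(min([], default=float('inf')), *[1, 2, 3]) == 1
```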

Thanks for the explanation. I believe PyTorch sticks to the NumPy reference, which shows the same behavior:

import numpy as np

x = np.array([])
print(np.mean(x))  # nan
print(np.sum(x))   # 0.0
np.max(x)          # ValueError: zero-size array to reduction operation maximum which has no identity

In any case, I think you should create a feature request on GitHub, as your use case for a "consistent" interface makes sense.

Thanks for the NumPy reference; now I understand the logic for keeping things this way in PyTorch.

But yes, in my case I will need the behaviour to be consistent with TensorFlow rather than NumPy.