Hi,
I am trying to implement the power mean in PyTorch, following this paper.
It is pretty straightforward to do in NumPy:
import numpy as np

def gen_mean(vals, p):
    p = float(p)
    return np.power(
        np.mean(
            np.power(
                np.array(vals, dtype=complex),
                p),
            axis=0),
        1 / p
    )
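As a quick sanity check of the NumPy version (repeating the definition here so the snippet runs standalone), p = 1 should reduce to the arithmetic mean and p = -1 to the harmonic mean:

```python
import numpy as np

def gen_mean(vals, p):
    # Power (generalized) mean; the complex dtype keeps negative values
    # raised to fractional powers from producing NaNs.
    p = float(p)
    return np.power(
        np.mean(np.power(np.array(vals, dtype=complex), p), axis=0),
        1 / p,
    )

print(gen_mean([1.0, 2.0, 3.0], 1))   # (2+0j), the arithmetic mean
print(gen_mean([1.0, 2.0, 3.0], -1))  # ~(1.6364+0j), the harmonic mean
```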
Or in TensorFlow:
import tensorflow as tf

def p_mean(values, p, n_toks):
    n_toks = tf.cast(tf.maximum(tf.constant(1.0), n_toks), tf.complex64)
    p_tf = tf.constant(float(p), dtype=tf.complex64)
    values = tf.cast(values, dtype=tf.complex64)
    res = tf.pow(
        tf.reduce_sum(
            tf.pow(values, p_tf),
            axis=1,
            keepdims=False
        ) / n_toks,
        1.0 / p_tf
    )
    return tf.real(res)
Source
However, since PyTorch does not support complex numbers, this seems far from trivial.
One example of the limitation: the geometric mean of negative numbers does not seem to be possible in PyTorch.
Am I missing something?
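For the record, the closest I have in PyTorch is a sketch that only covers strictly positive inputs, where no complex arithmetic is needed (the name `p_mean_real` and its signature are my own, and p must be nonzero):

```python
import torch

def p_mean_real(values, p, dim=0):
    # Power mean along `dim`, valid only for strictly positive inputs
    # and nonzero p -- exactly the restriction the complex versions avoid.
    p = float(p)
    return values.pow(p).mean(dim=dim).pow(1.0 / p)

x = torch.tensor([1.0, 2.0, 3.0])
print(p_mean_real(x, 1))   # tensor(2.), the arithmetic mean
print(p_mean_real(x, -1))  # ~tensor(1.6364), the harmonic mean
```

This obviously does not help with negative values or with p = 0 (the geometric mean), which is the part I am stuck on.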