Does dynamic indexing cause slow performance?

I have a TensorFlow project that I want to translate to PyTorch. The TensorFlow code builds an index from the output of one layer and uses that index to slice a tensor, more or less like this: `index = tf.reduce_sum(tf.cast(output < num, tf.int32), axis=0); return a @ b[index]`. Would this work in PyTorch? Would it cause performance issues? Thanks in advance!
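
For concreteness, here is a minimal PyTorch sketch of what I have in mind (the shapes, values, and the threshold `num` are made up for illustration; the real ones come from my model):

```python
import torch

# Made-up stand-ins for the real tensors in my model.
output = torch.tensor([0.1, 0.9, 0.2, 0.8])  # output of a previous layer
num = 0.5                                    # threshold
a = torch.randn(3, 4)
b = torch.randn(4, 4, 5)                     # tensor to slice by the computed index

# TF: index = tf.reduce_sum(tf.cast(output < num, tf.int32), axis=0)
index = (output < num).to(torch.int32).sum(dim=0)  # -> tensor(2)

# TF: return a @ b[index]
result = a @ b[index]                        # (3, 4) @ (4, 5) -> (3, 5)
```

Is indexing `b` with a tensor computed at runtime like this something PyTorch handles efficiently, or does it force a sync/copy?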