Exponentiating a float to sparse tensor

Is there a way of doing torch.pow(a, A), where a is a float and A is a sparse tensor? Given that we know a^0 = 1, how can I do this operation faster, i.e. without computing a^Aij for every entry where Aij = 0?
I'm currently using my GPU, so my intention is to do this in parallel.
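
For illustration, here is a minimal sketch of the kind of thing I have in mind (the indices, values, and shape are just placeholders): apply pow only to the stored values of the coalesced COO tensor, and rely on a^0 = 1 for the implicit zeros.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder sparse COO tensor A and scalar base a
indices = torch.tensor([[0, 1, 2], [2, 0, 1]])
values = torch.tensor([2.0, 3.0, 0.5])
A = torch.sparse_coo_tensor(indices, values, (3, 3), device=device)
a = 0.9

# Exponentiate only the stored (nonzero) values in parallel on the GPU;
# the implicit zeros are never touched.
A = A.coalesce()
powered_values = torch.pow(a, A.values())
result = torch.sparse_coo_tensor(A.indices(), powered_values, A.size())
```

The catch is that the unfilled positions of `result` are still implicit zeros, whereas mathematically they should be 1. Is there a better way to handle this than tracking that offset separately (or materializing a dense tensor)?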