Why does `entropy()` in distributions use the natural logarithm?

It seems that many tools in scientific computing, NumPy among them, default to the natural logarithm, so the entropy is measured in nats rather than bits. As a result, the value need not lie between 0 and 1, which is what base-2 Shannon entropy gives for a binary distribution.

I am quite curious what the common reason might be, given that, mathematically, the two conventions differ only by a constant factor: $H_2(X) = H_e(X) / \ln 2$.
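
To illustrate what I mean (a minimal sketch using `scipy.stats.entropy`, which defaults to the natural log but accepts a `base` argument):

```python
import numpy as np
from scipy.stats import entropy

p = [0.5, 0.5]  # fair coin

h_nats = entropy(p)          # natural log (default) -> ln 2 ~ 0.693 nats
h_bits = entropy(p, base=2)  # Shannon entropy in bits -> 1.0

# The two values differ only by the constant factor ln(2)
assert np.isclose(h_bits, h_nats / np.log(2))
```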