Hi, I’m trying to find a good way to implement the incomplete Beta function B_z(a,b), or better yet the regularized incomplete Beta function I_z(a,b), in a way that works with autograd. I need this to compute the CDF of the StudentT distribution (torch.distributions has StudentT, but its cdf is not currently implemented).
I see that there is a JAX implementation, and I’m wondering how easy or hard it would be to use it directly from PyTorch while ideally still getting GPU performance. That said, this call is only needed when computing the loss, so it isn’t a huge bottleneck and could potentially just run on the CPU.
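For context, one CPU fallback I’ve been considering (names and structure are just my sketch, not an existing API) is wrapping `scipy.special.betainc` in a custom `torch.autograd.Function`, supplying only the analytic gradient with respect to z, i.e. d/dz I_z(a,b) = z^(a-1)(1-z)^(b-1)/B(a,b); gradients with respect to a and b are much harder and are left out here:

```python
import torch
from scipy.special import betainc


class BetaInc(torch.autograd.Function):
    """Regularized incomplete beta I_z(a, b); forward via SciPy on CPU.

    Only the gradient w.r.t. z is implemented, using
    d/dz I_z(a, b) = z^(a-1) (1-z)^(b-1) / B(a, b).
    Gradients w.r.t. a and b are returned as None.
    """

    @staticmethod
    def forward(ctx, a, b, z):
        ctx.save_for_backward(a, b, z)
        # SciPy runs on CPU; move the result back to z's device/dtype.
        val = betainc(a.detach().cpu().numpy(),
                      b.detach().cpu().numpy(),
                      z.detach().cpu().numpy())
        return torch.as_tensor(val, dtype=z.dtype, device=z.device)

    @staticmethod
    def backward(ctx, grad_out):
        a, b, z = ctx.saved_tensors
        # log B(a, b) = lgamma(a) + lgamma(b) - lgamma(a + b)
        log_beta = torch.lgamma(a) + torch.lgamma(b) - torch.lgamma(a + b)
        # log of the Beta(a, b) density at z, computed in log-space for stability
        log_pdf = (a - 1) * torch.log(z) + (b - 1) * torch.log1p(-z) - log_beta
        return None, None, grad_out * torch.exp(log_pdf)
```

As a sanity check, I_0.5(2, 2) = 0.5 by symmetry, and the gradient there should equal the Beta(2, 2) density 6·z·(1−z) = 1.5. This works for a loss on z, but it forces a device round-trip per call, which is part of why a native or JAX-derived implementation would be nicer.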
If anyone has thoughts or suggestions on solving this problem, that would be appreciated.