Equivalent of scipy.special.hermite in PyTorch

Hi All,

I was just wondering if there’s a way to do scipy.special.hermite natively within PyTorch (in order to utilize the GPU). My current approach is to move the tensor to the CPU, evaluate there, and then move the result back to the GPU. This works, but I’d prefer to utilize the GPU as much as possible! If you pass a CUDA tensor directly to scipy.special.hermite you get a "can’t convert cuda to numpy" error:

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Example code is shown below:

import torch
from scipy.special import hermite

n = 5                                          # polynomial order
x = torch.randn(1000, device='cuda')

Hn_func = hermite(n, monic=False)              # scipy returns a numpy poly1d
out = Hn_func(x.cpu().numpy())                 # has to go through the cpu / numpy
out = torch.as_tensor(out, device='cuda')      # and back to the gpu

Thanks in advance!

Hi Alpha!

I’m not aware of pytorch support for Hermite polynomials.

Edit:

I see support for Hermite polynomials in a recent nightly build,
but it doesn’t appear to support autograd yet:

>>> import torch
>>> torch.__version__
'1.13.0.dev20220727'
>>> _ = torch.manual_seed (2022)
>>> x = torch.randn (10, requires_grad = True)
>>> out = torch.special.hermite_polynomial_h (x, 3)
>>> out
tensor([ -2.2419,  -3.6784,  -2.6694,  -5.0147,   2.3845,   4.9064, -71.0367,
         15.3553,  45.3444,   2.9188])
>>> print (out.grad_fn)
None

A numerically-satisfactory old-school method to evaluate Hermite polynomials
is to use their recurrence relations.

In your example, x is a (cuda) tensor of shape [1000] for which you want
to evaluate the Hermite polynomial of order n.

I would imagine writing a loop where you build up a list of the tensors
H_i (x), for i in range (0, n + 1). Autograd will be able to track the
computation graph even though you are using a loop, so backpropagation
will work. I expect that you will get numerically satisfactory results and that
autograd will work even for moderately large values of n.
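Something along these lines, for example (a minimal sketch; hermite_h is just an
illustrative name, and I keep only the last two polynomials rather than the full
list, which is all the recurrence needs):

import torch

def hermite_h (x, n):
    # physicists' Hermite polynomial H_n (x), evaluated elementwise with the
    # recurrence H_{k+1} (x) = 2 x H_k (x) - 2 k H_{k-1} (x),
    # starting from H_0 (x) = 1 and H_1 (x) = 2 x
    h_prev = torch.ones_like (x)          # H_0
    if n == 0:
        return h_prev
    h_curr = 2.0 * x                      # H_1
    for k in range (1, n):
        h_prev, h_curr = h_curr, 2.0 * x * h_curr - 2.0 * k * h_prev
    return h_curr

x = torch.randn (1000, device = 'cuda', requires_grad = True)
out = hermite_h (x, 5)                    # stays on the gpu
out.sum().backward()                      # autograd tracks the loop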

You can double-check numerical stability by comparison with scipy or by
repeating the computation in double-precision.
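For example (again just a sketch, reusing the hermite_h helper from the loop
above; the cuda tensor has to visit the cpu for the scipy comparison):

import torch
from scipy.special import hermite

n = 25
x = torch.randn (1000, device = 'cuda')

out_single = hermite_h (x, n)              # float32 on the gpu
out_double = hermite_h (x.double(), n)     # float64 on the gpu
out_scipy = torch.as_tensor (hermite (n) (x.cpu().double().numpy()), device = 'cuda')

print ((out_single.double() - out_double).abs().max())
print ((out_double - out_scipy).abs().max())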

For large enough n it may end up being cheaper to use scipy on the cpu
than to run the gpu loop. If that is the case and you need gradients, you
might consider wrapping scipy’s hermite() in a custom autograd Function
and implementing the Function’s backward() method (presumably also
using scipy).
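
A minimal sketch of what such a Function might look like (ScipyHermite is just
an illustrative name; the backward pass uses the standard identity
H_n' (x) = 2 n H_{n-1} (x)):

import torch
from scipy.special import hermite

class ScipyHermite (torch.autograd.Function):
    # wraps scipy's (physicists', non-monic) hermite() so that forward runs
    # on the cpu via numpy, while gradients still flow through autograd
    @staticmethod
    def forward (ctx, x, n):
        ctx.save_for_backward (x)
        ctx.n = n
        out = hermite (n) (x.detach().cpu().numpy())
        return torch.as_tensor (out, dtype = x.dtype, device = x.device)

    @staticmethod
    def backward (ctx, grad_output):
        x, = ctx.saved_tensors
        n = ctx.n
        if n == 0:
            return torch.zeros_like (x), None
        # H_n' (x) = 2 n H_{n-1} (x)
        d = hermite (n - 1) (x.detach().cpu().numpy())
        d = torch.as_tensor (d, dtype = x.dtype, device = x.device)
        return grad_output * 2.0 * n * d, None    # no gradient for the order n

x = torch.randn (1000, device = 'cuda', requires_grad = True)
out = ScipyHermite.apply (x, 5)
out.sum().backward()                              # gradients computed via scipy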

Best.

K. Frank


Hi @KFrank!

Thanks for the detailed response! I had a feeling there might be a way to do it natively within PyTorch ops but it’s good to hear there’s a valid function in the nightly build!

I don’t need the gradient at the moment, but it’s something that might be useful in the future. I’m sure I can define it via a custom autograd Function if that’s needed! (or wait until it’s updated in a future PyTorch version)

Thanks!