I was just wondering if there’s a way to do
scipy.special.hermite within PyTorch natively (in order to utilize the GPU). My current approach is to move the tensor to the CPU, evaluate there, and then move the result back to the GPU. Although this works, I’d prefer to stay on the GPU as much as possible! As it stands, you can’t convert a CUDA tensor to numpy, so passing it to scipy directly raises:
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
An example code is shown below (with a concrete n and x so it runs end to end):
import torch
from scipy.special import hermite

n = 3                                      # polynomial order
x = torch.randn(10, device="cuda")         # input lives on the GPU

Hn_func = hermite(n, monic=False)          # scipy returns a numpy poly1d
out = Hn_func(x.cpu().numpy())             # evaluate on the CPU
out = torch.as_tensor(out, device="cuda")  # move the result back
Thanks in advance!
I’m not aware of pytorch support for Hermite polynomials.
Update: I see support for Hermite polynomials in a recent nightly build,
but it doesn’t appear to support autograd yet:
>>> import torch
>>> _ = torch.manual_seed (2022)
>>> x = torch.randn (10, requires_grad = True)
>>> out = torch.special.hermite_polynomial_h (x, 3)
>>> out
tensor([ -2.2419,  -3.6784,  -2.6694,  -5.0147,   2.3845,   4.9064, -71.0367,
         15.3553,  45.3444,   2.9188])
>>> print (out.grad_fn)
None
A numerically-satisfactory old-school method to evaluate Hermite polynomials
is to use their recurrence relations.
In your example, x is a (cuda) tensor for which you want
to evaluate the Hermite polynomial of order n.
I would imagine writing a loop where you build up a list of the tensors
H_i (x) for i in range (0, n + 1). Autograd will be able to track the
computation graph even though you are using a loop, so backpropagation
will work. I expect that you will get numerically satisfactory results and that
autograd will work even for moderately large values of n.
You can double-check numerical stability by comparison with scipy or by
repeating the computation in double precision.
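As a rough sketch of that loop (assuming the physicists’ convention, H_{k+1}(x) = 2 x H_k(x) − 2 k H_{k−1}(x), which matches both scipy.special.hermite and torch.special.hermite_polynomial_h; hermite_h is just a name I made up):

```python
import torch

def hermite_h(x: torch.Tensor, n: int) -> torch.Tensor:
    """Evaluate the physicists' Hermite polynomial H_n(x) elementwise
    via the recurrence H_{k+1}(x) = 2 x H_k(x) - 2 k H_{k-1}(x)."""
    hs = [torch.ones_like(x)]      # H_0(x) = 1
    if n >= 1:
        hs.append(2.0 * x)         # H_1(x) = 2x
    for k in range(1, n):
        hs.append(2.0 * x * hs[k] - 2.0 * k * hs[k - 1])
    return hs[n]
```

Because every H_i is built from x with ordinary tensor ops, the whole list stays on whatever device x lives on, and if x has requires_grad = True, backward() flows through the loop automatically.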
For large enough n it may end up being cheaper to use scipy on the cpu
than to run the gpu loop. If that is the case – and you need gradients – you
might consider wrapping scipy’s hermite() in a custom autograd Function
and (presumably using scipy) implementing its backward() method.
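A minimal sketch of such a wrapper (the class name is mine; the backward pass uses the identity H_n'(x) = 2 n H_{n−1}(x), again evaluated with scipy on the CPU):

```python
import torch
from scipy.special import hermite

class ScipyHermite(torch.autograd.Function):
    """Evaluate H_n(x) with scipy on the CPU, moving data off and back
    onto x's device, with an explicit backward pass."""

    @staticmethod
    def forward(ctx, x, n):
        ctx.save_for_backward(x)
        ctx.n = n
        out = hermite(n)(x.detach().cpu().numpy())
        return torch.as_tensor(out, dtype=x.dtype, device=x.device)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        n = ctx.n
        if n == 0:
            return torch.zeros_like(x), None   # H_0 is constant
        # H_n'(x) = 2 n H_{n-1}(x)
        d = 2.0 * n * hermite(n - 1)(x.detach().cpu().numpy())
        d = torch.as_tensor(d, dtype=x.dtype, device=x.device)
        return grad_out * d, None              # no gradient w.r.t. n
```

Usage would be out = ScipyHermite.apply(x, 3); the second return value of backward is None because the integer order n is not differentiable.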
Thanks for the detailed response! I had a feeling there might be a way to do it natively within PyTorch ops but it’s good to hear there’s a valid function in the nightly build!
I don’t need the gradient at the moment, but it’s something that might be useful in the future. I’m sure I can define it via a custom autograd Function if that’s needed! (or wait until it’s updated in a future PyTorch version)