# Erf and erfinv functions in PyTorch

I have a torch.FloatTensor, say A, of size (u, v, x, y), and I need to calculate its erf and erfinv. Is there any way to do this in PyTorch? Please help me.


What is erf (and what is its inverse)? Is this an abbreviation?

Erf is the error function, also known as the Gauss error function (https://en.wikipedia.org/wiki/Error_function), and erfinv is its inverse.
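
For reference, it is defined as $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt$.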

I’m actually interested in this as well; could you give us some updates if you find a solution?

```
import torch
from scipy import special
## reference - https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.special.erf.html

u = 1
v = 2
x = 3
y = 4

A = torch.randn(u,v,x,y)

special.erf( A.numpy() )
special.erfinv( A.numpy())
```
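
If the rest of the pipeline expects tensors, the NumPy results can be wrapped back (a small follow-up sketch; note that this route bypasses autograd):

```
# wrap the NumPy results back into torch tensors
# (erfinv is only finite for inputs in the open interval (-1, 1))
erf_A = torch.from_numpy(special.erf(A.numpy()))
erfinv_A = torch.from_numpy(special.erfinv(A.numpy()))
```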

Thanks Ajay! This is working fine.

Nope, on second thought it does not work for me, because I also need the error function as an autograd function: I would like to insert it into the computation graph and have PyTorch differentiate it automatically!

If the function is not supported for now in PyTorch, may I define my own custom autograd function?

(actually, the equivalent of the TF function: https://www.tensorflow.org/versions/r0.11/api_docs/python/math_ops/basic_math_functions#erf)

Thank you @Soumith and all devs!


If you can do it with `torch` tensors, then you can define your own custom autograd function.

It might be a little tedious, but we’ve all written our own custom functions and modules - that’s a lot of the fun.
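
For instance, here is a minimal sketch of what such a custom function could look like (using the current static-method `torch.autograd.Function` API with scipy for the forward pass; the class name `Erf` is just illustrative):

```
import math
import torch
from scipy import special

class Erf(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # forward pass on the CPU via scipy
        return torch.from_numpy(special.erf(x.detach().cpu().numpy()))

    @staticmethod
    def backward(ctx, grad_output):
        # analytic derivative: d/dx erf(x) = 2/sqrt(pi) * exp(-x^2)
        x, = ctx.saved_tensors
        return grad_output * 2.0 / math.sqrt(math.pi) * torch.exp(-x * x)

x = torch.randn(3, requires_grad=True)
Erf.apply(x).sum().backward()  # x.grad is now 2/sqrt(pi) * exp(-x**2)
```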

It’s definitely worth doing; once you’ve done one, it’s a lot less scary. Good luck!

Hello @cerisara,

if you can live with an approximation, you could use the following (it’s the formula with the square root here: https://en.wikipedia.org/wiki/Error_function#Approximation_with_elementary_functions):

```
import torch
import numpy

# 'a' constant from the Wikipedia approximation with elementary functions
a_for_erf = 8.0/(3.0*numpy.pi)*(numpy.pi-3.0)/(4.0-numpy.pi)

def erf_approx(x):
    return torch.sign(x)*torch.sqrt(1-torch.exp(-x*x*(4.0/numpy.pi+a_for_erf*x*x)/(1+a_for_erf*x*x)))

def erfinv_approx(x):
    b = -2/(numpy.pi*a_for_erf)-torch.log(1-x*x)/2
    return torch.sign(x)*torch.sqrt(b+torch.sqrt(b*b-torch.log(1-x*x)/a_for_erf))
```

I must admit I don’t have a particular reason to use x*x for x**2 except that I copy-pasted it from a C version I typed up a couple of years ago.
If you feed `Variable`s, that should work as well.

To get an impression of how they look, you could plot it against the scipy.special functions like

```
from matplotlib import pyplot
%matplotlib inline
import numpy
import scipy.special

x = numpy.linspace(-2,2,100)
pyplot.subplot(1,2,1)
pyplot.title('erf')
pyplot.plot(x,erf_approx(torch.from_numpy(x)).numpy(), label="approx")
pyplot.plot(x,scipy.special.erf(x),'--', label="scipy")
pyplot.legend()
pyplot.subplot(1,2,2)
y = scipy.special.erf(x)
pyplot.title('erfinv')
pyplot.plot(y,erfinv_approx(torch.from_numpy(y)).numpy(), label="approx")
pyplot.plot(y,scipy.special.erfinv(y),'--', label="scipy")
pyplot.legend()
```

(the `%matplotlib` line is for Jupyter).

Best regards

Thomas


Hi @tom,

nice work !!!

I found a very simple implementation of the Sinkhorn-Knopp matrix normalisation that we were looking at a while ago. It’s in MATLAB, but it’s very understandable.

Hopefully I’ll try to implement it in `torch` this weekend - that should be a lot of fun, and useful too.

All the best,

Aj


Great!

Thank you, very useful!

One use case I needed was to sample from a Gaussian VAE and do interpolation in uniformly distributed variables. That works because u = erfinv(v) follows a normal distribution (up to scaling) if v follows a uniform distribution, via a technique called inverse transform sampling.
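
A minimal sketch of that trick (assuming `torch.erfinv` is available, as in the PR linked below; the endpoint clamp and the constants are my additions):

```
import math
import torch

# inverse transform sampling: if v ~ Uniform(0, 1), then
# sqrt(2) * erfinv(2*v - 1) ~ N(0, 1), since the standard normal
# CDF is Phi(u) = (1 + erf(u / sqrt(2))) / 2
v = torch.rand(100000).clamp(1e-6, 1 - 1e-6)  # avoid erfinv(+-1) = +-inf
u = math.sqrt(2.0) * torch.erfinv(2.0 * v - 1.0)
print(u.mean().item(), u.std().item())        # close to 0 and 1
```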

These functions are now in master: https://github.com/pytorch/pytorch/pull/2799
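
With a recent enough build, the tensor from the original question can then be handled directly (a quick sketch):

```
import torch

A = torch.randn(1, 2, 3, 4)   # a (u, v, x, y)-sized tensor as in the question
B = torch.erf(A)              # elementwise error function
C = torch.erfinv(B)           # elementwise inverse; recovers A up to rounding
```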

Awesome, thank you!

Best regards

Thomas


Can you point me to the current code for erf and erfinv? I want to see how they and their gradients have been approximated.
Many thanks!
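
For reference, the analytic derivatives that any backward pass has to compute are

$$\frac{d}{dx}\,\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\, e^{-x^2}, \qquad \frac{d}{dx}\,\mathrm{erfinv}(x) = \frac{\sqrt{\pi}}{2}\, e^{\left(\mathrm{erfinv}(x)\right)^{2}}.$$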