Using the univariate spline from scipy inside pytorch

Hi,

I want to use the UnivariateSpline class from scipy with PyTorch, but I have not been able to do so while keeping autograd working. I have been looking at Creating Extensions Using numpy and scipy — PyTorch Tutorials 1.8.0 documentation to get the scipy code working in PyTorch, but I am not sure that the correct gradient is being returned. This is the code I have so far:

import numpy as np
import torch
from scipy.interpolate import UnivariateSpline
from torch.autograd import Function

class SPLFunction(Function):
   
    @staticmethod
    def forward(ctx, bin, x):
        ctx.save_for_backward(bin, x)

        x = x.detach().numpy()
        bin = bin.detach().numpy()
        
        # x-positions of the bin values used to fit the spline
        bin_steps = np.arange(2, 22+20/63, 20/63)[:-1]
        spline_curve = UnivariateSpline(bin_steps, bin, ext=3)
        spline_curve.set_smoothing_factor(0.5)
        
        result = spline_curve(x)
        return torch.as_tensor(result)

    @staticmethod
    def backward(ctx, grad_output):
        bin, x = ctx.saved_tensors

        # Work with plain numpy arrays; the gradient is assembled manually below
        bin = bin.detach().numpy()
        x = x.detach().numpy()

        # Rebuild the same spline as in forward
        bin_steps = np.arange(2, 22+20/63, 20/63)[:-1]
        spline_curve = UnivariateSpline(bin_steps, bin, ext=3)
        spline_curve.set_smoothing_factor(0.5)

        # derivative() returns a new spline for the first derivative; evaluate it at x
        # and apply the chain rule with the incoming gradient
        df = spline_curve.derivative()(x)
        df = torch.from_numpy(df) * grad_output

        return df

def apply_spline(bin, x):
    return SPLFunction.apply(bin, x)

Thank you for your help

Hi,

That looks mostly good.
A few updates: the backward is not differentiable (since it goes through numpy), so you can mark it as such with @once_differentiable. The backward should return as many values as the forward has inputs; if an input is not differentiable, return None for it. You can also use the gradcheck tool to verify the gradient via finite differences (if your function is smooth and differentiable).
That would look like:

import numpy as np
import torch
from scipy.interpolate import UnivariateSpline

from torch.autograd import Function, gradcheck
from torch.autograd.function import once_differentiable

class SPLFunction(Function):
   
    @staticmethod
    def forward(ctx, bin, x):
        ctx.save_for_backward(bin, x)

        x = x.detach().numpy()
        bin = bin.detach().numpy()
        
        # x-positions of the bin values used to fit the spline
        bin_steps = np.arange(2, 22+20/63, 20/63)[:-1]
        spline_curve = UnivariateSpline(bin_steps, bin, ext=3)
        spline_curve.set_smoothing_factor(0.5)
        
        result = spline_curve(x)
        return torch.as_tensor(result)

    @staticmethod
    @once_differentiable
    def backward(ctx, grad_output):
        bin, x = ctx.saved_tensors

        # Work with plain numpy arrays; the gradient is assembled manually below
        bin = bin.detach().numpy()
        x = x.detach().numpy()

        # Rebuild the same spline as in forward
        bin_steps = np.arange(2, 22+20/63, 20/63)[:-1]
        spline_curve = UnivariateSpline(bin_steps, bin, ext=3)
        spline_curve.set_smoothing_factor(0.5)

        # derivative() returns a new spline for the first derivative; evaluate it at x
        # and apply the chain rule with the incoming gradient
        df = spline_curve.derivative()(x)
        df = torch.from_numpy(df) * grad_output

        return None, df # Bin is not differentiable?

def apply_spline(bin, x):
    return SPLFunction.apply(bin, x)

gradcheck(apply_spline, (example_bin, example_x))
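
Note that gradcheck compares the analytical gradient against finite differences, so the inputs should be double precision, and only inputs with requires_grad=True are checked. For example, example_bin and example_x could look something like this (made-up values, reusing the imports, the apply_spline function, and the same bin positions as in forward):

bin_steps = np.arange(2, 22+20/63, 20/63)[:-1]                  # same x-positions as in forward
example_bin = torch.rand(len(bin_steps), dtype=torch.float64)   # values to fit; no grad, so backward returns None for it
example_x = torch.linspace(3.0, 21.0, 25, dtype=torch.float64, requires_grad=True)  # query points inside the fitted range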

Thank you for the help,

I was thinking that the differentiable part would come from differentiating the spline curve, so I am a bit confused about why you would return None.

Ah, I added that None somewhat arbitrarily, just to make sure the backward returns two things.
But if the forward takes bin and x as inputs, then the backward should return the gradient wrt bin and the gradient wrt x, in that order.
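
To make that concrete, here is a small toy Function (not the spline code, just an illustration) whose forward takes two inputs; the backward returns exactly one value per input, in the same order, with None for the input we treat as non-differentiable:

import torch
from torch.autograd import Function

class ScaleByConstant(Function):
    # Toy example: out = c * x, where c is treated as a non-differentiable constant

    @staticmethod
    def forward(ctx, c, x):
        ctx.save_for_backward(c)
        return c * x

    @staticmethod
    def backward(ctx, grad_output):
        c, = ctx.saved_tensors
        # One return value per forward input, in the same order (c, x):
        # None for c, and d(out)/dx = c times the incoming gradient for x.
        return None, grad_output * c

In your case it is the same pattern: bin gets None (for now), and x gets the spline derivative times grad_output.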

Are there any benefits to doing that? It seems like this only detects unsupported double backprop.

The only benefit is that you are sure to get a clear error if you try to use double backward later, instead of silently wrong results or a confusing error in the second backward.

So it is more appropriate for public / library code then; however, I'd point out that @once_differentiable is undocumented, and hence seriously underused…

Yes, there is a plan to rework the custom Function doc to include these tips.