np.gradient equivalent

Hi,

Is there an np.gradient equivalent in PyTorch?


Hi,

I don’t think there is, as we don’t usually deal with sampled functions (if that’s what I understand correctly from the numpy doc).
That being said, if you don’t plan on doing backprop through this op, you can simply do: torch.from_numpy(np.gradient(your_f.numpy())).
If you need gradients to flow through it, then you’ll have to check how numpy implements this function and reproduce it with PyTorch’s ops.
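For the 1-D case, np.gradient’s scheme is central differences in the interior and one-sided differences at the edges, which is easy to reproduce with differentiable tensor ops. A minimal sketch, assuming unit-style scalar spacing (the name `gradient_1d` and the `dx` parameter are just for illustration, not a PyTorch API):

```python
import torch

def gradient_1d(f: torch.Tensor, dx: float = 1.0) -> torch.Tensor:
    """Differentiable analogue of np.gradient for a 1-D tensor.

    Central differences in the interior, first-order one-sided
    differences at the two boundary points (np.gradient's default
    edge_order=1 behaviour).
    """
    interior = (f[2:] - f[:-2]) / (2 * dx)  # central difference
    first = (f[1] - f[0]) / dx              # forward difference at the start
    last = (f[-1] - f[-2]) / dx             # backward difference at the end
    return torch.cat([first.unsqueeze(0), interior, last.unsqueeze(0)])
```

Since this is built only from slicing and arithmetic, autograd can backprop through it, and it runs on whatever device `f` lives on.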


Thought as much. I can do that, but I assume it will bring the tensor from the GPU to the CPU and then have to put it back, which doesn’t seem very efficient. I’ll see if I can implement the same thing. I want the velocity at each sample, but I was being a bit lazy since np.gradient accomplishes almost the same thing (unlike np.diff, which doesn’t) and preserves the shape for my purposes.

I’ll see if I can do something with it.
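For reference, the numpy round-trip mentioned above would look roughly like this; the `.cpu()`/`.to(t.device)` transfers are exactly the overhead discussed, and no autograd history survives the numpy call:

```python
import numpy as np
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.linspace(0., 1., 5, device=device)

# Copy to CPU, differentiate with numpy, copy back to the original device.
# Gradients cannot flow through this, so it is only usable outside backprop.
g = torch.from_numpy(np.gradient(t.cpu().numpy())).to(t.device)
```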

Update: There is now one here
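That would be torch.gradient, available in recent PyTorch versions. Like np.gradient, it returns one tensor per differentiated dimension (its `spacing`, `dim`, and `edge_order` keywords mirror numpy’s parameters):

```python
import torch

t = torch.tensor([1., 2., 4., 7., 11.])

# torch.gradient returns a tuple with one tensor per differentiated
# dimension; for a 1-D input the tuple has a single element.
(g,) = torch.gradient(t)
```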