# Don't understand why only Tensors of floating point dtype can require gradients

```python
import torch
from torch.autograd.functional import jacobian

def get_f(x):
    t1 = x**2
    t2 = x**3
    t3 = x**4
    f = 3 * (t1 + t2 + t3)
    return f

x = torch.arange(0, 3)
J = jacobian(get_f, x).detach().numpy()
```

What I want is for `J` to be a 3x3 Jacobian matrix:

```
df1/dx1   df1/dx2   df1/dx3
df2/dx1   df2/dx2   df2/dx3
df3/dx1   df3/dx2   df3/dx3
```

But I encounter the error:

```
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
```

I believe `requires_grad` is `True` by default.

Is there no way to differentiate an int64 tensor (the default dtype for `torch.arange` with integer arguments), or am I doing something wrong?

I wouldn’t know how, or if, gradients are defined on “integer functions”. E.g. just take a simple example of `f(x) = x**2`.
For floating point numbers you would see a smooth parabola, and you can draw the gradient directly into the plot.

But if only integers are used, this simple function would just be a set of isolated dots. Would this mean that the gradient is everywhere 0 besides at the dots (where it would then be +/- Inf)?
If so, then I don’t think it would make sense to allow Autograd to accept integer values.
In case you are expecting integer outputs, it might be better to round the (floating point) result.
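To make the snippet from the question work along these lines, one option (my own sketch, not from the thread) is to create `x` with a floating point dtype before calling `jacobian`:

```python
import torch
from torch.autograd.functional import jacobian

def get_f(x):
    return 3 * (x**2 + x**3 + x**4)

# Use a floating point dtype so autograd can track gradients
x = torch.arange(0, 3, dtype=torch.float32)
J = jacobian(get_f, x)
print(J)  # 3x3 matrix; diagonal, since each output depends only on its own input
```

The diagonal entries are `3 * (2*x + 3*x**2 + 4*x**3)` evaluated at each input, and the off-diagonal entries are zero because the function is elementwise.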

Also, I’m sure other users such as @KFrank, @tom, and @albanD might have a better explanation.


Thanks! Another follow-up question, and maybe I am wrong: is the AD in PyTorch doing finite differences, so that it needs the function to be continuous? I am new to AD, so a silly question: why can’t it perform `d(x**2)/dx = 2x`?

To comment on the `d(x**2)/dx = 2x` question: AD isn’t computing a symbolic expression for the gradient. It’s basically using the chain rule to calculate your derivative directly, at a concrete point. If you want to read a paper on it, I’d recommend *Automatic Differentiation in Machine Learning: a Survey*, which explains why AD is neither symbolic differentiation nor numerical (finite-difference) differentiation.
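As a quick illustration (my own example, not from the survey): at a concrete point, autograd applies the chain rule and returns the exact derivative value, not a finite-difference approximation:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x**2          # forward pass records the operation
y.backward()      # backward pass applies the chain rule
print(x.grad)     # tensor(6.) -- exactly 2*x evaluated at x = 3
```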

If you do want a function that computes the gradient, you could use the `functorch` library, although that’s a bit more complicated than standard PyTorch. An example for your function would be something like this:

```python
import torch
from functorch import grad, vmap

def f(x):
    return x**2

x = torch.arange(1, 5, dtype=torch.float32)  # dummy input (note the float dtype)
dfdx = vmap(grad(f))(x)  # per-element gradient: tensor([2., 4., 6., 8.])
```