I am trying to train a model on audio signals.

I loaded the audio signal with `librosa.load`, multiplied it by 32768 to convert it to integer values, and changed the dtype from float32 to int16.
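The conversion described above can be sketched like this (a synthetic signal stands in for `librosa.load`, which returns float32 samples in [-1.0, 1.0], so the snippet runs without an audio file):

```python
import numpy as np

# Synthetic stand-in for librosa.load("audio.wav", sr=None),
# which returns float32 samples in [-1.0, 1.0].
signal = np.sin(np.linspace(0, 2 * np.pi, 8)).astype(np.float32)

# Scale to the int16 range. Caveat: int16 tops out at 32767, so a
# full-scale sample of exactly 1.0 would wrap; clipping guards against that.
signal_int16 = np.clip(signal * 32768, -32768, 32767).astype(np.int16)
print(signal_int16.dtype)  # int16
```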

I initialized a weight in a channel-normalization function (I didn't use `nn.LayerNorm`) with int16.

When I tried to train, I received the error below:

RuntimeError: Only Tensors of floating point dtype can require gradients

Can't we use autograd with int16?

Will changing all the parameters to int16 solve this issue? If so, how do I change the weights to int16?

Hello Sasank!

[quote=“Sasank_Kottapalli, post:1, topic:78381, full:true”]

RuntimeError: Only Tensors of floating point dtype can require gradients

Can't we use autograd with int16?

[/quote]

No. `int16`s (as well as other integer types) are discrete, and so are not usefully differentiable. So you can't use them with autograd.
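A minimal reproduction of the error (the exact message wording varies across PyTorch versions):

```python
import torch

t = torch.zeros(3, dtype=torch.int16)
try:
    # autograd refuses to track integer tensors
    t.requires_grad_(True)
except RuntimeError as e:
    print(e)  # the "Only Tensors of floating point dtype ..." error quoted above
```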

No, you’ll still have the same problem.

If your network parameters are `int16`s, and, as part of training, your optimizer wants to change one of your network parameters by a small amount, it won't be able to – it could change the parameter by `1` or `0` (or `2`, etc.), but not by, say, `0.0345`.
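A small illustration of this: casting a fractional update to `int16` truncates it to zero, so the parameter never moves, whereas a float parameter absorbs the step.

```python
import torch

w = torch.tensor([100], dtype=torch.int16)     # an int16 "parameter"
step = torch.tensor([0.0345]).to(torch.int16)  # the small update truncates to 0
w_new = w - step
print(w_new.item())  # 100: the small update vanished entirely

# With floats, the small step survives.
w_f = torch.tensor([100.0]) - 0.0345
print(w_f.item())  # approximately 99.9655
```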

Instead, you should change your ints to floats, something like:

```
my_float_tensor = my_int_tensor.float()
```

Note that, in general, pytorch only likes to perform tensor operations with tensors of the same type. So even if you have a case where it logically and mathematically makes sense to combine an integer tensor with a float tensor (because, say, gradients won't be flowing back through the integer tensor), pytorch won't let you (and won't automatically cast the ints to floats), so you will have to explicitly cast your int tensor to floats before performing the operation.
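A sketch of that explicit cast, using a made-up integer mask that no gradients flow through (note that recent PyTorch versions do promote some mixed-dtype operations automatically, but the explicit cast keeps the intent clear):

```python
import torch

# Hypothetical example: an integer mask that gradients never flow through.
mask = torch.tensor([1, 0, 1], dtype=torch.int16)
x = torch.tensor([0.5, 1.5, 2.5], requires_grad=True)

y = x * mask.float()  # explicit cast before combining with the float tensor
y.sum().backward()
print(x.grad)  # tensor([1., 0., 1.]): gradients flow only where the mask is 1
```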

Best.

K. Frank

Thank you so much, Frank, for the quick reply. Everything made sense.

Hi! For my part, I'm just interested in saving my weights as ints to be able to run inference; I don't need to do training. Is that possible?