Can we train a model with weights initialized with int16?

Hello Sasank!

[quote="Sasank_Kottapalli, post:1, topic:78381, full:true"]
RuntimeError: Only Tensors of floating point dtype can require gradients
cant we use autograd with int16?
[/quote]

No. int16s (as well as other integer types) are discrete, and
so are not usefully differentiable. So you can't use them with
autograd.
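
As a minimal sketch (the tensor names here are just for illustration),
this is what produces the error you quoted:

import torch

w_int = torch.tensor ([1, 2, 3], dtype = torch.int16)
try:
    # raises "RuntimeError: Only Tensors of floating point dtype can require gradients"
    w_int.requires_grad_ (True)
except RuntimeError as err:
    print (err)

# a floating-point tensor works fine
w_float = torch.tensor ([1.0, 2.0, 3.0])
w_float.requires_grad_ (True)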

No, you’ll still have the same problem.

If your network parameters are int16s and, as part of training,
your optimizer wants to change one of your network parameters
by a small amount, it won't be able to – it could change the
parameter by 1 or 0 (or 2, etc.), but not by, say, 0.0345.

Instead, you should change your ints to floats, something like:

my_float_tensor = my_int_tensor.float()
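
If your goal is to start training from int16 values, one approach
(a rough sketch – the names such as int_weights are made up) is to
cast them to float and copy them into a float parameter:

import torch
import torch.nn as nn

# hypothetical int16 values you want to initialize from
int_weights = torch.randint (-5, 5, (4, 8), dtype = torch.int16)

layer = nn.Linear (8, 4)
with torch.no_grad():
    # cast to float so the parameter stays a trainable float tensor
    layer.weight.copy_ (int_weights.float())

print (layer.weight.dtype, layer.weight.requires_grad)   # torch.float32 True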

Note, in general, pytorch only likes to perform tensor operations
with tensors of the same type. So even if you have a case
where it logically and mathematically makes sense to combine
an integer tensor with a float tensor (because, say, gradients
won’t be flowing back through the integer tensor), pytorch won’t
let you (and won’t automatically cast the ints to floats), so you
will have to explicitly cast your int tensor to floats before
performing the operation.
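
As a rough illustration (the tensors here are made up), an operation
such as matmul complains about mixed dtypes, but works after an
explicit cast:

import torch

int_tensor = torch.tensor ([[1, 2], [3, 4]], dtype = torch.int16)
float_tensor = torch.rand (2, 2)

# torch.matmul (int_tensor, float_tensor)               # dtype-mismatch error
result = torch.matmul (int_tensor.float(), float_tensor)   # works after casting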

Best.

K. Frank