I have a deep convolutional LSTM network for which I am trying to do very basic post-training quantization: just converting the trained weights from float32 to int8.

I’m new to this so I might have done this wrong, but what I’ve done is take the list of `model.parameters()` after training (there are 18 tensors), rescale each tensor from its float range to the int8 range, and then convert the data type to int8. The output list (`qparameters`) looks right; now I’m just unsure how to put these values back into a model to test.

```
import torch

qparameters = []

def convert_float_to_int8():
    # Rescale each parameter tensor from its own [min, max] range
    # into the int8 range [-128, 127], then cast to int8.
    for x in model.parameters():
        old_min = torch.min(x)
        old_max = torch.max(x)
        new_min, new_max = -128.0, 127.0
        old_range = old_max - old_min
        new_range = new_max - new_min
        y = ((x - old_min) * new_range) / old_range + new_min
        y = torch.round(y).type(torch.int8)
        print(x.dtype)
        print(y, y.dtype)
        qparameters.append(y)

convert_float_to_int8()
print(qparameters)
```
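To sanity-check the mapping itself (independent of the model), here is a minimal self-contained sketch of the same min-max affine quantization on a single tensor, together with the inverse (dequantize) step. The helper names `quantize_minmax` and `dequantize` are my own for illustration; keeping the `scale` and `old_min` alongside each int8 tensor is what would let the values be mapped back to float32 for testing.

```python
import torch

def quantize_minmax(x: torch.Tensor):
    """Affine min-max quantization of a float tensor to int8.

    Returns the int8 tensor plus the (scale, old_min) needed to
    map the values back to float32 for testing."""
    old_min, old_max = x.min(), x.max()
    scale = (old_max - old_min) / 255.0           # int8 covers 256 levels
    q = torch.round((x - old_min) / scale) - 128  # shift into [-128, 127]
    q = q.clamp(-128, 127).to(torch.int8)
    return q, scale, old_min

def dequantize(q: torch.Tensor, scale, old_min):
    """Inverse mapping: int8 back to approximate float32."""
    return (q.to(torch.float32) + 128) * scale + old_min

x = torch.randn(4, 4)
q, scale, old_min = quantize_minmax(x)
x_hat = dequantize(q, scale, old_min)
print(q.dtype)                   # torch.int8
print((x - x_hat).abs().max())   # reconstruction error, at most ~scale/2
```

The round trip is lossy by at most half a quantization step, so comparing the dequantized weights against the originals is a quick way to confirm the conversion is behaving before loading anything back into a model.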