Couldn't quantize the model I used

I have a notebook here that has an image classifier skeleton.

I tried to fuse layers and quantize the model as in the video but got an error.

Are there any best practices for writing a model, etc., that avoid this problem?
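For reference, here's roughly what I tried. This is a minimal sketch, not my actual notebook: `SmallNet` is a made-up stand-in for the classifier skeleton, following the eager-mode static quantization flow from the tutorial (fuse, prepare, calibrate, convert).

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    # made-up stand-in for the notebook's classifier skeleton
    def __init__(self, num_classes=10):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, num_classes)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.relu(self.bn(self.conv(self.quant(x))))
        x = self.pool(x).flatten(1)
        return self.dequant(self.fc(x))

model = SmallNet().eval()  # conv+bn fusion requires eval mode

# Fuse conv -> bn -> relu; the names must match the attribute names exactly
torch.ao.quantization.fuse_modules(model, [["conv", "bn", "relu"]], inplace=True)

# Pick a quantized engine that is actually available on this machine
backend = "fbgemm" if "fbgemm" in torch.backends.quantized.supported_engines else "qnnpack"
torch.backends.quantized.engine = backend
model.qconfig = torch.ao.quantization.get_default_qconfig(backend)

torch.ao.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))   # calibration pass with dummy data
torch.ao.quantization.convert(model, inplace=True)
out = model(torch.randn(1, 3, 32, 32))
```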

Hi guys, any pointers would be helpful here!

Your notebook doesn’t show any error, so could you describe what issues you are seeing and what you’ve tried so far?


Hi Patrick, the issue arose when I ended up using TorchServe. The API broke, but it worked when I tried a plain MNIST classifier with a clean architecture. What I mean is that the working model didn't call a small conv helper function with a (conv_bn_relu) combo.
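For context, the helper looks something like this (a hypothetical sketch, not my exact code). I suspect the fusion part trips up when the conv/bn/relu live inside a child module, because `fuse_modules` needs the fully qualified submodule names:

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch):
    # hypothetical version of the helper: a fusable conv -> bn -> relu triple
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = conv_bn_relu(3, 8)

    def forward(self, x):
        return self.block1(x)

model = Net().eval()  # conv+bn fusion requires eval mode

# The names must be the *qualified* paths into the wrapper module:
# "block1.0" etc., not just "0" or "conv".
torch.ao.quantization.fuse_modules(
    model, [["block1.0", "block1.1", "block1.2"]], inplace=True
)

# After fusion, block1.0 is a fused ConvReLU2d (with bn folded in),
# and the bn/relu slots are replaced with nn.Identity.
y = model(torch.randn(1, 3, 8, 8))
```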

How did you check that this module wasn't called? Did you get an error message, or do you think TorchServe just skipped this module somehow?

Got an error message @ptrblck

Let me organize this into a full question with pics and error messages. It will help people. Please give me an hour.

LOL ok, I got my question completely wrong. The TorchServe issue was completely different: I'd screwed up the ModelHandler by messing up the preprocess method.

The quantization issue is separate, and I still haven't figured it out. Here's the model.
I'm not sure how to quantize it, compared to this sweet, simple tutorial.