How to train a model with fixed precision in some layers?

I want to train a model while keeping the first few layers at a fixed lower precision (e.g., 8-bit). Is this possible?

As a concrete example, suppose I want to train an AlexNet, but with the first 4 layers in 8-bit and the remaining layers in 32-bit.
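
To make the target concrete, here is the model I have in mind (torchvision's AlexNet); by "first 4 layers" I mean the first four entries of `model.features`:

```python
import torchvision.models as models

model = models.alexnet()

# Print each layer with the precision I would like it trained at:
# the first 4 entries of model.features in 8-bit, the rest in 32-bit.
for i, layer in enumerate(model.features):
    print(i, type(layer).__name__, "-> 8-bit" if i < 4 else "-> 32-bit")
```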

I noticed that PyTorch has dynamic quantization, but it takes a 32-bit float model as input and does not support training the quantized model. Similarly, quantization-aware training (QAT) does not seem to support a different precision for each layer.
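
For reference, here is roughly the per-module setup I experimented with (a sketch using the eager-mode QAT API, assigning a qconfig only to the first 4 feature layers; as far as I can tell the fake-quant observers here are int8 only, so I still cannot choose an arbitrary bit-width per layer):

```python
import torch
import torchvision.models as models

model = models.alexnet()
model.train()  # prepare_qat requires training mode

# Give a QAT qconfig only to the first 4 feature layers; layers whose
# qconfig is None are left untouched and stay in ordinary float32.
qat_qconfig = torch.ao.quantization.get_default_qat_qconfig('fbgemm')
for i, layer in enumerate(model.features):
    layer.qconfig = qat_qconfig if i < 4 else None

# Swap the qconfig-bearing layers for their fake-quantized QAT versions.
model_prepared = torch.ao.quantization.prepare_qat(model)

# ...train model_prepared as usual: the first layers now train against
# int8 fake-quantization while the rest of the network remains fp32...
```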