Can I remove a layer from a pre-trained model while loading the model weights?

Based on this great visualization by @vdw, you could probably try to set the weight and bias of the corresponding layer to ones and zeros, and index the output state accordingly.
Note that you can access the per-layer parameters via attributes such as model.weight_hh_l0, where the l0 suffix gives you the layer index.
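A minimal sketch of what this looks like, assuming a plain two-layer `nn.LSTM` (the module names and sizes here are made up for illustration): the per-layer parameters follow the naming scheme `weight_ih_l{k}`, `weight_hh_l{k}`, `bias_ih_l{k}`, `bias_hh_l{k}`, so you can select a layer's parameters by its suffix and overwrite them in place.

```python
import torch
import torch.nn as nn

# Hypothetical two-layer LSTM; the _l0 / _l1 suffixes index the layers.
lstm = nn.LSTM(input_size=4, hidden_size=8, num_layers=2)

# Collect the first layer's parameter names by their _l0 suffix.
layer0_names = [n for n, _ in lstm.named_parameters() if n.endswith("_l0")]
print(layer0_names)
# ['weight_ih_l0', 'weight_hh_l0', 'bias_ih_l0', 'bias_hh_l0']

# Overwrite the second layer's parameters in place, e.g. zero its biases.
with torch.no_grad():
    lstm.bias_ih_l1.zero_()
    lstm.bias_hh_l1.zero_()
```

If the goal is instead to drop a layer's weights while loading a checkpoint, one option is to filter the `state_dict` before calling `load_state_dict`, e.g. `{k: v for k, v in state.items() if not k.endswith("_l1")}` together with `strict=False`.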