Efficient way to manipulate latent space of a pre-trained network

I have a pre-trained UNet model, and I want to perform some extra computation on its bottleneck: multiply the output of that block by a new vector, then pass the resulting vector to the decoder for reconstruction. What is the most efficient way to do this without decomposing the UNet into encoder and decoder parts and reassembling them into a new module?
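For context, here is a minimal sketch of the kind of thing I'd like to achieve using a PyTorch forward hook, which lets you intercept and replace a submodule's output without restructuring the model. The `TinyUNet`, `bottleneck`, and `scale` names below are hypothetical stand-ins for my actual pre-trained network:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy stand-in for a pre-trained UNet with an accessible bottleneck."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 4)
        self.bottleneck = nn.Linear(4, 4)
        self.decoder = nn.Linear(4, 8)

    def forward(self, x):
        return self.decoder(self.bottleneck(self.encoder(x)))

model = TinyUNet()
scale = torch.full((4,), 2.0)  # the new vector to multiply into the latent

def edit_latent(module, inputs, output):
    # Returning a tensor from a forward hook replaces the module's output,
    # so the decoder receives the modified latent instead of the original.
    return output * scale

handle = model.bottleneck.register_forward_hook(edit_latent)
x = torch.randn(1, 8)
out = model(x)   # decoder now sees bottleneck output * scale
handle.remove()  # detach the hook when done
```

Is a hook like this the recommended approach here, or is there a cleaner/faster pattern for injecting computation at the bottleneck?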