Forcing consistency in output

I’m currently training a NN for a reconstruction task. Both its first and last layers are Conv2D layers. It takes in an image plus additional parameters, where each parameter is broadcast to a constant-valued tensor of the same spatial size as the image and passed through as its own channel. The output layer mirrors the input layer and produces an image plus the additional parameter channels.

I’d like to teach the network (or constrain it) so that the output parameter channels stay consistent, i.e. all values within each parameter channel should be nearly or exactly the same.
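One thing I’ve considered is a soft constraint: adding a penalty term to the loss that measures how non-constant each parameter channel is. This is just a sketch of the idea in PyTorch (the function name and the assumption that the parameter channels come last are mine):

```python
import torch

def consistency_penalty(output: torch.Tensor, n_param_channels: int) -> torch.Tensor:
    """Penalize spatial variation in the parameter channels.

    output: tensor of shape (batch, channels, H, W), where the last
    n_param_channels channels are the parameter maps that should be
    spatially constant.
    """
    param_maps = output[:, -n_param_channels:, :, :]
    # Per-channel variance over the spatial dims is zero exactly when
    # the channel is constant, so minimizing it pushes each parameter
    # map toward a single value.
    return param_maps.var(dim=(-2, -1)).mean()
```

Then the training loss would be something like `loss = recon_loss + lam * consistency_penalty(out, k)` with a weight `lam` to tune. An alternative hard constraint would be to replace each output parameter channel with its spatial mean before computing the loss, which guarantees exact constancy but changes the gradients the network sees.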

I hope someone can help me out.