I am currently experimenting with a WGAN-GP in which I want to enforce an additional constraint on the generated images.

Specifically, I want the sum of the pixels in one particular channel (e.g., red) to equal a given value *x*.

Thus, during training, I compute this sum for each generated image in the batch (of, say, *n* generated images) and store the results in a tensor of length *n*. Let’s call this tensor *S*.
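For concreteness, this is roughly how I compute *S* (a minimal sketch; the shapes and the random `fake_images` stand-in are just for illustration, with channel 0 playing the role of red):

```python
import torch

n, C, H, W = 64, 3, 32, 32
fake_images = torch.rand(n, C, H, W)  # stand-in for the generator's output batch

# Per-image sum over all pixels of the "red" channel (index 0 here)
S = fake_images[:, 0, :, :].sum(dim=(1, 2))  # shape: (n,)
```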

Then, I add to the generator’s loss function an additional term given by the `MSELoss()` between the tensor *S* and a target tensor of length *n* filled with *x*.
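In code, the extra term looks roughly like this (`x`, `lambda_c`, and the stand-in tensors are placeholders to keep the snippet self-contained):

```python
import torch
import torch.nn as nn

x = 100.0                        # desired channel sum (placeholder value)
S = torch.rand(64) * 200         # stand-in for the per-image sums computed above
wgan_g_loss = torch.tensor(0.0)  # stand-in for the usual WGAN-GP generator loss

target = torch.full_like(S, x)        # length-n target tensor filled with x
constraint = nn.MSELoss()(S, target)  # reduction='mean' by default

lambda_c = 1.0                   # placeholder weight for the extra term
g_loss = wgan_g_loss + lambda_c * constraint
```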

What I observe is that, while many elements of *S* remain quite far from *x*, the **mean** of *S* converges very precisely to the desired value *x*.

I am a bit puzzled: this additional loss is certainly doing something, but not what I would expect, i.e., making all the values of *S* converge to *x* as closely as possible.

Instead, it seems that the mean of *S* is converging to the mean of the target tensor. Is this normal?
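To explain my expectation: as far as I understand, the gradient of the MSE pushes each element toward *x* independently, which a quick check seems to confirm:

```python
import torch
import torch.nn.functional as F

x = 100.0
S = torch.tensor([50.0, 150.0, 100.0], requires_grad=True)

loss = F.mse_loss(S, torch.full_like(S, x))  # reduction='mean' by default
loss.backward()

# Each element gets its own push toward x: grad_i = 2 * (S_i - x) / n
print(S.grad)  # tensor([-33.3333,  33.3333,   0.0000])
```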

PS: I also tried with `reduction='sum'`, but it doesn’t seem to help much. Also, what’s the point of having both `'sum'` and `'mean'` in the implementation of `MSELoss()`?
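For reference, this is how I understand the two reductions to relate (a minimal check with random tensors):

```python
import torch
import torch.nn as nn

a = torch.rand(8)
b = torch.rand(8)

loss_mean = nn.MSELoss(reduction='mean')(a, b)
loss_sum = nn.MSELoss(reduction='sum')(a, b)

# 'sum' is 'mean' scaled by the number of elements, so for a fixed
# batch size it only rescales the loss (and hence the gradients).
assert torch.allclose(loss_sum, loss_mean * a.numel())
```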