Matrix with values out of the range (0, 255), rather than an image

Can PyTorch deal with a matrix whose values are out of the range (0, 255), instead of an image?
I am learning the pix2pix model and using a matrix as input, whose range may be (-200, 300), but the generator seems to only produce numbers in (0, 255), which bothers me a lot…

Most likely your generator has some code that ensures this scaling.
Could you post the code of your generator, or show how the generated images are created?

The pix2pix generator output goes through a tanh; its inputs and outputs are designed to be normalized from 0…255 to -1…1. PyTorch doesn’t make any assumptions about this, so you should change the code yourself.
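For example, the usual convention looks roughly like this (a sketch with dummy tensors, not your exact pipeline):

import torch

# Sketch of the usual pix2pix convention, using a random dummy image:
img = torch.randint(0, 256, (1, 3, 256, 256)).float()   # values in [0, 255]
net_input = img / 127.5 - 1.0                    # normalize [0, 255] -> [-1, 1]
fake = torch.tanh(torch.randn_like(net_input))   # stand-in for the tanh generator output
recovered = (fake + 1.0) * 127.5                 # map [-1, 1] back to [0, 255]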


The generator is defined by the code below. Like SimonW said, there is a tanh function, but I do not know how to change this…

import functools

import torch
import torch.nn as nn


class UnetSkipConnectionBlock(nn.Module):
    def __init__(self, outer_nc, inner_nc, input_nc=None,
                 submodule=None, outermost=False, innermost=False,
                 norm_layer=nn.BatchNorm2d, use_dropout=False):
        super(UnetSkipConnectionBlock, self).__init__()
        self.outermost = outermost
        # InstanceNorm2d has no affine parameters by default, so the convs need a bias.
        if type(norm_layer) == functools.partial:
            use_bias = norm_layer.func == nn.InstanceNorm2d
        else:
            use_bias = norm_layer == nn.InstanceNorm2d
        if input_nc is None:
            input_nc = outer_nc
        downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4,
                             stride=2, padding=1, bias=use_bias)
        downrelu = nn.LeakyReLU(0.2, True)
        downnorm = norm_layer(inner_nc)
        uprelu = nn.ReLU(True)
        upnorm = norm_layer(outer_nc)

        if outermost:
            upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
                                        kernel_size=4, stride=2,
                                        padding=1)
            down = [downconv]
            # The tanh squashes the final output to [-1, 1].
            up = [uprelu, upconv, nn.Tanh()]
            model = down + [submodule] + up
        elif innermost:
            upconv = nn.ConvTranspose2d(inner_nc, outer_nc,
                                        kernel_size=4, stride=2,
                                        padding=1, bias=use_bias)
            down = [downrelu, downconv]
            up = [uprelu, upconv, upnorm]
            model = down + up
        else:
            upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
                                        kernel_size=4, stride=2,
                                        padding=1, bias=use_bias)
            down = [downrelu, downconv, downnorm]
            up = [uprelu, upconv, upnorm]

            if use_dropout:
                model = down + [submodule] + up + [nn.Dropout(0.5)]
            else:
                model = down + [submodule] + up

        self.model = nn.Sequential(*model)

    def forward(self, x):
        if self.outermost:
            return self.model(x)
        else:
            # Concatenate input and upsampled features (the U-Net skip connection).
            return torch.cat([x, self.model(x)], 1)

Thanks for your reply. There is indeed a tanh function in the code; does it serve as an activation function? How can I change it to a linear one without influencing the final results?

Is your output range fixed to [-200, 300]?
If so, you could just keep the tanh and scale your output to your desired range.
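Something like this minimal sketch (assuming a fixed, known target range; the lo/hi values are just placeholders from your example):

import torch

def scale_from_tanh(out, lo, hi):
    # map the tanh output in [-1, 1] to the desired range [lo, hi]
    return (out + 1.0) / 2.0 * (hi - lo) + lo

out = torch.tanh(torch.randn(4, 1, 8, 8))          # stand-in for the generator output
scaled = scale_from_tanh(out, lo=-200.0, hi=300.0)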

Alternatively, you could remove the tanh, but the training might become more unstable.
Could you explain your range a bit? Is it just an estimate, or are your matrices limited to it?

Thanks for your quick reply :grinning:.
Sorry for the ambiguity. [-200, 300] is just an example to illustrate numbers that are out of [0, 255].
My inputs have different ranges, all of which fall outside [0, 255], and I want to generate numbers corresponding to the input as precisely as possible.

OK, then I would still suggest using tanh and normalizing your output somehow.
If you don’t have a limited range, or only very sparse min and max values, you could try to standardize your output with its mean and std (calculated from the training targets).
This would allow your model to predict outputs in [-1, 1], which you would then scale back to the original range.
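A rough sketch of this approach (the random tensors just stand in for your training targets and the generator output):

import torch

train_targets = torch.randn(100, 1, 8, 8) * 50.0 + 40.0   # dummy targets outside [0, 255]
mean, std = train_targets.mean(), train_targets.std()     # statistics from the training set

def standardize(t):
    return (t - mean) / std

def destandardize(t):
    return t * std + mean

pred = torch.tanh(torch.randn(1, 1, 8, 8))   # stand-in for a generator prediction in [-1, 1]
restored = destandardize(pred)               # back to the original scale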

Thanks a lot, I’m working on this.
BTW, if I want to remove the tanh function, to avoid any transformation and just for fun :ghost:, what should I do?

From skimming your code, it looks like the tanh is added when outermost=True.
Just remove it from up = [uprelu, upconv, nn.Tanh()].
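For reference, the outermost branch of your code would then look like this (only the up list changes):

if outermost:
    upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
                                kernel_size=4, stride=2,
                                padding=1)
    down = [downconv]
    up = [uprelu, upconv]   # nn.Tanh() removed -> unbounded (linear) output
    model = down + [submodule] + up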

OK, I will give it a try. Thanks again.