# How to produce a confidence map?

Suppose I have a ground-truth image with every pixel labeled as 0 or 1, and I want to train a CNN to produce a confidence map in which every pixel’s value varies from 0 to 1. If a pixel in the ground truth is labeled 1, its counterpart in the output should be 1 or close to 1 (that’s why I call it a confidence map).
I tried to treat it as a regression problem, so I set the network to produce a 1-channel output map and used MSELoss.
But it seems to have failed after 1k epochs of training.
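For reference, a minimal sketch of the regression setup described above (the exact model is not given in the post, so the layer sizes here are hypothetical): a 1-channel output squashed into [0, 1] with a sigmoid, trained against the binary ground truth with MSELoss.

```python
import torch
import torch.nn as nn

batch_size, c, h, w = 2, 3, 24, 24
x = torch.randn(batch_size, c, h, w)
# binary ground truth, one channel per pixel
target = torch.randint(0, 2, (batch_size, 1, h, w)).float()

# hypothetical model: 1-channel output, sigmoid to constrain values to [0, 1]
model = nn.Sequential(
    nn.Conv2d(c, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
    nn.Sigmoid()
)
criterion = nn.MSELoss()

output = model(x)
loss = criterion(output, target)
loss.backward()
```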

Left: input; middle: ground truth; right: output.
I want to know how to modify my loss function and the network output to get a desirable result. Thanks for your help!

As far as I understand you are dealing with a segmentation task, i.e. you would like to classify each pixel in your input image to one of your classes.
In your case you just have two valid classes (background and class).
If that’s correct, you could use a simple classification approach with your multi-dimensional output and target.
Here is a dummy example using random input and a random target consisting of 2 classes:

```python
import torch
import torch.nn as nn

batch_size = 2
c, h, w = 3, 24, 24
nb_classes = 2
x = torch.randn(batch_size, c, h, w)
target = torch.empty(batch_size, h, w, dtype=torch.long).random_(nb_classes)

model = nn.Sequential(
    nn.Conv2d(c, c*2, 3, 1, 1),
    nn.ReLU(),
    nn.Conv2d(2*c, nb_classes, 3, 1, 1)
)
criterion = nn.CrossEntropyLoss()

output = model(x)
loss = criterion(output, target)
loss.backward()
```

Yeah, it’s a segmentation task, but the difference is: segmentation produces a confidence map with channel number = class number, and we finally take the max over the channels to get the channel with the largest probability at each pixel. For instance, if we get [0.8, 0.2] at one pixel, we consider it to belong to label 0. But I want to get the confidence map itself; I don’t do any decision operation, I only produce a confidence.
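The decision step described above (which the poster wants to skip) can be sketched for the [0.8, 0.2] example like this:

```python
import torch

# One pixel with per-class probabilities [0.8, 0.2],
# laid out as [batch, nb_classes, h, w] = [1, 2, 1, 1].
probs = torch.tensor([[[[0.8]], [[0.2]]]])

# The usual segmentation step: argmax over the class channel
# turns the probabilities into a hard label per pixel.
pred = probs.argmax(dim=1)  # label 0, since 0.8 > 0.2
```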
Just like this figure I grabbed from a paper: the blue and red parts, and some other colors, show the confidence. But it does receive a binary ground truth.

You could calculate the probabilities for both classes using softmax and slice the class-1 channel to get a similar visualization. Could you try that and see if that’s what you are looking for?
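A minimal sketch of that suggestion, assuming the 2-channel logits come from a model like the one in the earlier example: apply softmax over the class dimension, then slice out channel 1 as the per-pixel confidence map.

```python
import torch
import torch.nn.functional as F

# stand-in for the model's raw output: [batch, nb_classes, h, w]
logits = torch.randn(2, 2, 24, 24)

# softmax over the class channel gives per-pixel class probabilities
probs = F.softmax(logits, dim=1)

# slice the class-1 channel: a [batch, h, w] confidence map in [0, 1]
confidence_map = probs[:, 1]
```

Each value in `confidence_map` is the predicted probability that the pixel belongs to class 1, which can be visualized directly as a heat map without any argmax decision.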

Yeah! I think I understand your answer. Another question: when I use multiple GPUs, is it necessary to set the batch size to an even number? I find that on my dual-GPU machine with the default setting, the first GPU’s memory usage is nearly full, but the other is only half used. Is there any reason why GPU:0 uses far more than the other? Can the batch size be an odd number? With the current setting it seems that I leave nearly 6 GB unused, while the other GPU is working at full capacity.