Deconvolution and Image segmentation

While doing image segmentation with an encoder-decoder architecture, I followed my usual approach, but the segmentation result looks weird:
the result has a repeating pattern inside the character area, which should be all white.
The logic of my decoder is: it takes the encoder output as input and applies three successive transposed convolutions, each upsampling the feature map by a factor of 2, so the image is enlarged 8x in total.

Hello Kevinkevin189, for a quick reference, see the article "Deconvolution and Checkerboard Artifacts" on Distill. It might explain why you are observing this.

Your problem isn’t specific to PyTorch, so it may be more helpful to reach out to the StackOverflow community on this topic.


Thanks for your help. I read the article and got a rough understanding of what causes the artifact.
But I’m still confused, and I’d like to know what specific modification to make.
E.g., I currently use 3 deconvs with k=4, s=2 (I took k=2s from the article, which said setting k to 2s should be a solution). I made this modification but saw no obvious improvement.
My architecture is: deconv1, deconv2, deconv3, then conv. The three deconvs are all set to k=4, s=2 to double the size of their input; the final conv is a k=1, s=1 layer that converts the features into channel-wise classes for segmentation.
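For concreteness, a minimal sketch of that decoder in PyTorch might look like the following. The channel widths, the number of classes, and padding=1 are my assumptions (padding=1 with k=4, s=2 makes each deconv exactly double the spatial size):

```python
import torch
import torch.nn as nn

# Sketch of the described decoder: three deconvs (k=4, s=2) that each
# double the spatial size, then a 1x1 conv mapping features to per-class
# channels. Channel widths and num_classes are assumptions.
num_classes = 2
decoder = nn.Sequential(
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),  # deconv1: 2x
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # deconv2: 4x
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),    # deconv3: 8x
    nn.ReLU(inplace=True),
    nn.Conv2d(32, num_classes, kernel_size=1, stride=1),               # per-pixel class scores
)

x = torch.randn(1, 256, 32, 32)  # stand-in for the encoder output
logits = decoder(x)
print(logits.shape)              # (1, 2, 256, 256): 8x spatial upsampling
```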

Hello @Kevinkevin189, glad to know that you read the article and made the modifications. I’m no expert in this field, but if keeping the kernel size a multiple of the stride doesn’t help and your application can’t tolerate these artifacts, I recommend trying the method the article proposes. As you might have noticed, the article suggests resizing the image to double the size using a standard interpolation and then using a convolution layer to mix the output. It also shows some examples of how this technique improved GAN outputs.
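A minimal sketch of such a resize-convolution block, assuming nearest-neighbor interpolation and made-up channel counts (the article also discusses bilinear resizing):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleConv(nn.Module):
    """Resize-then-convolve upsampling, as suggested by the Distill
    article. Kernel size 3 with padding 1 keeps the (already doubled)
    spatial size unchanged; these values are assumptions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # Double the spatial size by interpolation, then mix with a conv
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        return self.conv(x)

up = UpsampleConv(128, 64)
x = torch.randn(1, 128, 16, 16)
y = up(x)
print(y.shape)  # (1, 64, 32, 32)
```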

Again, I’d advise taking this discussion to StackOverflow, as you’ll reach a wider community and have a better chance of solving your problem. Also, you can post the link to the discussion you create there here.

Thanks, I’ve posted on StackOverflow. I had already changed the kernel size to double the stride in the deconv layers, but I could still see checkerboard artifacts, so I switched to feature resizing plus a padded convolution to enlarge my feature map.
I use nearest-neighbor interpolation, then a conv op with padding=1, stride=1, kernel=3 to keep the size unchanged. But something weird happens:
when I use argmax to get the output, it generates an all-black image, while argmin generates an inverted result (for black pixels in the training mask it generates white, and vice versa).
I checked the softmax output and found that it does map background pixels to channel 0, so argmax outputs 0 at those pixels. But the foreground pixels’ softmax output is even, just (0.5, 0.5), so the network can’t distinguish them; argmax then falls back to the natural order and picks index 0, producing 0. That is why argmax generates an all-black image while argmin generates the reverse: the network distinguishes the background, but at these tied pixels it fails.
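The tie-breaking you describe matches what the PyTorch docs state for `argmax`/`argmin`: on exact ties, the index of the first maximal (or minimal) value is returned. A small illustration with two made-up pixels:

```python
import torch

probs = torch.tensor([
    [0.9, 0.1],   # confident background pixel
    [0.5, 0.5],   # tied "foreground" pixel
])

# On exact ties, argmax and argmin both return the first index, so the
# tied pixel is assigned class 0 either way; only the confident pixel
# flips between the two ops (which is why the argmin image looks inverted).
print(torch.argmax(probs, dim=1))  # tensor([0, 0]) -> everything class 0
print(torch.argmin(probs, dim=1))  # tensor([1, 0]) -> background flipped
```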
The only modification to my network is the replacement of the deconvs with resize-and-conv, so that must be the cause, but I don’t know how to fix it.
You can see the details at the link here.
