Tensorboard image quality lower than matplotlib?

New to tensorboard and was testing out some basic functionality.

Displaying the image in matplotlib shows the expected image.

print(torch.__version__)
1.2.0
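For reference, the matplotlib display was along these lines (a sketch; dcm is assumed to be a tensor of image slices loaded earlier):

import matplotlib.pyplot as plt

# imshow min-max normalizes float input by default, which is why the slice renders correctly
plt.imshow(dcm[0], cmap='gray')
plt.show()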

However, adding the image to tensorboard gives a degraded image:

from torch.utils.tensorboard import SummaryWriter

tb = SummaryWriter()
# dcm[0] is a single 2D slice; unsqueeze adds the channel dimension expected by add_image (CHW)
tb.add_image('image_indiv', dcm[0].unsqueeze(0))
tb.close()

This looks like some image normalization issue. Matplotlib normalizes the image before displaying, but I don’t think tensorboard does that (not sure though).
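If that is the cause, min-max scaling the tensor yourself before logging should reproduce what matplotlib shows (a sketch, assuming dcm[0] is a float tensor):

img = dcm[0].float()
img = (img - img.min()) / (img.max() - img.min())  # scale to [0, 1], like imshow's default normalization
tb.add_image('image_indiv_norm', img.unsqueeze(0))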

It looks more like full preprocessing rather than just normalization; the colors seem inverted. Are you sure you are calling the two lines (plt.imshow and tb.add_image) without dcm changing in between?

Hi,
I'm also facing this problem when using torch.utils.tensorboard with TensorBoard 2.

[side-by-side comparison image]
The left picture is the image as shown in TensorBoard. It looks bad.
The right picture is the same numpy array saved with imageio.imwrite.

Previously I used the same code to log images with tensorboard 1.13.1, and the quality was good, just like the right picture above.

Does anyone have the best practice for this situation?

I found the problem.
Because my input array is not np.uint8, _calc_scale_factor multiplies the input array by scale_factor = 255 one more time.
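As a rough illustration of the effect (my own sketch, not the library code): if the array is float and its values are already in [0, 255], that extra factor pushes everything far outside the valid range before the conversion back to uint8:

import numpy as np

img = np.linspace(0, 255, 16, dtype=np.float32).reshape(4, 4)
broken = img * 255        # the extra scale factor applied to non-uint8 input
print(broken.max())       # 65025.0 -- gets mangled when converted back to uint8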

Solution:
Change the dtype before feeding the array to add_image:

# cast to uint8 so the internal scale factor stays 1
writer.add_image(tag, images.astype(np.uint8), step, dataformats=dataformats)
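Alternatively, if you want to keep a float array, scale it to [0, 1] before logging; the internal factor of 255 then maps it back to the right range (a sketch, assuming images holds values in [0, 255]):

writer.add_image(tag, images / 255.0, step, dataformats=dataformats)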