Nuclei segmentation with variable nuclei classes per patch/image

I need to perform nuclei segmentation. There are 5 nuclei classes in total. However, some patches contain only 2, 3, or 4 nuclei classes, while some contain all 5 classes. Let's say there is a patch A with only 3 nuclei classes – classes A, C, and E. In this case, how do I create labelled and binary masks? Since each colour represents a particular nuclei class, should I use a black mask for classes B and D?

Thank you.

Hi Kountay!

Let me assume that you want to perform semantic segmentation, that is, you want your
model to classify each pixel in an image as belonging to one of your classes. Also, by
“patch” I assume that you mean an entire image (that might be cropped out of a larger
image and that might be processed as one sample in a batch of several samples) that
is passed through your model for training / inference.

You will be solving a multi-class classification problem, presumably using
CrossEntropyLoss as your loss criterion. You will have six classes – your five nuclear
types plus a “background” class, used to label pixels that aren’t part of any nucleus.

Your ground-truth mask “image” for a given patch / image will have the same spatial size
as the given input image and consist of integer (long) class labels, in your case, the values
0 through 5. The value 0 would typically be used to label the background class (but you
don’t have to do it this way), so if a pixel in the input image belongs to a nucleus of class A,
the corresponding pixel in the mask would have value 1, class B would have mask value
2, and so on. A pixel in the input image that belongs to no nucleus would correspond to
a mask pixel with value 0.
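As a concrete sketch (the shapes and values here are just illustrative), such a target mask pairs with a model's per-class logits like this:

```python
import torch

# Toy 2x3 target mask: 0 = background, 1..5 = nucleus classes A..E
target = torch.tensor([[0, 1, 1],
                       [5, 0, 2]], dtype=torch.long)

# Model output for a batch of one sample: (N, num_classes, H, W) logits
logits = torch.randn(1, 6, 2, 3)

# CrossEntropyLoss takes raw logits and integer class labels (no one-hot encoding)
loss = torch.nn.CrossEntropyLoss()(logits, target.unsqueeze(0))
```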

It’s perfectly fine for a mask to not have all six values in it. If an input image happens not
to contain any nuclei of class A, then its mask will not contain any pixels with value 1.

(It is not uncommon for pixels of one or more classes to be much more common than
others, often with the background class being the most common. If the class counts in
your training data are sufficiently unbalanced, say by a factor of maybe three or more,
you might consider using CrossEntropyLoss’s weight constructor argument to help compensate for the imbalance.)
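For example (the per-class pixel counts below are made up), you could pass inverse-frequency weights to down-weight the over-represented background class:

```python
import torch

# Hypothetical per-class pixel counts from the training masks (class 0 = background)
counts = torch.tensor([9000., 300., 250., 200., 150., 100.])

# Inverse-frequency weights, rescaled so that the mean weight is 1
weights = 1.0 / counts
weights = weights / weights.sum() * len(counts)

criterion = torch.nn.CrossEntropyLoss(weight=weights)
```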

Best.

K. Frank

Thank you so much for your insights.

Actually, currently I just have the binary masks corresponding to each nucleus class. Using these binary masks, I should create a single multiclass mask with pixel values just as you mentioned (classes A-E ==> values 1-5; BACKGROUND ==> value 0). So, if a patch does not have classes A and C, there would not be binary masks for those classes. In that case, how should I proceed with creating the multiclass mask?

In short, what you call the ground-truth mask is what I need to create. :slight_smile:

Thank you.

Hi Kountay!

Let me assume that each pixel in an image is in exactly one class (including background).
Let me also assume that your existing binary masks respect this, that is, if a given pixel
is 1 in the binary mask for class A, it is 0 in the binary mask for class B, and so on. For
convenience, let me also assume that your binary masks are already pytorch tensors of
type long.

You could then do something like the following:

# maskExists() stands in for however you check that a class's mask is present
multiclassMask = torch.zeros(H, W, dtype=torch.long)
if maskExists('A'): multiclassMask += 1 * maskA
if maskExists('B'): multiclassMask += 2 * maskB
if maskExists('C'): multiclassMask += 3 * maskC
if maskExists('D'): multiclassMask += 4 * maskD
if maskExists('E'): multiclassMask += 5 * maskE

(You wouldn’t have to follow this particular scheme – there are lots of ways to accomplish
the same thing. For example, if maskA doesn’t exist, you could simply create one made
up of all zeros, and then add all five masks together with weights, as above.)
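A runnable sketch of that alternative (the tiny masks and the `available` dict are placeholders for however you actually load your data):

```python
import torch

H, W = 4, 4

# Pretend only classes A and C are present in this patch
maskA = torch.zeros(H, W, dtype=torch.long)
maskA[0, :2] = 1
maskC = torch.zeros(H, W, dtype=torch.long)
maskC[1, :3] = 1
available = {'A': maskA, 'C': maskC}

multiclassMask = torch.zeros(H, W, dtype=torch.long)
for weight, name in enumerate(['A', 'B', 'C', 'D', 'E'], start=1):
    # A missing mask is treated as all zeros, so it contributes nothing
    mask = available.get(name, torch.zeros(H, W, dtype=torch.long))
    multiclassMask += weight * mask
```

Because the binary masks are assumed disjoint, each pixel receives exactly one class label.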

Best.

K. Frank


Amazing! Thank you so much for your response.

Based on your answer, I have tried to implement the steps as follows (please let me know if it is correct):

import cv2 as cv
import numpy as np

# Read each existing binary mask as a single-channel image
maskA = cv.imread('maskA.png', cv.IMREAD_GRAYSCALE)
maskB = np.zeros_like(maskA)  # class B is absent from this patch
maskC = cv.imread('maskC.png', cv.IMREAD_GRAYSCALE)
maskD = np.zeros_like(maskA)  # class D is absent from this patch
maskE = cv.imread('maskE.png', cv.IMREAD_GRAYSCALE)

stackedBinaryMask = np.dstack([maskA, maskB, maskC, maskD, maskE])
multiclassMask = np.zeros_like(maskA)
for i in range(stackedBinaryMask.shape[-1]):
    multiclassMask[stackedBinaryMask[..., i] == 255] = i + 1

Is this approach correct?
Thank you.

Hi Kountay!

I can’t speak to the details of the cv and np functions you are using, but if they behave
like their pytorch cousins, yes, your approach should work.

Note that you will need to convert your multiclassMask to a pytorch tensor if you want
to use it in a pytorch loss criterion such as CrossEntropyLoss.
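That conversion might look like the following (the small array stands in for the multiclassMask produced by the NumPy code above):

```python
import numpy as np
import torch

# Hypothetical multiclass mask as produced from the stacked binary masks
multiclassMask = np.array([[0, 1],
                           [3, 5]], dtype=np.uint8)

# CrossEntropyLoss expects class labels as a tensor of dtype long
target = torch.from_numpy(multiclassMask).long()
```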

Best.

K. Frank

Yes, np.zeros_like() works similarly to torch.zeros(). Sure, I will keep in mind to convert the mask into a PyTorch tensor. Thank you for your response.
Regards
Kountay Dwivedi