
Nuclei segmentation using deep learning: Methodology essentials

Written by:

Dr. Fanny Dobrenova, M.A.

cand. med. Elisa Opriessnig, BA BA MA

Advanced methods for nuclei segmentation using deep learning (DL) have risen in popularity in recent years. These approaches aim to detect nuclei in tissue sections and measure their properties using automated image recognition algorithms.

Being able to correctly detect and segment cell nuclei in microscopy images is an essential task throughout various disciplines such as pathology and histology. Changes in the morphological characteristics of the nuclei in tissue samples are often considered an indication of pathological processes.

Compared to qualitative slide reading, using a DL-based method for nuclei segmentation facilitates a faster and more reliable morphological assessment, improves the objectivity and reproducibility of results and increases efficiency in both research and diagnostics. Yet, the choice of the right DL-architecture should depend primarily on its performance in terms of processing efficiency and segmentation accuracy.

The benefits of deep learning approaches for nuclei segmentation (prepared by KML Vision)

In this article we provide an overview of recent DL-based methods used for nuclei segmentation tasks. Learn about the most common challenges researchers face during tissue analysis, such as over- and under-segmentation, overlapping or touching nuclei and domain shift. We discuss a number of preprocessing, annotation and labeling best practices intended to yield optimal results when implementing nucleus segmentation in digital image analysis workflows. Find out about the various applications of algorithm training for nuclear segmentation tasks in the field of histopathology.

The cell nuclei segmentation workflow

The automated segmentation of cell nuclei involves a number of stages: preprocessing, segmentation, postprocessing and evaluation. During the preprocessing stage the quality of input images is improved to ensure optimal segmentation results. The objective of the actual segmentation stage is to extract cell nuclei present in the foreground of the image.

During the postprocessing stage the segmentation results are optimized. This involves sorting out false findings, i.e. removing objects that are not actual nuclei, such as image artifacts, from the segmentation results through additional morphological filtering. Postprocessing is also applied to refine nuclear boundaries.
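As a minimal illustration of such a postprocessing step (a sketch under assumptions, not the pipeline of any cited study), small spurious objects and holes can be removed from a binary segmentation mask with scikit-image; the minimum nucleus area used here is an arbitrary, dataset-dependent value.

```python
import numpy as np
from skimage import morphology

def postprocess_mask(binary_mask: np.ndarray, min_nucleus_area: int = 50) -> np.ndarray:
    """Remove spurious small objects and fill small holes in a binary nucleus mask.

    min_nucleus_area (in pixels) is an assumed, dataset-dependent threshold.
    """
    cleaned = morphology.remove_small_objects(binary_mask.astype(bool), min_size=min_nucleus_area)
    cleaned = morphology.remove_small_holes(cleaned, area_threshold=64)
    return cleaned
```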

At the evaluation stage the researcher assesses the quality of the quantitative segmentation results based on benchmark metrics. The most common metrics used at this stage are precision, recall and F1-measure.

The cell nuclei segmentation workflow (prepared by KML Vision)

Precision measures the fraction of detected objects that are actual nuclei, while recall measures the fraction of actual nuclei that are detected. The F1-measure is the harmonic mean of precision and recall and summarizes the object detection capabilities of the network. All of these metrics take values between 0 and 1; the closer a value is to 1, the greater the predictive power of the model (Fujita & Han, 2020).
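As a minimal sketch, these metrics can be computed from counts of true positives (TP), false positives (FP) and false negatives (FN); matching predicted nuclei to ground-truth nuclei (e.g. via an overlap criterion) is assumed to have been done beforehand, and the example counts are made up.

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall and F1-measure from matched detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical example: 90 correctly detected nuclei, 10 false detections, 15 missed nuclei
print(detection_metrics(tp=90, fp=10, fn=15))
```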

Deep learning methods for nuclei segmentation: analysis frameworks, challenges and opportunities

A frequent and central goal of tissue analysis is determining the population of individual cell nuclei in microscopy images. Depending on the task, an object detection algorithm may be sufficient to localize nuclei instances. However, morphological properties of the nucleus such as shape and texture allow observers to obtain quantitative data at the individual object level, which can subsequently be aggregated to visualize size distributions and detect abnormal patterns.

An essential step in this analysis pipeline is the accurate segmentation of the image into semantic classes representing coherent regions such as nuclei as an example of intracellular structures, or connective tissue, epithelial and stromal areas within a histology dataset. This can be done for one or multiple classes and facilitates a multi-readout from a single analysis.

Each semantic class needs to be further processed by identifying the boundaries of every single nucleus i.e. performing boundary detection. Thus, three classes of pixel types are identified during the nuclear segmentation process: background, nuclei interior and nuclear boundaries (Caicedo et al., 2019).  For instance, this allows us to detect individual instances of a particular type of nuclei.
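One common way to obtain these three pixel classes is to derive them from an instance label map in which each nucleus carries its own integer ID. The sketch below uses scikit-image and assumes the ground truth is stored in exactly that form.

```python
import numpy as np
from skimage.segmentation import find_boundaries

def three_class_target(instance_labels: np.ndarray) -> np.ndarray:
    """Convert an instance label map (0 = background, 1..N = nuclei) into a
    three-class map: 0 = background, 1 = nucleus interior, 2 = nuclear boundary."""
    boundaries = find_boundaries(instance_labels, mode="inner")
    target = np.zeros_like(instance_labels, dtype=np.uint8)
    target[instance_labels > 0] = 1   # provisionally mark all nucleus pixels as interior
    target[boundaries] = 2            # overwrite the boundary pixels
    return target
```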

The most common traditional approaches for nuclei segmentation analysis are thresholding and seeded watershed. Thresholding involves the use of threshold intensity values in grayscale histograms of microscopy images in order to filter objects of interest like cell nuclei (Win et al., 2018; Caicedo et al., 2019).
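A minimal sketch of intensity thresholding with Otsu's method in scikit-image; it assumes a grayscale image in which nuclei are brighter than the background (the comparison is flipped for dark nuclei), and the file name is a placeholder.

```python
from skimage import filters, io

image = io.imread("nuclei.png", as_gray=True)   # placeholder file name
threshold = filters.threshold_otsu(image)
nuclei_mask = image > threshold                 # use "<" instead if nuclei are darker than the background
```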

The classical seeded watershed method partitions microscope images using watershed lines alongside the boundaries of morphological structures such as nuclei. By doing so, the locations of the nuclei - referred to as seeds - are identified. The algorithm further relies on region growing in order to expand each location until the boundaries of the nucleus are reached (Atta-Fosu et al., 2016).
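A hedged sketch of this idea with scikit-image and SciPy: seeds are taken as local maxima of the distance transform of the foreground mask and grown until the nuclear boundaries are reached; the minimum seed distance is an assumed parameter.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def seeded_watershed(foreground_mask: np.ndarray, min_distance: int = 10) -> np.ndarray:
    """Split a binary foreground mask into individual nuclei with a seeded watershed."""
    distance = ndi.distance_transform_edt(foreground_mask)
    # Seeds: local maxima of the distance map, i.e. approximate nucleus centers
    seed_coords = peak_local_max(distance, min_distance=min_distance, labels=foreground_mask)
    seeds = np.zeros(distance.shape, dtype=int)
    seeds[tuple(seed_coords.T)] = np.arange(1, len(seed_coords) + 1)
    # Region growing on the inverted distance map, constrained to the foreground
    return watershed(-distance, seeds, mask=foreground_mask)
```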

However, accurate and robust nuclear instance segmentation using traditional methods can be challenging due to morphological variations in the nuclei among different organs and tissue types. Moreover, spatial configurations (e.g. clusters, touching nuclei), tissue preparation and complex backgrounds containing imaging artifacts (e.g. folds, out-of-focus regions) significantly limit the applicability of such methods (Zhou et al., 2019).

At the same time, deep learning-based methods have been on the rise in recent years and have proven to be more efficient, accurate and robust in segmenting cell nuclei across various tissue types (Jang et al., 2019). We provide a brief overview of the most common DL-based approaches used for nuclei segmentation.

Deep learning model architectures for nucleus segmentation

Deep neural network architectures apply semantic and instance segmentation in order to identify and locate nuclei in microscopy images (Naylor et al., 2017).

Most common DL-based approaches for nuclei segmentation:

  • Semantic segmentation: associates every pixel of an image with a class label
  • Instance segmentation: treats multiple objects of the same class as distinct individual instances

Semantic segmentation involves assigning areas of the image to different semantic classes such as nuclei, cytoplasm, nuclear edges or boundaries, and background. The task of DL-algorithms for semantic segmentation is to track the presence of nuclei in the foreground of the digital slide and to later detect the boundary of each nucleus by segmenting the connected foreground area (Cui et al., 2019; Kowal et al., 2020).

Instance segmentation proceeds in two stages: object identification and per-object segmentation. One of the most popular methods to perform instance segmentation is described in an article by Allehaibi et al. (2019). First, the algorithm detects objects of interest and creates bounding boxes around them. Second, the model produces a pixel-wise mask for each individual object, i.e. it performs semantic segmentation within each bounding box. Instance segmentation offers the advantage of providing additional information on rich morphological features of cellular structures (Zhou et al., 2019).

DL-algorithms classify and segment objects in the images by learning features from large amounts of representative input data (Hayakawa et al., 2019). Various types of DL-architectures have been used for segmenting biomedical images, yet they all differ in layer configuration and model depth (Kowal et al., 2020). We discuss a number of deep learning architectures commonly used for the purposes of nuclei segmentation.

DL-architectures used for nuclei segmentation 

  • Convolutional neural networks (CNN)
  • Fully convolutional neural networks (FCN)
  • U-Net
  • Mask R-CNN

Convolutional neural networks (CNN)

Convolutional neural networks (CNN) are widely used for the automatic segmentation of biomedical image data for the purposes of feature extraction. CNNs are multilayer neural networks that learn an abstract representation of the input image on the basis of hierarchical layers of convolutional filters (Kowal et al., 2020).

Like other segmentation approaches, convolutional neural networks classify single pixels of the image into different semantic categories. However, they have been reported to provide superior performance in nuclei segmentation, classification and detection tasks compared to traditional methods (Alom et al., 2018).

This approach applies three types of layers to process image data: convolution layers, pooling layers and activation or fully connected layers. Convolution layers extract feature maps that highlight structures of interest in the input images, while pooling layers downsample these feature maps and aggregate the information obtained during convolution. Together they produce semantic maps of connected objects, which are later segmented into singular objects. Last but not least, a fully connected layer connected to all activations in the previous layers is applied at the end of the CNN. The task of the network at this stage is to capture relationships between high-level features and output labels (Wang et al., 2014; Hayakawa et al., 2019; Caicedo et al., 2019; Kowal et al., 2020).
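To make these layer types concrete, here is a minimal, hedged PyTorch sketch of a CNN patch classifier; the patch size of 64×64 pixels and the two output classes (nucleus vs. background patch) are illustrative assumptions, not settings from the cited studies.

```python
import torch
import torch.nn as nn

class TinyNucleusCNN(nn.Module):
    """Minimal CNN: convolution layers extract feature maps, pooling layers
    downsample them, and a fully connected layer maps to class scores."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                                     # x: (batch, 3, 64, 64)
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

scores = TinyNucleusCNN()(torch.randn(1, 3, 64, 64))          # class scores for one RGB patch
```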

CNNs have been reported to be more effective in detecting nuclei and cytoplasm areas than traditional computer vision methods, even in cases when the staining and the background are heterogeneous. Problems may still occur in cases of overlapping nuclei, where no clear nucleus boundaries can be detected. Yet, existing research points at the superior segmentation performance of CNNs as compared to intensity-based thresholding methods in such cases. However, the separation of overlapping areas still requires a two-stage method combining convolutional neural networks with seeded watershed techniques (Kowal et al., 2020).

Several types of deep learning architectures based on CNN have been discussed in existing literature and each brings along specific advantages during segmentation tasks.

Fully convolutional neural network (FCN)

The fully convolutional network (FCN) approach builds on the original CNN architecture, but replaces the fully connected layers with convolutional ones and adds upsampling and skip layers. An FCN makes predictions on a pixel-by-pixel basis and operates on image input of arbitrary size. Thus, the spatial precision of the output is more refined and the network can cope with variations in input image resolution. This makes FCNs flexible enough to learn high-level features without requiring extensive pre- and postprocessing (Zhang et al., 2017; Naylor et al., 2017; Long et al., 2015).

FCN has been reported to be particularly precise in capturing deep features of cell nuclei, but not as effective in localizing nucleus boundaries. This type of architecture has been cited for its capability to segment “normal” and “abnormal” nucleus masks in cytology studies (Zhang et al., 2017).

U-Net

U-Net is a U-shaped multi-scale stacked neural network based on FCN architecture. A U-Net relies on the stacking of encoder and decoder layers. The stacking of layers allows the algorithm to learn more complex features as the network depth increases (Vuola et al., 2019).

Encoder layers create an encoded representation by progressively reducing the spatial resolution of the input image through multiple convolutional layers. Decoder layers restore the resolution and create feature maps used to predict segmentation masks.

Similarly to the FCN, a typical U-Net introduces shortcut connections between the encoder and decoder layers. These skip connections allow the network to retrieve spatial information lost during pooling and to preserve features at different resolutions (Ronneberger et al., 2015; Kang et al., 2019; Vuola et al., 2019; Ibtehaz & Rahman, 2020).
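A heavily simplified, hedged PyTorch sketch of this encoder-decoder idea with a single skip connection; a real U-Net stacks several such levels with more channels per layer.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class MiniUNet(nn.Module):
    """One encoder level, a bottleneck and one decoder level with a skip connection."""

    def __init__(self, num_classes: int = 3):     # e.g. background / interior / boundary
        super().__init__()
        self.enc = conv_block(3, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)             # 32 channels = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        skip = self.enc(x)                        # high-resolution features
        x = self.bottleneck(self.down(skip))      # low-resolution, more abstract features
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)           # skip connection restores spatial detail
        return self.head(self.dec(x))

logits = MiniUNet()(torch.randn(1, 3, 128, 128))  # -> (1, 3, 128, 128) per-pixel class scores
```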

U-Net has been reported to yield excellent results in nucleus segmentation tasks, especially in cases of large nuclei, elliptical nucleus shapes and single nuclei. At the same time, some pitfalls of U-Net compared to other networks can still be observed with regard to nucleus detection tasks and the clumping of multiple nuclei into one big nucleus (Vuola et al., 2019).

Mask R-CNN

Mask R-CNN is a region-based deep learning network that is particularly effective for instance segmentation and also forms the basis of several domain adaptation approaches. Mask R-CNN enables the simultaneous detection and segmentation of cell images by adding branching subnets that estimate a segmentation mask for each region of interest (ROI) (Allehaibi et al., 2019; Fujita & Han, 2020). The model first detects bounding boxes of nuclei and subsequently performs segmentation on the nuclei within the boxes (Vuola et al., 2019).
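As a hedged sketch of what this looks like in practice, torchvision ships a Mask R-CNN implementation that can be configured for two classes (background and nucleus); the snippet below assumes torchvision 0.13 or newer, omits the training step entirely and uses a random tensor as a stand-in for a real image tile.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Two classes assumed: 0 = background, 1 = nucleus. The model would still need to be trained.
model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

image = torch.rand(3, 512, 512)                  # placeholder RGB tile with values in [0, 1]
with torch.no_grad():
    prediction = model([image])[0]               # dict with 'boxes', 'labels', 'scores', 'masks'

keep = prediction["scores"] > 0.5                # confidence threshold chosen for illustration
nucleus_masks = prediction["masks"][keep]        # one soft mask per detected nucleus
```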

When comparing histopathological image inputs with the nucleus segmentation masks produced by DL-algorithms, certain discrepancies in terms of objects being assigned to different classes can be observed. Due to such image-level and feature-level domain bias, a phenomenon referred to as “domain shift” occurs. Domain adaptation deep learning methods tackle this issue by adding benchmark datasets as a source domain in order to adapt to a variety of biomedical images (Liu et al., 2020; Hsu et al., 2021).

Mask R-CNN has been reported to perform outstandingly in nucleus detection tasks, while its segmentation capabilities still need improvement. This particular deep learning model yields fewer oversegmentation cases compared to U-Net and can better detect clumped or grouped nuclei. Still, Mask R-CNN tends to oversegment large nuclei, while it performs better on small and medium-sized nuclei (Vuola et al., 2019).

Evidently, each type of deep learning network has its advantages and is particularly efficient at different nucleus segmentation tasks. Choosing the right network for your research project, or adapting a deep learning algorithm to match your requirements, requires expert knowledge in computer vision and programming skills. All this speaks in favor of a model that is robust and flexible enough to adapt to different samples, research designs and segmentation tasks.

Preprocessing, annotation and labeling strategies

Prior to conducting the actual quantitative segmentation analysis, a stage of preparation is necessary in order to get the image data ready for the algorithm training. Different strategies for preprocessing, annotation and labeling of the image dataset have been proposed in existing literature in order to extract the most accurate nucleus segmentation data with trained algorithms.

Applying the right strategy will significantly improve the performance of the trained algorithm. We discuss a number of training strategies intended to facilitate optimal results in nuclear segmentation.

Data preparation techniques for nuclei segmentation: 7 training techniques (prepared by KML Vision)

Training data selection and preparation

DL-algorithms tend to be effective in segmenting nuclei even when trained on a small dataset, when the task permits it. Yet, providing a variety of training images improves the predictive power of the algorithm. DL-algorithms can be used across different experimental settings as long as variations of the morphological structures of interest have previously been introduced to the algorithm (Caicedo et al., 2019). Otherwise, a phenomenon known as “domain shift”, where the model is trained in one setting but has to perform in a different one, limits the applicability (Liu et al., 2020; Hsu et al., 2021).

The careful selection of input images is also recommended in order to balance the dataset and reduce the number of images containing background only or images of poor quality (Araujo et al., 2019).

Image data augmentation

Data transformation techniques in the spatial or color domain may effectively be used as part of a data augmentation pipeline. Augmentation refers to the artificial extension of a dataset to increase the amount and variance of training data so that it best represents the expected target domain. Data augmentation is amongst the most important steps in preprocessing and building a robust model for production use.

Common augmentation strategies involve random or parameterized spatial transformations such as flipping, rotation or deformation, frequently combined with adding noise or color transformations (Allehaibi et al., 2019). A nuclei segmentation model generally benefits from these operations, because with meaningful data augmentation the total amount of manually annotated and labeled data required for training can be kept to a minimum, facilitating faster model development.
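A minimal, library-agnostic sketch of such augmentations with NumPy, applied jointly to an image and its mask so that the labels stay aligned; dedicated augmentation libraries offer far richer transformations, and the noise level used here is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, mask: np.ndarray):
    """Random flip and 90-degree rotation applied identically to image and mask,
    plus Gaussian noise on the image only. Assumes a float image scaled to [0, 1]."""
    if rng.random() < 0.5:                                    # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = int(rng.integers(0, 4))                               # rotate by 0/90/180/270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    noisy = image + rng.normal(0.0, 0.02, size=image.shape)   # noise level is an assumption
    return np.clip(noisy, 0.0, 1.0), mask
```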

Color processing and normalization

During the segmentation process of stained virtual slides color information serves as an important indicator for detecting objects of interest. However, differences in the staining or acquisition devices of the whole-slide images may pose a difficulty for DL-algorithms to properly process the data.

Different types of stains are used to mark various objects of interest within histopathological images. Stains such as hematoxylin and eosin mark those objects with a particular color. Thus, hematoxylin is often used to mark cell nuclei, staining them blue, while eosin is normally absorbed by the cytoplasm, marking it reddish in color. Certain color variations in the markings may occur as a result of using different staining protocols, stain brands or microscope and scanning devices. This may affect the performance of the deep learning network, especially if the color hues of the training images differ from those of the images to be processed (Kowal et al., 2020).

Therefore, it is recommended to perform color transformation into illumination-resistant color spaces and perhaps also normalization of individual color space components on the input images in order to ensure the homogeneity of input data in terms of color variation. Color normalization techniques may involve histogram matching, color transfer and spectral matching (Hayakawa et al., 2019; Cui et al., 2019; Mahmood et al., 2019; Kowal et al., 2020).
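A short sketch of one of these techniques, histogram matching, with scikit-image (the channel_axis argument requires a recent version; older releases use the multichannel argument). The file names are placeholders; every tile is matched to a reference tile with the desired staining appearance.

```python
from skimage import io
from skimage.exposure import match_histograms

reference = io.imread("reference_tile.png")   # tile with the desired staining appearance
image = io.imread("input_tile.png")           # tile to be normalized

# Match each color channel of the input tile to the reference distribution
normalized = match_histograms(image, reference, channel_axis=-1)
```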

Annotation and labeling strategies 

The terms “annotation” and “labeling” are often used interchangeably and usually mean the same: adding a semantic meaning to a specific, spatially constrained image region. Here, we consider annotation the process of marking image regions manually using geometric shapes, for instance a polygon, and labeling the process of adding a specific term or meaning to the respective geometry.

End-to-end deep-learning-based segmentation models essentially learn to create semantic masks in the image, where each pixel is assigned to a specific semantic class. Hence, a pixel mask is also needed as a learning target for the system. While a pixel-wise annotation or “dense annotation” method requires a lot of manual effort (Qu et al., 2020) for the accurate outlining of individual objects, there are a number of annotation techniques discussed in literature to make the algorithm training process more time-efficient.
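A minimal sketch of turning polygon annotations into such a pixel-level learning target, assuming the annotations are available as arrays of (row, column) vertices; the triangular example annotation is made up.

```python
import numpy as np
from skimage.draw import polygon

def polygons_to_label_map(polygons, shape):
    """Rasterize nucleus polygons (each an array of (row, col) vertices) into an
    instance label map: 0 = background, i = i-th nucleus."""
    labels = np.zeros(shape, dtype=np.int32)
    for i, verts in enumerate(polygons, start=1):
        rr, cc = polygon(verts[:, 0], verts[:, 1], shape=shape)
        labels[rr, cc] = i
    return labels

# Hypothetical example: one triangular annotation in a 256x256 image
triangle = np.array([[20, 20], [20, 80], [80, 50]])
label_map = polygons_to_label_map([triangle], shape=(256, 256))
```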

Partial annotation, or “weak annotation”, is a technique that relies on annotating only a small number of objects in the training dataset. This strategy is also referred to as “sparse labeling.” Usually, this technique does not directly result in the desired masks and requires additional preprocessing steps to subsequently generate pixel-level annotations. While the generated masks may not perfectly delineate the nuclei, this approach is much more time-effective. Moreover, algorithms trained on partially annotated image datasets have reportedly shown similar performance results when compared to approaches involving dense pixel-level annotation (Qu et al., 2020; Bruch et al., 2020; Ho et al., 2021).

Point annotations are probably the most time-saving method of manual image annotation. Recent work has shown that training nuclei segmentation algorithms only by marking a single location of the nucleus without actually delineating the exact boundaries is possible, which significantly reduces the effort of annotation (Yoo et al., 2019).

Dealing with touching and overlapping nuclei during algorithm development

One object class of particular interest in the analysis of histological images is the nucleus of cells. The detection of overlapping, touching and heavily clustered nuclei is one of the most challenging issues when training automated nuclei segmentation models. The presence of multiple layers of cells e.g. in a pap smear sample often results in an upper layer of cells covering and obscuring the underlying cell layers on the microscopy slide image.

Challenges during tissue analysis

  • Over- and under-segmentation
  • Overlapping or touching nuclei
  • Domain shift

Overlapping cell nuclei have intersecting boundaries at points of concavity (Cloppet & Boucher, 2008; Irshad et al., 2013). Thus, extracting reliable and accurate nuclear boundaries in regions of overlap may be problematic with the software currently available on the market.

Overlaps and touching regions in whole-slide images may result in the inaccurate identification of nuclear boundaries and may cause either under-segmentation, i.e. segmenting overlapping nuclei as one, or over-segmentation, i.e. segmenting one single nucleus as multiple ones (Mahmood et al., 2019; Zhou et al., 2019).

Several studies have attempted to address this issue using non-DL methods such as watershed transform algorithms (Cloppet & Boucher, 2008) or active shape models (ASM) (Plissiti & Nikou, 2012). DL-based models for splitting touching and overlapping nuclei areas are scarcer and often involve complementing the DL-algorithm with traditional techniques like seeded watershed (Kowal et al., 2020).

An article by Cui et al. (2019) puts forward a model that relies on an overlapped patch extraction and assembling method. Kang et al. (2019) apply a two-stage learning approach based on two separate U-Nets with different outputs. Coarse nuclear boundaries extracted during the first stage act as metadata used while segmenting overlapping nuclei during the second stage.

An article by Kowal et al. (2020) proposes combining a convolutional neural network with seeded watershed approaches in order to split nuclei clusters. Conditional erosion is applied to detect centers, or seeds, of prospective nuclei within the clumped area. Overlapping nuclei are subsequently segmented using the seeded watershed method.
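As a rough illustration of this idea (a simplified sketch, not the exact procedure of Kowal et al.), a clumped foreground region can be eroded until the individual cores separate, and those cores can then serve as seeds for a watershed split; the number of erosion steps is a dataset-dependent assumption.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def split_clump(clump_mask: np.ndarray, erosion_steps: int = 5) -> np.ndarray:
    """Erode a clumped binary mask to isolate per-nucleus seeds, then grow the
    seeds back with a seeded watershed. erosion_steps is an assumed parameter."""
    eroded = ndi.binary_erosion(clump_mask, iterations=erosion_steps)
    seeds, _ = ndi.label(eroded)                      # connected eroded cores become seeds
    distance = ndi.distance_transform_edt(clump_mask)
    return watershed(-distance, seeds, mask=clump_mask)
```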

Yet, most DL-algorithms for splitting overlapping nuclei used in literature are not completely technically mature and still require additional postprocessing steps like binary map transformation and thresholding (Cui et al., 2019).

Applications of deep-learning nuclei segmentation techniques in histopathology image analysis

We take a closer look at the applications of DL-based nucleus segmentation in the field of histopathology. In histopathology studies nuclear segmentation is an essential task, which enables nuclei morphology analysis, cell type classification as well as cancer detection and grading.

One of the biggest challenges that DL-based methods face during the analysis of histopathology image data is the segmentation of abnormal cell structures. Abnormalities in the morphological characteristics of the nuclei, such as size, shape, texture, color, volume and an altered nuclear-to-cytoplasmic ratio, serve as indicators of pathological processes detected in histology images (Irshad et al., 2013; Rączkowski et al., 2019).

A large body of histopathology nuclei segmentation research sets a thematic focus on two types of nuclei: lymphocyte nuclei and epithelial nuclei, since these two types are often associated with inflammatory processes at the cellular level (Irshad et al., 2013). Nuclei segmentation studies using histopathological images have already been performed on different types of tissues, the most common ones being breast, cervix and prostate tissue.

Cell nuclei in prostate mouse tissue

Nuclear segmentation on the basis of histopathology image data often poses a challenge to researchers and pathologists due to color and contrast variations, the presence of staining artifacts, the morphological variability of nucleus shapes, and the occurrence of clustered structures and overlapping regions (Irshad et al., 2013).

Furthermore, non-deep-learning computer vision approaches often fail to correctly segment the nuclei of cancerous or malignant cells, since these algorithms have been previously trained solely on a benign cell sample and lack the ability to adapt to novel data.

A DL-model can alleviate this problem as it automatically extracts features from raw data. Using a DL-based method has proven to be more effective, due to the ability of DL-algorithms to autonomously and adaptively learn from novel image data. Furthermore, DL-methods are more robust and showcase a more reliable performance when it comes to object classification, detection, and segmentation. In particular, a better predictive performance can be achieved (Araujo et al., 2019; Wang et al., 2020).

This is how deep learning methods for cell nuclei segmentation work.

In histopathology studies the number of cells undergoing mitosis, i.e. cell division, is often reported as an indicator of tumor severity. That is why it is essential to precisely track instances of mitotic nuclei. Yet, the segmentation of nuclei undergoing mitosis is also a difficult task due to the variability in the morphology and texture of mitotic cells (Wang et al., 2014).

Although deep learning-based models to segment nuclei in histopathological images achieve better precision results than traditional techniques like watershed and thresholding, there is still a need for improvement regarding the task of segmenting overlapping or touching nuclei. Those may in some cases be segmented as one object (Naylor et al., 2017; Wang et al., 2020).

Several histopathology studies suggest that the same trained algorithms often yield different performance metrics for tissues from different organs (Kang et al., 2019; Cui et al., 2019). That is why there is a high demand for effective nuclei segmentation methods which can be generalized across various cell, tissue and organ types.

We provide an overview of deep learning methods used in recent scientific articles on nuclei segmentation for histopathological image analysis and further discuss commonly used algorithm performance metrics. We take a look at performance indicators such as recall, precision and F1-score in order to evaluate the predictive power of several nuclear segmentation studies available.

| Article | Tissue type (cell type) | Deep learning architecture | Recall value | Precision value | F1-score |
|---|---|---|---|---|---|
| Cruz-Roa et al. 2013 | skin tissue (basal cells) | CNN | 0.887 | 0.901 | 0.894 |
| Wang et al. 2014 | breast tissue (mitotic cells) | CNN | 0.650 | 0.840 | 0.734 |
| Naylor et al. 2017 | breast tissue | FCN | 0.855 | 0.878 | 0.866 |
| Ren et al. 2017 | prostate tissue | CNN | 0.812 | 0.892 | 0.846 |
| Saha et al. 2018 | breast tissue (mitotic cells) | CNN | 0.880 | 0.920 | 0.900 |
| Araujo et al. 2019 | cervix tissue | CNN | 0.650 | 0.730 | 0.690 |
| Kang et al. 2019 | multiorgan | U-Net | 0.833 | 0.826 | 0.829 |
| Cui et al. 2019 | multiorgan (breast, liver, kidney, prostate) | FCN | 0.892 | 0.845 | 0.850 |
| Jung et al. 2019 | multiorgan (bladder, breast, kidney, liver, prostate) | Mask R-CNN | 0.913 | 0.821 | 0.861 |
| Lu et al. 2020 | breast tissue (lymphocytes) | U-Net | 0.953 | 0.901 | 0.926 |
| Ali et al. 2020 | prostate tissue (benign cells) | CNN | 0.810 | 0.860 | 0.850 |

Although state of the art DL-algorithms tend to outperform traditional segmentation analysis methods, some aspects of their performance may still pose an issue. The majority of studies have shown the limited generalizability of segmentation models across cell and tissue types.

The over- or under-segmentation of touching and overlapping nuclei and the misidentification of micronuclei and mitotic nuclei pose different types of issues. Prior research also points at performance differences when the same algorithm is run on benign and malignant samples. Ali et al. (2020) report that their algorithm achieves higher precision, recall and F-measure scores for malignant cells compared to benign cells.

In order to deal with issues such as overlapping nuclei and limited generalizability, further postprocessing and intervention are usually necessary (Mahmood et al., 2019; Caicedo et al., 2019).

Nucleus segmentation in prostate tissue samples

Several studies have laid an emphasis on segmenting nuclei in prostate biopsy tissue samples (Ren et al., 2017; Ali et al., 2020). When analyzing prostate tissue, glandular structures such as epithelial nuclei, stromal nuclei, epithelial cytoplasm, stromal cytoplasm and lumen are put in focus. Accurate nuclei segmentation in the histopathology analysis of prostate tissue biopsies is a decisive factor in the diagnosis and grading of prostate cancer. A concentration of epithelial nuclei along the boundaries of the prostate glands indicates that the structure of the tissue is intact and benign. The spread of epithelial nuclei with irregular shapes across the stromal areas can indicate that the biopsy sample is malignant.

Prostate tissue microscopy image

The procedure proposed by Ali et al. (2020) employs a few preprocessing steps, i.e. affine transformation, prior to performing a coarse segmentation of the input images. The next stage uses wavelet packets to extract glandular regions of the tissue by applying sample entropy values and a non-linear threshold. By doing so, benign tissue samples are differentiated from malignant ones.

More recent approaches also apply both boundary segmentation and region growing, while employing prior knowledge of the specifics of prostate tissue structure. For the purposes of Gleason scoring, which is typically used for prostate cancer grading, morphological features like glandular growth patterns and the degree of differentiation have to be considered as seen under the microscope at low magnification. The nuclei in neoplastic lesions are often enlarged and show prominent nucleoli; however, they may also exhibit variations in size and shape, while mitotic figures are in general extremely uncommon. Using deep learning approaches, according to Bhattacharjee et al. (2019), a high accuracy can be reported for classifying input images into benign vs. malignant samples (81%), grade 3 vs. grades 4 and 5 prostate cancer samples (75%) and grade 4 vs. grade 5 prostate cancer samples (76%).

A multistage segmentation approach proves to be more efficient as the usage of sample entropy values allows an easier differentiation between epithelial nuclei and stroma nuclei in standard H&E stained images (Ali et al., 2020).

Annotated mouse prostate microscopy images

Such novel prostate tissue segmentation techniques have been applied in a recent study on kinase assays at the Medical University of Vienna. Using KML Vision’s fully automated software application IKOSA AI, we have developed a novel DL-based approach for segmenting the nuclei in mouse prostate tissue in cooperation with the Medical University of Vienna. The segmentation involves a two-step process (unpublished data). Automated prostate duct recognition was trained on a total of 2564 annotations of prostate ducts in 5 H&E stained samples each of normal, precancerous, inflammatory and mixed genotypes. The DL-based nuclei segmentation algorithm was developed based on 4582 individual nuclei annotated in 66 randomly cropped image sections.

Mouse prostate tissue deep learning segmentation model

Developing new nucleus segmentation algorithms on your own study data may be time-consuming and requires programming skills. Some of the deep learning architectures we have presented above are available in open source libraries, however, adjusting them for the purposes of the research design still requires expert knowledge.

With the help of KML Vision’s unique software solution IKOSA AI you can develop your own deep learning algorithms for nucleus segmentation tasks best tailored to your experimental design without any coding experience required. IKOSA AI has proven highly robust and efficient across a variety of datasets, tissue samples and research designs.

References

Ali, T., Masood, K., Irfan, M., Draz, U., Nagra, A. A., Asif, M., ... & Yasin, S. (2020). Multistage Segmentation of Prostate Cancer Tissues Using Sample Entropy Texture Analysis. Entropy, 22(12), 1370.  


Al-Kofahi, Y., Zaltsman, A., Graves, R., Marshall, W., & Rusu, M. (2018). A deep learning-based algorithm for 2-D cell segmentation in microscopy images. BMC bioinformatics, 19(1), 1-11. 


Allehaibi, K. H. S., Nugroho, L. E., Lazuardi, L., Prabuwono, A. S., & Mantoro, T. (2019). Segmentation and classification of cervical cells using deep learning. IEEE Access, 7, 116925-116941. 


Alom, M. Z., Yakopcic, C., Taha, T. M., & Asari, V. K. (2018). Microscopic nuclei classification, segmentation and detection with improved Deep Convolutional Neural Network (DCNN) approaches. arXiv preprint arXiv:1811.03447.


Araujo, F. H., Silva, R. R., Ushizima, D. M., Rezende, M. T., Carneiro, C. M., Bianchi, A. G. C., & Medeiros, F. N. (2019). Deep learning for cell image segmentation and ranking. Computerized Medical Imaging and Graphics, 72, 13-21.  


Bhattacharjee, S., Park, H. G., Kim, C. H., Prakash, D., Madusanka, N., So, J. H., ... & Choi, H. K. (2019). Quantitative analysis of benign and malignant tumors in histopathology: Predicting prostate cancer grading using SVM. Applied Sciences, 9(15), 2969.


Bruch, R., Rudolf, R., Mikut, R., & Reischl, M. (2020). Evaluation of semi-supervised learning using sparse labeling to segment cell nuclei. Current Directions in Biomedical Engineering, 6(3), 398-401.   


Caicedo, J. C., Roth, J., Goodman, A., Becker, T., Karhohs, K. W., Broisin, M., ... & Carpenter, A. E. (2019). Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. Cytometry Part A, 95(9), 952-965.   


Cloppet, F., & Boucher, A. (2008, December). Segmentation of overlapping/aggregating nuclei cells in biological images. In 2008 19th International Conference on Pattern Recognition (pp. 1-4). IEEE.  


Cruz-Roa, A. A., Ovalle, J. E. A., Madabhushi, A., & Osorio, F. A. G. (2013, September). A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 403-410). Springer, Berlin, Heidelberg.


Cui, Y., Zhang, G., Liu, Z., Xiong, Z., & Hu, J. (2019). A deep learning algorithm for one-step contour aware nuclei segmentation of histopathology images. Medical & biological engineering & computing, 57(9), 2027-2043.   


Fujita, S., & Han, X. H. (2020). Cell Detection and Segmentation in Microscopy Images with Improved Mask R-CNN. In ACCV Workshops (pp. 58-70). 


Hayakawa, T., Prasath, V. S., Kawanaka, H., Aronow, B. J., & Tsuruoka, S. (2019). Computational nuclei segmentation methods in digital pathology: A survey. Archives of Computational Methods in Engineering, 1-13.   


Ho, D. J., Yarlagadda, D. V., D’Alfonso, T. M., Hanna, M. G., Grabenstetter, A., Ntiamoah, P., ... & Fuchs, T. J. (2021). Deep multi-magnification networks for multi-class breast cancer image segmentation. Computerized Medical Imaging and Graphics, 88, 101866.   


Hsu, J., Chiu, W., & Yeung, S. (2021). DARCNN: Domain Adaptive Region-based Convolutional Neural Network for Unsupervised Instance Segmentation in Biomedical Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1003-1012).


Ibtehaz, N., & Rahman, M. S. (2020). MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Networks, 121, 74-87.    


Irshad, H., Veillard, A., Roux, L., & Racoceanu, D. (2013). Methods for nuclei detection, segmentation, and classification in digital histopathology: a review—current status and future potential. IEEE reviews in biomedical engineering, 7, 97-114.   


Jung, H., Lodhi, B., & Kang, J. (2019). An automatic nuclei segmentation method based on deep convolutional neural networks for histopathology images. BMC Biomedical Engineering, 1(1), 1-12.


Kang, Q., Lao, Q., & Fevens, T. (2019, October). Nuclei segmentation in histopathological images using two-stage learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 703-711). Springer, Cham.      


Kowal, M., Żejmo, M., Skobel, M., Korbicz, J., & Monczak, R. (2020). Cell nuclei segmentation in cytological images using convolutional neural network and seeded watershed algorithm. Journal of digital imaging, 33(1), 231-242.


Lee, J., Kim, H., Cho, H., Jo, Y., Song, Y., Ahn, D., ... & Ye, S. J. (2019). Deep-learning-based label-free segmentation of cell nuclei in time-lapse refractive index tomograms. IEEE Access, 7, 83449-83460. 


Liu, D., Zhang, D., Song, Y., Zhang, F., O'Donnell, L., Huang, H., ... & Cai, W. (2020). Unsupervised instance segmentation in microscopy images via panoptic domain adaptation and task re-weighting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 4243-4252).


Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431-3440). 


Lu, Z., Xu, S., Shao, W., Wu, Y., Zhang, J., Han, Z., ... & Huang, K. (2020). Deep-learning–based characterization of tumor-infiltrating lymphocytes in breast cancers from histopathology images and multiomics data. JCO clinical cancer informatics, 4, 480-490.  


Mahmood, F., Borders, D., Chen, R. J., McKay, G. N., Salimian, K. J., Baras, A., & Durr, N. J. (2019). Deep adversarial training for multi-organ nuclei segmentation in histopathology images. IEEE transactions on medical imaging, 39(11), 3257-3267.     


Naylor, P., Laé, M., Reyal, F., & Walter, T. (2017, April). Nuclei segmentation in histopathology images using deep neural networks. In 2017 IEEE 14th international symposium on biomedical imaging (ISBI 2017) (pp. 933-936). IEEE.


Nguyen, K., Jain, A. K., & Allen, R. L. (2010, August). Automated gland segmentation and classification for Gleason grading of prostate tissue images. In Proceedings of the 20th International Conference on Pattern Recognition (ICPR) (Vol. 1, pp. 1497-1500). IEEE.


Plissiti, M. E., & Nikou, C. (2012). Overlapping cell nuclei segmentation using a spatially adaptive active physical model. IEEE Transactions on Image Processing, 21(11), 4568-4580.  


Qu, H., Wu, P., Huang, Q., Yi, J., Yan, Z., Li, K., ... & Metaxas, D. N. (2020). Weakly supervised deep nuclei segmentation using partial points annotation in histopathology images. IEEE Transactions on Medical Imaging, 39(11), 3655-3666.    


Rączkowski, Ł., Możejko, M., Zambonelli, J., & Szczurek, E. (2019). ARA: accurate, reliable and active histopathological image classification framework with Bayesian deep learning. Scientific reports, 9(1), 1-12.    


Ren, J., Sadimin, E., Foran, D. J., & Qi, X. (2017, February). Computer aided analysis of prostate histopathology images to support a refined Gleason grading system. In Medical Imaging 2017: Image Processing (Vol. 10133, p. 101331V). International Society for Optics and Photonics. 


Ronneberger, O., Fischer, P., & Brox, T. (2015, October). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention (pp. 234-241). Springer, Cham.  


Saha, M., Chakraborty, C., & Racoceanu, D. (2018). Efficient deep learning model for mitosis detection using breast histopathology images. Computerized Medical Imaging and Graphics, 64, 29-40.   


Vuola, A. O., Akram, S. U., & Kannala, J. (2019, April). Mask-RCNN and U-net ensembled for nuclei segmentation. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) (pp. 208-212). IEEE.  


Wang, H., Xian, M., & Vakanski, A. (2020, April). Bending loss regularized network for nuclei segmentation in histopathology images. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI) (pp. 1-5). IEEE.     


Wang, H., Roa, A. C., Basavanhally, A. N., Gilmore, H. L., Shih, N., Feldman, M., ... & Madabhushi, A. (2014). Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features. Journal of Medical Imaging, 1(3), 034003.   


Win, K. Y., Choomchuay, S., Hamamoto, K., & Raveesunthornkiat, M. (2018). Comparative study on automated cell nuclei segmentation methods for cytology pleural effusion images. Journal of healthcare engineering, 2018.  


Yoo, I., Yoo, D., & Paeng, K. (2019, October). PseudoEdgeNet: Nuclei segmentation only with point annotations. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 731-739). Springer, Cham.  


Zhang, L., Sonka, M., Lu, L., Summers, R. M., & Yao, J. (2017, April). Combining fully convolutional networks and graph-based approach for automated segmentation of cervical cell nuclei. In 2017 IEEE 14th international symposium on biomedical imaging (ISBI 2017) (pp. 406-409). IEEE.


Zhang, Z., Leong, K. W., Van Vliet, K., Barbastathis, G., & Ravasio, A. (2021). Deep learning for label-free nuclei detection from implicit phase information of mesenchymal stem cells. Biomedical Optics Express, 12(3), 1683-1706.  


Zhou, Y., Onder, O. F., Dou, Q., Tsougenis, E., Chen, H., & Heng, P. A. (2019, June). Cia-net: Robust nuclei instance segmentation with contour-aware information aggregation. In International Conference on Information Processing in Medical Imaging (pp. 682-693). Springer, Cham.