ROI, Annotation, Labels: Mastering Image Annotation in Pathology

7 Sep, 2023 | Blog posts, IKOSA AI

Microscopy image annotation, and pathology annotation in particular, are tasks that require great attention to detail. To train deep learning computer vision applications that perform advanced tasks such as detection, segmentation, and classification of objects, you need to provide accurately annotated and labeled input image data. Inaccurate or incomplete annotation negatively impacts the performance of your trained models and the results of your image analysis.

That is why you have to first get your annotations right. In this article, we guide you through image annotation basics and some more advanced techniques. We show you how easy the annotation of digital images is with the IKOSA software. See for yourself how proven annotation tactics will increase the efficiency of your work process and the accuracy of your image analysis models.

Please note that this article contains information for both pathology specialists and any life science expert participating in image analysis.


Annotation, Labeling, or Region-of-interest: What’s the difference?  

To create highly accurate artificial intelligence models and analyze pathology image data, you need a sufficient number of carefully annotated whole slide images (WSI). In life science literature, researchers commonly use terms like annotation, labeling, and region of interest interchangeably. However, on the IKOSA Platform, these methods of dataset preparation serve distinct purposes.

Image annotation 

Annotation entails manually marking the boundaries of morphological objects in your images using various shapes.

Annotations serve as the “ground truth” for your particular image dataset. It means they serve as reference data to train and evaluate the performance of your deep learning model.

Cell and nuclei annotations on Pap smear images in IKOSA


Labeling

Labels are text categories or classes you assign to your annotations. How you label objects normally depends on your research design. Labels can be associated with individual objects and with entire regions within digital WSIs. For example, when working with image data containing various cell types, you could classify them into distinct categories such as macrophages, lymphocytes, and T-cells. On a subcellular level, you might be interested in labeling morphological objects like cell nuclei and mitochondria. Additionally, it is possible to differentiate between normal cells and cancerous cells, a labeling technique suited for diagnostic purposes. In IKOSA, you assign a specific color to each label category for easy identification and visualization.

Label assignment by a user to cells and nuclei on Pap smear images in IKOSA
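Conceptually, an annotation together with its label can be thought of as a simple record: a shape, the coordinates of its boundary, and an attached class with a display color. The sketch below is purely illustrative; the field names are assumptions and do not reflect IKOSA's internal data model.

```python
# Illustrative sketch of an annotation record with an attached label.
# Field names are hypothetical and do NOT reflect IKOSA's data model.
annotation = {
    "shape": "polygon",
    # Vertices as (x, y) pixel coordinates outlining the object boundary
    "points": [(120, 80), (135, 78), (142, 95), (128, 104)],
    "label": {
        "name": "nucleus",    # class/category assigned by the annotator
        "color": "#FF0000",   # display color chosen for this label
    },
}

def label_name(ann):
    """Return the class name assigned to an annotation."""
    return ann["label"]["name"]

print(label_name(annotation))  # nucleus
```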

Regions of interest (ROIs)

ROIs are particular areas within a microscopy image you want to limit your training to.

In pathology research, ROIs are usually the most diagnostically significant areas within an image, where the morphological features of interest are concentrated. This might be, for example, the outline of a specific cancerous region within a WSI used to grade malignancy.

If your image contains morphological objects for annotation, but annotating the entire image is excessively laborious, or if only a portion of the image is relevant, you can choose to define an ROI or multiple ROIs. By doing so, you can focus on specific areas for training and validation while reducing the annotation effort. A Region of Interest might be a simple rectangular shape or a carefully delineated irregular form. 

Region of interest with annotated and labeled cell and nuclei on Pap smear image in IKOSA
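For a simple rectangular ROI, restricting processing to the region amounts to cropping the image. A minimal, library-free sketch on a toy nested-list image (the coordinates are invented for illustration):

```python
def crop_roi(image, x, y, width, height):
    """Crop a rectangular ROI from a 2D image given as a list of rows.

    (x, y) is the top-left corner of the ROI in pixel coordinates.
    """
    return [row[x:x + width] for row in image[y:y + height]]

# A toy 4x4 "image" of pixel intensities
image = [
    [0, 1, 2, 3],
    [4, 5, 6, 7],
    [8, 9, 10, 11],
    [12, 13, 14, 15],
]

roi = crop_roi(image, x=1, y=1, width=2, height=2)
print(roi)  # [[5, 6], [9, 10]]
```

Irregular ROIs work the same way in principle, except that the boundary is a polygon rather than a rectangle, so pixels are tested against the outline instead of sliced out.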

Annotations, labeling, and ROI selection require a great degree of knowledge about tissue structures. Therefore, only domain experts should perform these activities.

IKOSA is your research partner, ready to assist you.

The Pathology Image Annotation Workflow Explained

Annotating large amounts of whole slide image data is a time-consuming and complex process involving several steps. To help you manage this process, we'd like to share the different stages of the digital pathology image annotation workflow based on Wahab et al. (2022):

Proposed annotation workflow
The various steps in digital pathology image annotation. When annotating pathology image data, a structured workflow is crucial. © 2022 The Authors. The Journal of Pathology: Clinical Research published by The Pathological Society of Great Britain and Ireland & John Wiley & Sons, Ltd.

Defining Research Objectives

Each research project is centered around a distinct question that requires investigation. This question defines how your annotation protocol looks and which tissue features in your WSI sample you want to annotate. For example, if you want to develop a pathology image analysis model that is capable of grading breast cancer, you need to include tissue structures associated with the grading process in your protocol, such as tubules, tumor cell morphology, and mitotic cell nuclei (Wahab et al., 2022).

Drafting The Image Analysis Model

Before beginning the annotation process, take some time to consider the type of artificial intelligence model you intend to train. Different types of AI-driven applications can perform diverse image analysis tasks like detection, segmentation, and classification.

The IKOSA Platform supports semantic and instance segmentation AI models. These models can be trained on different image file formats including 2D and multichannel images.

Semantic and instance segmentation AI app training IKOSA
IKOSA AI has been specially developed to train advanced bioimage analysis applications without any coding required.
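The difference between the two model types can be illustrated on a tiny binary mask: semantic segmentation assigns every pixel a class, while instance segmentation additionally separates individual objects from one another. The sketch below derives instance labels from a semantic mask with a simple connected-components pass (4-connectivity); it is a didactic illustration, not IKOSA's training code.

```python
def instance_labels(semantic_mask):
    """Turn a binary semantic mask into instance labels.

    Each 4-connected component of foreground pixels (value 1) receives
    its own integer id (1, 2, ...); background stays 0.
    """
    rows, cols = len(semantic_mask), len(semantic_mask[0])
    labels = [[0] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if semantic_mask[r][c] == 1 and labels[r][c] == 0:
                next_id += 1
                stack = [(r, c)]  # flood-fill this component
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and semantic_mask[y][x] == 1 and labels[y][x] == 0:
                        labels[y][x] = next_id
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return labels

# Two separate objects in one semantic mask ...
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
# ... become two distinct instances:
print(instance_labels(mask))  # [[1, 1, 0, 0], [1, 0, 0, 2], [0, 0, 2, 2]]
```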

Creating an Annotation Data Dictionary

To reach a high level of accuracy, all annotators in your team need to have a common understanding of the annotation constructs used in your project. That is why it is advisable to create a dictionary or a database where you provide definitions of all tissue regions and cell types that are the subject of your study. It can also serve as a base for information exchange between pathologists on your team (Wahab et al., 2022).

This type of dictionary can also include guidelines regarding:

  • which tissue structures to annotate,
  • what is the diagnostic value of each annotation type,
  • how much to annotate, 
  • and references with illustrative examples.
Label set in the PAP smear project on IKOSA
A set of labels and annotation constructs used to train an AI application for the analysis of PAP smear datasets with IKOSA. Label categories like “nuclei” and “cells” have been assigned.
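Such a dictionary can live in any shared format your team agrees on. A minimal sketch as a Python structure, covering the guideline points above (the entries and wording are invented examples, not a prescribed schema):

```python
# Hypothetical annotation data dictionary for a PAP smear project.
# Entries and wording are invented examples, not a prescribed schema.
data_dictionary = {
    "nuclei": {
        "definition": "Cell nuclei, outlined at the nuclear envelope.",
        "annotate": "all clearly visible nuclei within each ROI",
        "diagnostic_value": "nuclear size and shape inform grading",
        "reference_images": ["examples/nuclei_01.png"],
    },
    "cells": {
        "definition": "Whole cells, outlined at the cytoplasmic border.",
        "annotate": "all cells whose boundary is unambiguous",
        "diagnostic_value": "cell morphology supports classification",
        "reference_images": ["examples/cells_01.png"],
    },
}

def lookup(term):
    """Return the agreed definition for an annotation construct."""
    return data_dictionary[term]["definition"]

print(lookup("nuclei"))
```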

Selecting an Annotation and Image Analysis Software 

Choosing the right pathology image analysis software is crucial. The tool you are using impacts the quality of your annotations and analysis results. Besides user-friendliness and accessibility, you should consider whether the software supports the annotation types and corresponding metadata you want to include in your research project, and whether interactive annotation (i.e., collaborative annotation) is possible. We suggest using user-friendly, web-based analysis software, as it doesn't require expensive IT infrastructure investments.

Did you know?

To ensure process continuity and convenience in your lab or research group, you can import annotations from open-source platforms like ImageJ or QuPath directly into IKOSA. In this way, IKOSA supports seamless integration and helps your workflow.
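QuPath, for instance, can export annotations in the GeoJSON format. As a sketch of what such an export looks like and how it can be read with nothing but the standard library, consider the snippet below; the exact property layout (e.g. the `classification` field) can vary between QuPath versions, so treat it as an assumption, not a fixed schema.

```python
import json

# A minimal GeoJSON string in the style of a QuPath annotation export.
# The "classification" property layout is an assumption for illustration.
geojson_text = """
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {
        "type": "Polygon",
        "coordinates": [[[100, 50], [140, 50], [140, 90], [100, 90], [100, 50]]]
      },
      "properties": {"classification": {"name": "Tumor"}}
    }
  ]
}
"""

def read_annotations(text):
    """Extract (class name, outer ring) pairs from a GeoJSON export."""
    collection = json.loads(text)
    annotations = []
    for feature in collection["features"]:
        props = feature["properties"]
        name = props.get("classification", {}).get("name", "unlabeled")
        ring = feature["geometry"]["coordinates"][0]  # outer polygon ring
        annotations.append((name, ring))
    return annotations

print(read_annotations(geojson_text))
```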

Defining Annotation Levels

Annotation levels are defined as the degree of detail you want to include in your annotation. This level depends on your research question. According to Wahab et al. (2022), there are different levels of complexity associated with annotations, such as:

  • Case-level: Annotation of the entire slide, for example with an overall diagnosis.
  • Region-level: You can imagine this as dividing the sample into relevant regions, for example, the annotation of tumor tissue or stroma tissue.
  • Cell-level: Annotation of single cells like tumor cells, immune cells, and stroma cells.

The four proposed levels of annotation
4 annotation levels you might want to think about when setting up your research project with a team. © 2022 The Authors. The Journal of Pathology: Clinical Research published by The Pathological Society of Great Britain and Ireland & John Wiley & Sons, Ltd.

Defining Annotation Constructs

The varying and complex shapes of morphological objects in pathology WSIs pose significant challenges in the annotation process. That is why you should decide in advance which annotation shapes you want to use and what colors you will use to outline and fill in the morphological features of interest.

Use the Versatile Annotation Tools in IKOSA to the Advantage of Your Research

The IKOSA Platform provides a variety of annotation tools, which allow our users to perform all sorts of different annotation tasks. You can choose from several tools and shapes:

  • Rectangle annotation tool 
  • Polygon annotation tool 
  • Circle annotation tool 
  • Freeform annotation tool
  • Point annotation tool
Try different annotation tools on your images to find the best solution for various object shapes.

Deciding on the Degree of Annotation

The degree of annotation refers to the process of determining the level of detail and extent to which objects or elements on a WSI need to be annotated. It involves making choices about what specific information should be marked or labeled in the image to suit the requirements of your particular research question. This decision could range from annotating only the most significant objects (for example, only nuclei) to providing comprehensive annotations that include detailed information about various elements in the image (for example, cells, nuclei, organelles). The degree of annotation is crucial as it directly impacts the complexity and accuracy of the trained model. 

It’s essential to note that unannotated objects will be treated as background and won’t be included in your analysis in IKOSA. Therefore, when initiating the annotation process, ensure that all objects of interest are annotated and assigned to their respective labels. If you annotate only a portion of the objects within a Region of Interest (ROI) and commence model training, the model may become confused and yield unsatisfactory analysis results as it struggles to differentiate between annotated and unannotated objects.

If you’re not employing ROIs to delineate specific areas in your image, you must annotate all objects of interest across the entire image.
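For training, annotations are typically rasterized into a mask in which every pixel not covered by an annotation becomes background, which is exactly why partially annotated ROIs confuse a model. A library-free sketch of this rasterization using an even-odd (ray casting) point-in-polygon test:

```python
def point_in_polygon(x, y, polygon):
    """Even-odd (ray casting) test of a point against a polygon's vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where the polygon edge crosses the scanline
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(width, height, annotations):
    """Build a label mask; pixels outside every annotation stay 0 (background)."""
    mask = [[0] * width for _ in range(height)]
    for label_id, polygon in annotations:
        for y in range(height):
            for x in range(width):
                # Test the pixel center against the annotation outline
                if point_in_polygon(x + 0.5, y + 0.5, polygon):
                    mask[y][x] = label_id
    return mask

# One annotated square; every other pixel is implicitly background (0).
annotations = [(1, [(1, 1), (4, 1), (4, 4), (1, 4)])]
mask = rasterize(6, 6, annotations)
```

Any real object inside the ROI that is left out of `annotations` ends up as background in `mask`, so the model is effectively told it is not an object at all.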

Depending on the project’s objective and complexity, another important consideration is the estimation of the sample size. Projects aiming to develop prognostic algorithms to measure minute differences between groups may require a large sample size. (Wahab et al., 2022)

We’re here for you at every stage of image analysis, from data upload to result download.

Defining Annotation Phases

We recommend applying a phased approach to annotations, with each phase focusing on specific annotation levels. Initiating with a pilot phase enables early identification and resolution of issues that might otherwise arise in later stages of the project. It also serves as a training opportunity for the annotation team to become acquainted with new constructs and terms defined in the data dictionary (Wahab et al., 2022). Annotations play a pivotal role in the application training process: they set the foundation for effective model development.

To kickstart your application’s training on IKOSA, you can apply the quick training option. This allows you to swiftly assess the app’s performance and ascertain whether you should incorporate additional images or annotations into the training set. Following the initial annotation phase, the quick training option becomes a valuable bridge to gauge the model’s initial performance based on the annotations provided.

The extended training option is recommended as a subsequent refinement phase and can deliver notable improvements, resulting in even better results. Building upon the foundation laid by annotations, the extended training option fine-tunes the model to further improve its accuracy and capabilities.

Selection of AI application training duration in IKOSA
The quick training feature gives you a first impression of the model’s performance. Depending on the outcome, you can optimize your annotations and input data before proceeding with the extended model training for optimal analysis performance.

We provide a step-by-step guide on how to start your application training. Simple text and video explanations will help you prepare your digital data in the right way to successfully train an AI app for your use case without any coding or AI knowledge.

Distributing Annotator Workloads

Estimating workload distribution for histopathology image annotation is complex due to the varying annotation levels and details involved, along with pathologists' experience and time constraints.

A balanced workload can be achieved by listing cases, annotation types, timeframes, and available pathologists. A pilot phase can help with workload estimation (Wahab et al., 2022).

Performing Interactive Annotation   

Advances in digital pathology make collaborative annotation efforts possible. This collaborative process is called interactive annotation (Wahab et al., 2022). It can significantly speed up the process, especially if you are on a tight budget or have a limited number of expert annotators on your team.

If you are not sure how to label specific objects in your WSI slides, you can always consult other members of your team or external experts. State-of-the-art pathology image analysis software such as IKOSA allows several users to work simultaneously on a project. You can assign different user roles to multiple team members, which enhances collaborative work in your team.

The IKOSA Platform is available in the cloud, which allows you to collaboratively annotate and label morphological features of interest together with other members of your team at any time and location. It also enables you to ask more experienced pathologists who do not work in the same facility as you to participate remotely.

Collaborative annotation in IKOSA allows internal and external team members to participate in the drawing, labeling, and reviewing processes. You can always invite an expert to join and review your work.

Collaborate with colleagues by inviting them to annotate slides, and easily control project access for external participants through distinct roles within your IKOSA organization. Assign specific roles to guests and organization members, granting them viewing, annotating, or editing privileges for the entire project.

Good Annotation Practices

Remember that the goal is to strike a balance between data volume, annotation quality, and the requirements of your analysis or model. A well-considered annotation strategy, based on a clear understanding of the analysis goals and the nature of the annotated objects, will help you achieve reliable and meaningful results.

  1. Define Annotation Classes: Clearly define the semantic and instance classes you’ll annotate (e.g., nuclei, cytoplasm, mitochondria, different cell types). A well-defined class hierarchy helps organize the annotation process.
  2. Use Regions of Interest (ROIs): Divide your image into meaningful regions. Use ROIs to enclose structures accurately and provide fine-grained annotations.
  3. Color Coding and Labels: Use consistent color coding for different annotation classes. Label annotations with clear, legible text to identify structures easily.
  4. Number of annotations: The number of objects that you need to annotate depends on the specific goals of your analysis, the complexity of the objects, the available resources, and the requirements of the machine learning model. We generally recommend annotating a representative sample of objects from each class. It ensures that the annotated dataset captures the diversity and variability of the objects you’re interested in analyzing.
  5. Iterative Approach: Start with a reasonable number of annotated objects, and if necessary, incrementally increase the number based on model performance and analysis needs.
  6. Image variability: Annotating various images ensures that your analysis model is robust and can perform well on new, unseen data. Models trained on a narrow dataset might not generalize well to different contexts.
  7. Object Variability: If the objects you’re annotating are highly variable in shape, size, or appearance, you’ll need more annotated instances to account for this variability.
  8. Noise and Artifacts: In bioimages, noise, artifacts, and staining irregularities are often present in the background. By annotating background regions that include such noise, you help the model learn to differentiate between actual objects and artifacts.
  9. Precision and Detail: Be precise in outlining boundaries. Pay attention to edges, corners, and intricate structures. Detailed annotations lead to accurate analysis and model training.
  10. Consistency: Maintain consistent annotation styles throughout the dataset. Use standardized shapes and colors. It ensures uniformity and reduces confusion.
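A quick sanity check for points 4–7 is to count annotations per label before training and flag classes that are underrepresented. A minimal sketch (the threshold of 50 is an arbitrary illustration, not a general recommendation):

```python
from collections import Counter

def underrepresented(labels, minimum=50):
    """Return label classes with fewer annotated objects than `minimum`.

    `labels` contains one entry per annotated object. The default
    threshold of 50 is an arbitrary illustration only.
    """
    counts = Counter(labels)
    return sorted(name for name, n in counts.items() if n < minimum)

# Invented per-object label list for illustration
labels = ["cell"] * 120 + ["nucleus"] * 95 + ["mitosis"] * 12
print(underrepresented(labels))  # ['mitosis']
```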

We’re here to support you in AI application training, even at the earliest stages of preparation.

How does annotation quality affect the performance of computer vision applications?

The annotation quality significantly impacts the performance of computer vision applications. High-quality annotations are crucial for training robust image analysis applications and achieving high-quality results. Here are some ways in which annotation quality affects the performance of computer vision applications:

  1. Generalization: Computer Vision applications should be able to generalize or adapt well to new, unseen data. High-quality annotations on a representative selection of training data allow the model to learn from various features and patterns, which it can subsequently use in different new scenarios. 
  2. Model Accuracy: The accuracy and performance of a computer vision model heavily depend on the quality of the annotated training data. If the annotations are accurate and comprehensive, the model is more likely to learn meaningful patterns and make accurate predictions on new, unseen data. On the other hand, missing details or poorly annotated images can lead to biased model predictions and errors. 
  3. Object Localization: The quality of annotations directly affects the model’s ability to accurately locate and identify objects in tasks like object segmentation or detection. Precise and accurate annotations for object boundaries are essential for achieving reliable results. 
  4. Error Analysis and Debugging: During the development and re-evaluation of computer vision applications, the quality of annotations is crucial for error analysis and debugging. Incorrect annotations can lead to misleading conclusions and results of the model’s performance, making it challenging to identify the root causes of issues.
  5. Resource Efficiency: High-quality annotations reduce the need for modifying or re-annotating images, thereby saving time and resources. Models trained on accurate annotations are more likely to achieve desired performance levels. 
  6. Ethical Considerations: In life sciences, especially in applications with social or ethical implications (medical diagnosis, pre-clinical and clinical research, autonomous vehicles, and so on) annotation quality becomes even more critical. Misleading annotations could lead to severe consequences, making it essential to ensure high-quality annotations.

To ensure good annotation quality, it is essential to have skilled annotators, well-defined annotation guidelines, and quality control measures in place. Regular feedback sessions, validation, and iterative improvement of annotations are vital to enhance the performance of computer vision applications.
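One common quality-control measure is to have two annotators mark the same region and compare their results, for example with the Dice coefficient (1.0 means perfect agreement). A minimal sketch on flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks of equal length."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as full agreement
    return 2 * intersection / total

# Two annotators' marks for the same six pixels (invented example)
annotator_1 = [1, 1, 1, 0, 0, 0]
annotator_2 = [1, 1, 0, 0, 0, 1]
print(round(dice(annotator_1, annotator_2), 2))  # 0.67
```

Low agreement on a shared region is a signal to revisit the data dictionary and clarify the annotation guidelines before annotating further.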

How to use Regions-of-Interest (ROIs) for a more precise pathological evaluation 

The advanced ROI features of modern AI software allow you to set a focal point on a particular area within your slide. Typically, pathologists select regions of interest to be analyzed or included in application training. This area is usually considered meaningful for diagnostics and decision-making. An ROI can be chosen based on different criteria, like the presence of a subset of nuclei, cells, or specific patterns (Niazi et al., 2019).

Some AI applications are capable of automatically identifying ROIs based on the presence of particular morphological features in tissue structures. For instance, AI applications can be trained to distinguish between cancer regions and non-cancer regions, proliferation regions, or metastatic regions on digital pathology images (Niazi et al. 2019; Jiang et al. 2020).   

The convenient ROI feature in IKOSA allows you to easily select regions of interest in your digital slides. You can manually delineate your ROIs using a variety of shapes available.

With a wide range of drawing tools available for selecting regions of interest, you can easily highlight specific areas on your images. This allows you to perform analysis or train an AI application without using entire images, thereby saving storage space.

Background Recognition and Annotation in Bioimage Analysis

Background recognition is a fundamental aspect of image analysis that involves identifying and distinguishing the background elements within an image from the foreground objects or subjects of interest. This process is pivotal in enhancing the accuracy and precision of image analysis tasks. 

In the specialized domains of bioimage analysis and pathology, background recognition is instrumental for isolating signals from noise. It enables image analysis algorithms to discern the subtlest biological structures, such as cells, tissues, or cellular anomalies, against staining artifacts, slide imperfections, or other extraneous elements. In essence, it paves the way for the precise identification of disease markers, tissue characteristics, and cellular interactions.

In IKOSA, the trained model can autonomously discern background from foreground, since patterns within an ROI are treated as background when they are not annotated. Even without distinct background images, the model gains insight into the background's appearance from the spaces between annotations.

However, including additional images showcasing only the background aids the trained model in learning the background’s characteristics in more detail, even in the absence of labeled foreground objects. It also assists the application in understanding the patterns forming in the background around foreground objects.

Background of the PAP smear slide
You can choose entire images or regions of interest without any visible features for the app training to educate the model about the background.
Background recognition feature for the AI app training on IKOSA

It’s essential to exercise caution when selecting images for background-only training. If these images or ROIs contain foreground objects, the deployed application may struggle to accurately distinguish between background and foreground areas later on.

Preparing your image data for AI application training in IKOSA – your shortcut to faster results!

Our authors:


Benjamin Obexer

Lead content writer, life science professional, and simply a passionate person about technology in healthcare


Elisa Opriessnig

Content writer focused on the technological advancements in healthcare such as digital health literacy and telemedicine.


Fanny Dobrenova

Health communications and marketing expert dedicated to delivering the latest topics in life science technology to healthcare professionals.


Abels, E., Pantanowitz, L., Aeffner, F., Zarella, M. D., van der Laak, J., Bui, M. M., … & Kozlowski, C. (2019). Computational pathology definitions, best practices, and recommendations for regulatory guidance: a white paper from the Digital Pathology Association. The Journal of Pathology, 249(3), 286-294.

Bokhorst, J. M., Pinckaers, H., van Zwam, P., Nagtegaal, I., van der Laak, J., & Ciompi, F. (2019). Learning from sparsely annotated data for semantic segmentation in histopathology images. Proceedings of Machine Learning Research, 102, 84–91.

Dudgeon, S. N., Wen, S., Hanna, M. G., Gupta, R., Amgad, M., Sheth, M., … & Gallas, B. D. (2021). A pathologist-annotated dataset for validating artificial intelligence: a project description and pilot study. Journal of Pathology Informatics, 12(1), 45.

Jiang, Y., Yang, M., Wang, S., Li, X., & Sun, Y. (2020). Emerging role of deep learning‐based artificial intelligence in tumor pathology. Cancer communications, 40(4), 154-166.

Jing, L., Chen, Y., & Tian, Y. (2020). Coarse-to-Fine Semantic Segmentation From Image-Level Labels. IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, 29, 225–236.

Marzahl, C., Bertram, C. A., Aubreville, M., Petrick, A., Weiler, K., Gläsel, A. C., … & Maier, A. (2020). Are fast labeling methods reliable? A case study of computer-aided expert annotations on microscopy slides. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part I 23 (pp. 24-32). Springer International Publishing.

Niazi, M. K. K., Parwani, A. V., & Gurcan, M. N. (2019). Digital pathology and artificial intelligence. The Lancet Oncology, 20(5), e253-e261.

Nofallah, S., Mokhtari, M., Wu, W., et al. (2022a). Segmenting Skin Biopsy Images with Coarse and Sparse Annotations using U-Net. Journal of Digital Imaging, 35, 1238-1249.

Wahab, N., Miligy, I. M., Dodd, K., Sahota, H., Toss, M., Lu, W., … & Minhas, F. (2022). Semantic annotation for computational pathology: Multidisciplinary experience and best practice recommendations. The Journal of Pathology: Clinical Research, 8(2), 116-128.

