Train your own deep learning algorithms
Use ready-made algorithms or train your own deep learning algorithms, no data science or programming skills required.
Why automate image data analysis?
We provide a portfolio of ready-to-use algorithms for automated image data analysis and the opportunity to train your own deep learning algorithms with the support of the new IKOSA AI service.
Develop multiple algorithms for various use cases or retrain the most suitable existing algorithm for similar research questions. Expand your service portfolio and serve a larger target audience.
You don’t need to be an AI expert or a data scientist. Stay up-to-date with technological innovations and gain new knowledge with quick and easy-to-use tutorials and concise educational materials.
Avoid the incidental errors and biased results that creep into manual analysis after long hours of work. Get quantitative, replicable, and accurate data by training the algorithm with your team.
These days, the volume of data researchers are confronted with is growing exponentially. Automating routine analysis spares your team long hours of manual work, so you can invest the time you save in more value-adding tasks.
Annotate images, assign labels to annotations and start training your new algorithm.
Why train your own algorithms with IKOSA?
We are constantly improving our software, making it more flexible and robust to meet the diverse needs of our customers.
Deep learning vs Machine learning
Machine Learning refers to the general ability of software systems to learn decision making from data without being explicitly programmed.
“Conventional” Machine Learning algorithms rely on a set of image feature parameters, e.g. texture, area, intensities, which need to be defined beforehand. The model is then trained on the basis of these features and can make predictions about new datasets.
On the other hand, Deep Learning includes hierarchical feature extraction as a part of the learning process (“end-to-end”). This essentially removes the feature selection bias and reduces the need for human intervention.
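The difference between the two paradigms can be sketched in a few lines. The snippet below illustrates only the "conventional" side, where features such as mean intensity and object area are hand-defined before any training; in a deep learning pipeline this function would be learned from the pixels themselves. All names and values are illustrative, not part of the IKOSA API.

```python
# Illustrative sketch of hand-crafted feature extraction, the step that
# deep learning replaces with learned, hierarchical features.

def handcrafted_features(image):
    """Conventional ML: features like mean intensity and area are defined up front."""
    pixels = [p for row in image for p in row]
    mean_intensity = sum(pixels) / len(pixels)
    area = sum(1 for p in pixels if p > 0.5)  # pixels above a fixed, human-chosen threshold
    return [mean_intensity, area]

# A tiny 2x2 grayscale "image" with values in [0, 1]
image = [[0.9, 0.1],
         [0.8, 0.2]]

features = handcrafted_features(image)
print(features)
```

A classical model would be trained on vectors like `features`; any property the designer did not encode (say, texture) is invisible to it, which is exactly the selection bias that end-to-end learning removes.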
We provide the entire platform on-premise, so there is no need to compromise data confidentiality.
No data science and programming skills
Existing lab staff can immediately use the software without undergoing special technical training. The interface is intuitive and guides even novice users to apply AI algorithms to their data successfully.
Integration and compatibility
IKOSA is compatible with standard input formats (JP(E)G, PNG, BMP, single- and multipage TIF(F), VMIC, GTIF, SVS, NDPI, SCN, STK, QPTIFF) and output formats (JPEG/TIFF for visualizations and CSV/tabular formats for quantitative data) to ensure smooth integration with existing infrastructure and software packages, e.g. for purposes of secondary data analysis.
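Because quantitative results come out as CSV/tabular data, secondary analysis needs nothing more than a standard CSV reader. A minimal sketch, assuming a hypothetical export schema (the column names `object_id`, `area_um2`, `mean_intensity` are made up for illustration, not the actual IKOSA output):

```python
import csv
import io

# Hypothetical CSV export; in practice you would open the downloaded file instead.
exported = io.StringIO(
    "object_id,area_um2,mean_intensity\n"
    "1,120.5,0.82\n"
    "2,98.0,0.64\n"
)

rows = list(csv.DictReader(exported))
mean_area = sum(float(r["area_um2"]) for r in rows) / len(rows)
print(f"{len(rows)} objects, mean area {mean_area:.2f} um^2")
```

The same file loads directly into spreadsheet software or statistics packages for downstream analysis.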
Free subscription and guided trial
We provide an unlimited free subscription to test the platform functionality. Also, we offer a guided trial period of up to 4 weeks, during which we provide workshops and consultations on image annotation and answer questions related to the product, data and AI training processes (supported by videos, demos and tutorials) in the context of the customer’s specific analytical challenges.
IKOSA provides a standard REST API that can be integrated in both commercial and open-source solutions, e.g. Fiji/ImageJ, CellProfiler, etc. On request, we can provide software development kits (SDKs) for all standard languages, if customers want to implement the changes themselves. Alternatively, we can also develop tailor-made plug-ins for them in a bespoke software development project.
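Integrating a REST API like this typically amounts to sending authenticated JSON requests. The sketch below only builds such a request with the Python standard library; the URL, route, and field names are placeholders invented for illustration, so consult the actual IKOSA API documentation for the real endpoints and authentication scheme.

```python
import json
from urllib import request

# Placeholder endpoint, not the real IKOSA host or route.
API_URL = "https://example.org/api/v1/analyses"

def build_analysis_request(image_id: str, algorithm_id: str, token: str) -> request.Request:
    """Prepare a JSON POST that would launch an analysis run (all fields are assumptions)."""
    body = json.dumps({"image": image_id, "algorithm": algorithm_id}).encode()
    return request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_analysis_request("img-42", "alg-7", "TOKEN")
print(req.get_method(), req.full_url)
```

Sending the request (e.g. via `urllib.request.urlopen(req)`) is the same pattern whether the caller is a custom script, a Fiji/ImageJ plug-in, or a CellProfiler module.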
IKOSA AI workflow
Algorithm training workflow with IKOSA AI
For more information on algorithm training, interpretability of results and validation, please refer to our FAQ.
If you are interested in specific examples of deep learning algorithm training, visit our case study page.
All important whys and hows of algorithm training
Can we use algorithms already available on the IKOSA Platform?
All available algorithms can be used in their current state if they already work sufficiently well on your data. We assist you in evaluating them and, where needed, fine-tuning them to get the most out of them.
Can we retrain the existing algorithms?
Retraining of existing algorithms is possible even with a limited set of data available. We help you every step of the way.
What data has been used for training the existing algorithms?
Each algorithm has been trained on images and labels provided by domain experts for specific use cases. Technical specifications of the input data and imaging modalities are provided in the documentation of each algorithm.
Algorithm update regulations
Once trained, the algorithms do not automatically update themselves using new data. You remain fully in charge of when to update the algorithm with new data. When new data is added to the training, a new version is created. We provide algorithm versioning that allows you to link specific versions in your quality management systems and ensure the required auditability of the tools used in your studies.
Can we export ready-made algorithms on the platform to other locations?
Not at the moment. In the future, exporting to the most prominent machine learning frameworks will be possible.
Can we import our own algorithms?
Not yet for retraining on the IKOSA platform. However, if you have a trained algorithm and want to use it on the platform, this is possible with a little help from our team.
If the algorithm is still trained by a person, does that mean we are still dealing with human subjectivity?
The algorithm relies on data labelled by domain experts. If you can provide collective knowledge, i.e. labels from two or more experts, the algorithm eventually adopts the consensus among them and becomes more objective.
Who has access to our trained algorithms if we use the online version of the software?
Only your colleagues within your own organization have access to these algorithms. Algorithms cannot be shared between different organizations.
What is the approximate time needed to train an algorithm?
This depends on the task and the data quality, but after 20-30 minutes you may get a first working algorithm. By applying corrections and retraining, greater robustness can be achieved within a day or two.
How much data do we need?
This depends heavily on the task and the data quality. At least 5-10 images are recommended to get started.
What if we don’t have enough data?
You can customize an existing algorithm that has already been run on similar image data. This allows you to work even with fewer than 10 images in your dataset.
How can we understand which parameters matter in the training?
The algorithm learns the common features of all objects you have provided during training. It figures out on its own which of color, shape, or texture is most relevant to the task at hand. The more diverse and realistic the data you provide, the more robust the results you can expect.
How can we understand what needs to be retrained?
When the algorithm makes mistakes, this is often because certain visual items were missing from the training data. Try to find objects that were not recognized properly and add them to the next training iteration.
How can we validate the outcomes?
The app documentation specifies which data and imaging modality the algorithm has been trained on. Compare it to an established analytical method on a similar dataset of your choice. That may be manual analysis, a rule-based system, or some semi-automated method. Having a direct comparison will help you understand the benefits and limitations of AI.
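Such a comparison can be as simple as counting objects per image with both methods and measuring the disagreement. A minimal sketch with made-up counts (not real study data):

```python
# Illustrative validation: compare per-image object counts from the algorithm
# against a manual reference count on the same images (all numbers invented).
manual = {"img1": 52, "img2": 40, "img3": 61}
auto   = {"img1": 50, "img2": 43, "img3": 60}

errors = [abs(auto[k] - manual[k]) for k in manual]
mae = sum(errors) / len(errors)
rel = mae / (sum(manual.values()) / len(manual))
print(f"mean absolute error: {mae:.2f} objects ({rel:.1%} of mean manual count)")
```

Whether a given error level is acceptable depends on your study; the point is that the comparison is quantitative and repeatable.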
How can we interpret the results?
Each result parameter is described in detail in the technical documentation of an algorithm. In addition to the quantitative outputs, qualitative outputs, i.e. visualizations, are provided to allow verification and plausibility checks.
The visualizations support a straightforward interpretation of the outcomes, which include both correct and incorrect detections made by the algorithm.
Within each algorithm, different processing steps produce intermediate outputs. First, the AI algorithm generates predictions on an image, assigning each detected object a confidence score. Some objects may be filtered out of the final result by post-processing rules, e.g. because they are too small or their confidence is too low (less than 50%).
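The filtering step described above can be sketched in a few lines. The thresholds and detection records below are illustrative values, not the platform's actual post-processing parameters:

```python
# Sketch of confidence/size post-processing: detections below a confidence
# threshold or a minimum size are dropped from the final result.
detections = [
    {"id": 1, "confidence": 0.91, "area_px": 420},
    {"id": 2, "confidence": 0.43, "area_px": 300},  # below confidence threshold
    {"id": 3, "confidence": 0.77, "area_px": 12},   # below minimum size
]

MIN_CONFIDENCE = 0.50   # illustrative 50% cut-off
MIN_AREA_PX = 50        # illustrative size filter

final = [d for d in detections
         if d["confidence"] >= MIN_CONFIDENCE and d["area_px"] >= MIN_AREA_PX]
print([d["id"] for d in final])  # only detection 1 survives both filters
```

Inspecting which detections were removed, and by which rule, is precisely what the interpretable visualizations make visible.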
You can select particular data points, run the algorithms on them, and then view the results with interpretable visualizations enabled. This allows you to validate each step along the result creation process (the “decision making”) individually.
To give you input for potential improvements, our AI-backed software displays both “positive” and “negative” outputs and provides a detailed description of the post-processing parameters that led to the prediction.