
AI of the Tiger

Instinct Written in Code


AI has become one of the most overused terms in laboratory medicine today.

In pathology, immunohistochemistry, and cytology, we hear it almost everywhere — in microscopes, scanners, stainers, software, and sometimes even in places where it has very little to do with actual “intelligence.”

And yet… AI can truly be a very effective tool.

But only under one condition: if we understand what it really is.

Not a Thinking Mind — but Instinct.

Artificial intelligence in pathology is not a thinking brain — and it should not be one. It does not analyze a patient’s medical history, it does not understand the biology of disease, and it has no doubts. It does not ask questions and it does not take responsibility. Its mode of operation is fundamentally different.

AI works like an instinct. It is fast, alert, and relentlessly consistent. It reacts to patterns not because it knows what they mean, but because it has seen them thousands of times before. Its so-called “intuition” is, in reality, the result of training on data — not understanding.

In this sense, it resembles a tiger. It sees, reacts, and acts immediately, always according to the same rules. It does not get tired, it does not lose focus, and it does not shift its criteria of evaluation. Its strength lies in precision and repeatability, but its limitation is the lack of context.

And that is precisely why AI does not replace thinking. It can notice, point out, and organize what is visible, but the diagnostic decision always belongs to a human. In pathology, instinct is an enormous asset — as long as it remains a tool, not a judge.

2016 — before we called it AI

My first real encounter with algorithmic image analysis took place in 2016, during a stay in Stavanger. It was there that I had the opportunity to meet Dr. Ivar Skåland, who showed me a solution that enabled automated analysis of HER2-stained slides.

It was not futuristic software with a bold Artificial Intelligence label.

It was ImageJ — or one of its distributions, such as FIJI — a classic, open-source image analysis tool.

The system was configured to recognize membrane staining intensity, define threshold levels, and automatically analyze scanned slides, with the option for rapid verification by a pathologist. Everything was based on clearly defined thresholds, logical rules, and direct comparison with manual assessment.

No neural networks.

No “learning” in the background. No black box.

And yet — it worked.

What’s more, configuring such a system was remarkably simple.
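To show just how simple such rule-based scoring is, here is a minimal Python sketch of fixed-threshold intensity classification. The threshold values, function names, and sample measurements are illustrative assumptions, not the actual Stavanger configuration (which was built in ImageJ/FIJI):

```python
# Assumed cutoffs on mean membrane staining intensity per cell;
# in a real system these would be set and validated against
# manual pathologist assessment, exactly as described above.
THRESHOLDS = {"0": 0.15, "1+": 0.30, "2+": 0.50}

def her2_category(membrane_intensity: float) -> str:
    """Map one cell's membrane intensity to a HER2-like category
    using fixed, human-defined thresholds -- no learning involved."""
    if membrane_intensity < THRESHOLDS["0"]:
        return "0"
    if membrane_intensity < THRESHOLDS["1+"]:
        return "1+"
    if membrane_intensity < THRESHOLDS["2+"]:
        return "2+"
    return "3+"

def score_slide(membrane_intensities: list[float]) -> str:
    """Assign the slide the modal category of its measured cells;
    final verification remains with the pathologist."""
    cats = [her2_category(v) for v in membrane_intensities]
    return max(set(cats), key=cats.count)

print(score_slide([0.55, 0.62, 0.48, 0.71]))  # -> 3+
```

Everything the logic does is inspectable: clearly defined thresholds, logical rules, and nothing hidden in a trained model.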

Was that AI?

From today’s perspective, many would call it AI.

Technically, it was simply an image analysis algorithm.

Skaland I, Øvestad I, Janssen E, Klos J, Kjellevold KH, Helliesen T, Baak J. Comparing subjective and digital image analysis HER2/neu expression scores with conventional and modified FISH scores in breast cancer. J Clin Pathol. 2008;61:68-71. doi:10.1136/jcp.2007.046763.

Evolution, not revolution

Today’s AI systems in pathology operate in a very similar way. What has changed is the scale of data, the speed of computation, the ergonomics of the interfaces, the possibilities for clinical validation, and the level of integration with laboratory workflows.

The foundation, however, remains the same: pattern recognition in images. This applies equally to histology and cytology, as well as to radiology.

Where AI truly helps

AI in pathology, immunohistochemistry, and cytology truly does help — even though it does not verify diagnoses. What we can say with certainty is that it supports decision-making.

In pathology, it proves particularly effective in the analysis of IHC markers such as Ki-67, ER, PR, PD-L1, and HER2,

ER App, VisioPharm (CE IVD)
PD-L1 App, NSCLC, VisioPharm (CE IVD)

FISH evaluation,

HER2-FISH, Breast Cancer, VisioPharm

quality control (which I wrote about recently),

Qualitopix, VisioPharm (CE IVD)

metastasis detection,

Metastasis Detection, VisioPharm (CE IVD)

screening in gynecologic cytology and thyroid cytology,

LBC AI-Assisted Diagnostic System, KFBIO (CE IVD)
Thyroid FNA AI-Assisted Diagnostic System, KFBIO (CE IVD)

in the histological assessment of gastrointestinal biopsies,

Gastric Biopsy AI-Assisted Diagnostic System, KFBIO (CE IVD)

or colorectal biopsies.

Colorectal Biopsy AI-Assisted Diagnostic System, KFBIO (CE IVD)

The greatest value of algorithms lies in slide triage, scoring standardization, the ability to work with large case volumes, and in reducing fatigue and subjectivity. They function as a second pair of eyes — one that does not get tired and does not lose focus.
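As a concrete sketch of what slide triage and scoring standardization mean in practice, the Python snippet below computes a Ki-67 labelling index (the standard fraction of positively stained nuclei) and orders a worklist so the highest-index cases surface first. The case data and function names are illustrative assumptions, not any vendor's implementation:

```python
def ki67_index(positive_nuclei: int, total_nuclei: int) -> float:
    """Ki-67 labelling index: percentage of positively stained nuclei."""
    if total_nuclei == 0:
        raise ValueError("no nuclei counted")
    return 100.0 * positive_nuclei / total_nuclei

def triage(cases: list[tuple[str, int, int]]) -> list[str]:
    """Order case IDs by descending Ki-67 index.

    The algorithm only sorts the queue; every slide is still read,
    and every score verified, by a pathologist.
    """
    scored = [(case_id, ki67_index(pos, tot)) for case_id, pos, tot in cases]
    scored.sort(key=lambda c: -c[1])
    return [case_id for case_id, _ in scored]

# (case_id, positive nuclei, total nuclei) -- hypothetical counts
cases = [("A-101", 42, 600), ("A-102", 210, 700), ("A-103", 15, 500)]
print(triage(cases))  # -> ['A-102', 'A-101', 'A-103']
```

The counting is where the machine is tireless and consistent; the decision about what a 30% index means for this patient is not in the code at all.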

VisioPharm and KFBIO — tools, not oracles

Modern platforms such as VisioPharm, as well as solutions offered by KFBIO, do not attempt to “think on behalf of the diagnostician.” They enable the development and validation of algorithms, support whole slide imaging analysis, integrate seamlessly with digital pathology, and genuinely reduce turnaround time.

In times of chronic staff shortages, they have become a lifeline for many laboratories facing a growing backlog of unresolved cases. These are not black boxes. They are working tools — like a microscope, a microtome blade, or a stainer.

AI and LIS systems — where the real boundary lies

There are currently no LIS systems that make diagnostic decisions based on AI. This is not a lack of technological ambition, but a deliberate choice driven by the responsibility inherent in medical diagnostics.

LIS systems organize processes, automate workflows, and ensure data consistency. AI may support selected parts of these systems, but it does not conduct diagnostics as a whole. The boundary between support and decision-making must remain clearly defined.

Does AI threaten the work of pathologists?

This is one of the most frequently asked questions today — and one of the most emotionally charged. Every time new “AI-assisted” software appears in the laboratory, sooner or later someone says, half-jokingly, half-seriously: “So soon we won’t be needed anymore.”

But that statement says far more about our fears than about the technology itself.

AI did not enter pathology as an answer to the question, “How do we replace the physician?” It appeared because the number of slides is growing faster than the number of people capable of evaluating them, and because fatigue, time pressure, and subjectivity are real problems the field has struggled with for years. Algorithms did not slip quietly into laboratories through the back door. They were invited in to solve very concrete, practical problems.

If AI were truly to threaten the work of a pathologist, it would first have to take over something far more difficult than counting nuclei or marking cell membranes. It would have to take over responsibility. And responsibility does not mean highlighting areas of high Ki-67 or generating a PD-L1 heatmap. It means understanding clinical context, reconciling conflicting information, making decisions under conditions of incomplete data, and bearing the consequences of those decisions.

AI does not do this. And it will not for a long time.

An algorithm does not know that a slide is “difficult.” It does not sense that something feels off. It does not carry the experience gained from hundreds of clinico-pathological discussions, nor the intuition that prompts additional staining when an image formally meets criteria yet still raises concern. AI does not understand the meaning of responsibility — it understands probability.

This is why, in practice, AI does not replace the pathologist but changes the nature of the work. The focus shifts from repetitive tasks to interpretation, from manual counting to verification, from hours spent staring into a microscope to making decisions that truly require human experience. Paradoxically, the better the algorithms become, the more visible the human role as the final arbiter actually is.

In laboratories that have implemented digital pathology and algorithmic support, there is no observable “reduction of diagnosticians.” What we see instead is the ability to handle more cases, improved standardization, reduced inter-observer variability, and genuine relief for overburdened teams. AI does not take away jobs. It takes away overload.

AI is a support, not a pathologist.

Of course, the competency profile is also changing. The diagnostician of the future will no longer be just a “slide reader.” They will be someone who understands how an algorithm works, what its limitations are, where it may fail, and how to use it responsibly. This is not a devaluation of the profession. It is its natural evolution — similar to what radiology underwent with the digitization of imaging, or molecular biology with the automation of PCR.

The greatest threat, therefore, is not AI taking our jobs. The greatest threat is the belief that AI will “do the thinking for us” and relieve us of responsibility. Poorly used algorithms can create a false sense of certainty, and uncritical reliance on system outputs can be more dangerous than their absence. But this is not a problem of technology. It is a problem of work culture and responsibility.

AI in pathology is not a predator hunting for jobs. It is a highly precise tool that — like any blade — can be used well or poorly. In the hands of an experienced diagnostician, it becomes an extension of vision and concentration. In the hands of someone looking for shortcuts, it can become a source of error.

The “tiger” does not take control of the jungle. It moves through it quickly, efficiently, and without hesitation. But it is the human who decides where it is led.

And that is exactly the role AI should play in pathology.

AI of the Tiger

AI in pathology does not have to be a thinking brain. And in truth, it never will be. Its role is not to understand disease, to engage in diagnostic dialogue, or to make decisions under uncertainty. That is not its nature, and it is not what it was created for.

AI in pathology is meant to function as an instinct — fast, keenly attentive to detail, and flawless in repetitive tasks. It is meant to notice what is easy to miss when fatigue sets in, to point out areas that require attention, to count what must be counted, and to bring order to the data chaos that modern laboratories face every day. It does so without emotion, without weariness, and without losing concentration.

In this sense, AI resembles a tiger. Not because it “knows” what it is doing, but because it reacts instantly, precisely, and always in the same way. Its strength is not reflection, but consistency. It does not interpret meaning — it recognizes patterns. And that is exactly where it excels.

The diagnostic decision always belongs to the human. It is the pathologist who connects the microscopic image with clinical context, additional test results, and experience that cannot be encoded into training data. It is the human who takes responsibility for the diagnosis, its therapeutic consequences, and the patient on the other side of the slide.

That is why well-used AI is not a threat. It is support from the shadows. It is a tool that strengthens what is most valuable in the work of a pathologist, rather than attempting to replace it. It allows the focus to shift from mechanics to interpretation, from counting to thinking, from repetition to decision-making.

In this sense, AI is one of the most valuable tools that pathology, immunohistochemistry, and cytology have gained in recent years — not because it is “intelligent,” but because it knows precisely what it should not be.

AI in pathology is instinct — the decision always belongs to the human.


