It's three in the morning. Pediatric intensive care unit. A four-month-old girl, Sara, lies connected to a ventilator. The monitor shows unstable parameters. On the duty doctor's desk — a chest X-ray showing a shadow. It might be a tumor. It might not.
Doctor Kowalska has 22 years of experience. She's seen thousands of such images. Her instinct says: this doesn't look good. But she's not sure. It could be an artifact, a vascular shadow, anything. She requests a consultation. Doctor Wiśniewski, a radiologist with 15 years of experience, looks at the X-ray and says: let's observe. He sees no reason for intervention.
Two experienced doctors. Two conflicting opinions. One child.
And then someone activates the AI.
14 million images versus 50 thousand memories
A next-generation diagnostic algorithm, trained on 14 million X-ray images paired with pathology reports, compiled from the WHO's public database and two hundred academic hospitals across three continents. It's not patented — in a world without IP, the medical knowledge base is a public good. Every hospital in the world has access to it.
AI analyzes Sara's image. The answer comes in 1.3 seconds:
Probability of malignant neoplasm: 23.4% (95% CI: 18.1–29.7%). Probability of vascular artifact: 61.2%. Probability of benign lesion: 15.4%. Recommendation: observation with imaging follow-up in 6 weeks. Surgery is not recommended at the current level of diagnostic certainty.
Doctor Kowalska looks at the screen. Then she looks at Sara's family, sitting in the waiting room. The father hasn't slept in two days. The mother is crying.
AI gave an answer. But an answer is not a decision.
What is a machine's "experience"?
When we say a surgeon has "30 years of experience," we know what that means. Thousands of patients. Thousands of decisions made under pressure. Moments when you had to cut despite uncertainty. Moments when the surgeon's hand felt — literally, physically — that the tissue under the scalpel was different than it should be. This is knowledge that lives in the body, not in data.
Doctor Kowalska remembers a case from seven years ago. A girl of similar age. A shadow on the X-ray. The radiologist said: let's observe. She listened. Three months later the child had metastases. The girl survived — barely — after aggressive chemotherapy. Since then, Doctor Kowalska doesn't trust shadows on images. Her "instinct" isn't mysticism — it's trauma forged into caution.
But AI also has "experience." Different, but real. The model has seen 14 million X-ray images. Fourteen million. Each with a detailed pathological description, with biopsy results, with clinical follow-up. AI doesn't remember one dramatic case — it remembers the statistical distribution of all cases that have ever been documented.
And here is the fundamental difference:
- The surgeon thinks in anecdotes. They remember cases that moved them. Dramatic, unexpected, painful cases. Human memory is built on emotions — we remember what hurt us, not what was routine.
- AI thinks in distributions. It has no emotions. No "case from seven years ago" that colors every subsequent diagnosis. It has millions of data points showing that a shadow with certain parameters is malignant in 23.4% of cases and an artifact in 61.2%.
Which kind of "experience" is better? That depends on what we mean by "better."
Precision versus intuition
The data is unambiguous: in diagnostic imaging, AI is statistically more accurate than humans. A study published in Nature Medicine in 2024 found that diagnostic algorithms achieve 94.5% sensitivity in detecting lung cancer on CT scans, while the best radiologists achieve 88.2%. AI makes fewer false-negative diagnoses — it misses cancer less often.
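What does that difference in sensitivity actually mean? Sensitivity is simply the share of real cancers that the reader of the image catches. A minimal sketch of the arithmetic, with patient counts invented solely to reproduce the two percentages above; they are not taken from the cited study.

```python
# Sensitivity = true positives / (true positives + false negatives),
# i.e. the share of real cancers that get caught.
# The counts below are invented purely to illustrate the arithmetic.

def sensitivity(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

# Imagine 1,000 patients who actually have lung cancer:
ai_missed = 55        # hypothetical false negatives for the algorithm
human_missed = 118    # hypothetical false negatives for the radiologists

print(f"AI sensitivity:    {sensitivity(1000 - ai_missed, ai_missed):.1%}")        # 94.5%
print(f"human sensitivity: {sensitivity(1000 - human_missed, human_missed):.1%}")  # 88.2%
```

In that hypothetical cohort, the gap between 94.5% and 88.2% is 63 children whose cancer a human reader would have missed and the algorithm would not.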
But AI also makes mistakes. Different from human ones, but equally real. The model optimizes for the average — it performs brilliantly on "typical" cases but can fail catastrophically on atypical, rare cases that weren't well represented in the training data. AI is an expert on what has already happened. The surgeon can be an expert on what no one has ever seen.
And there's something else that can't be quantified: moments when data isn't enough. When the surgeon opens the chest cavity and sees something no scanner showed. When you have to decide in a split second: cut deeper or pull back. When blood starts seeping from where it shouldn't. In those moments, no algorithm helps. What helps is a hand that has done this thousands of times.
AI is better at reading images. Humans are better at reading situations. The problem arises when we try to force one to be the other.
A scenario we must think through
Let's return to Sara. Doctor Kowalska has two opinions and one question before her:
AI says: don't operate. Probability of cancer — 23%. Risk of surgery on a four-month-old child — complications in 35% of cases, including risk of death on the table: 8%. The math is brutal: operating now means accepting an 8% chance of losing Sara on the table in order to remove a shadow that is malignant in fewer than one case in four. The probability calculation says: wait.
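It's worth spelling that calculation out. The sketch below uses the 23.4% and 8% from the scenario; the mortality of a tumor that is only treated months later is an assumption invented for illustration, not clinical data.

```python
# Illustrative only. The mortality of a tumor caught late (p_death_if_late)
# is an assumed placeholder, not a clinical figure.

p_cancer = 0.234         # AI's estimate of malignancy (from the scenario)
p_death_surgery = 0.08   # risk of death on the table (from the scenario)
p_death_if_late = 0.20   # ASSUMED risk if a real tumor is only treated later

# Operate now: the surgical risk is paid regardless of whether the shadow is cancer.
risk_operate = p_death_surgery

# Wait and re-image: the late-treatment risk is paid only if it really is cancer.
risk_wait = p_cancer * p_death_if_late

print(f"operate now: {risk_operate:.1%}")  # 8.0%
print(f"wait:        {risk_wait:.1%}")     # 4.7%
```

Under those assumptions, waiting roughly halves the risk, which is the whole case for the algorithm's recommendation. Raise the assumed late-treatment mortality to 40% and the two branches come out nearly even, which is the whole case for Doctor Kowalska's unease.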
Doctor Kowalska's instinct says: operate. Because she remembers that girl. Because that shadow doesn't look like an artifact to her. Because three in the morning in a children's hospital is not a place for probability calculations — it's a place where you look parents in the eyes and take responsibility.
Let's consider four scenarios:
Scenario A: They listened to AI, AI was right
They didn't operate. Six weeks later, a follow-up X-ray. The shadow is gone. It was an artifact. Sara is healthy. Nobody writes about it, because nothing happened.
Scenario B: They listened to AI, AI was wrong
They didn't operate. Three months later, the tumor is three times larger. Metastases to the lymph nodes. Now surgery is harder, longer, riskier. Sara survives — but a year of chemotherapy takes away her first two years of life.
Who is responsible? The parents signed consent for observation. The doctor presented AI's recommendation. AI gave 23% — below the surgical threshold. Everyone acted "according to procedure." But the child suffers. Who does the family sue?
Scenario C: They ignored AI, the surgeon was right
They operated despite the recommendation. The tumor turned out to be malignant. They removed it entirely. Sara is healthy. Doctor Kowalska is a hero. AI is discredited — even though it gave a correct probability (23% isn't 0%).
Scenario D: They ignored AI, the surgery failed
They operated despite the recommendation. Complication — hemorrhage. Sara dies on the operating table. The tumor turned out to be benign.
AI said: don't operate. The surgeon said: operate. The child is dead. The medical review board finds: "the doctor made a decision against the recommendation of the diagnostic system." The family sues the hospital. The hospital responds: "the system was advisory in nature, the final decision belongs to the doctor." The doctor responds: "I acted based on my clinical experience."
Which of these arguments carries more weight? Who bears responsibility?
A problem no algorithm can solve
Medical law as we know it rests on one simple assumption: decisions are made by humans and humans are responsible for them. A doctor has a diploma, a license, liability insurance, a medical board, a code of ethics. If they make a mistake — there's a system that handles it. Imperfect, but existing.
What happens when we add AI to this system?
- Can you insure an algorithm? Classic liability insurance assumes the insured entity is capable of committing "malpractice." But AI doesn't practice medicine. It has no license. It has no ethics. It has parameters, weights, and a loss function. Insurance companies don't know how to price the risk of a tool that is neither a product (like a scalpel), nor a person (like a surgeon), nor an institution (like a hospital).
- Who "owns" an AI's error? The programmer who wrote the model? The company that deployed it? The hospital that purchased it? The doctor who followed the recommendation? In a world without IP — where the model is open source and anyone could have modified it — the chain of responsibility disintegrates into atoms.
- Can a patient refuse an AI diagnosis? Does a parent have the right to say: "I don't want an algorithm deciding about my child, I want a human doctor"? And what if the "human doctor" is statistically wrong more often?
Model experience versus human experience
Let's be fair about this. There are things at which AI is unquestionably better:
- Image analysis — AI sees patterns the human eye can't detect. Micro-changes in tissue texture, subtle asymmetries, correlations among thousands of pixels.
- Consistency — AI doesn't have bad days. It's not tired after a 12-hour shift. It's not distracted by a fight with a spouse. It's not under pressure to "squeeze in" more patients.
- Memory — AI remembers every case it's ever seen. Human doctors remember those with emotional charge — which introduces systematic bias.
- Speed — 1.3 seconds to analyze an X-ray image versus minutes or hours of human assessment.
And there are things at which humans are irreplaceable:
- Patient context — AI sees an image. The doctor sees a child, a family, a history, a mother's fear, a father's determination, the question "What would you do in our place?"
- Real-time adaptation — in the operating room, the situation changes every second. AI doesn't stand at the table. It has no hands. It can't say: "wait, there's a vessel here I didn't see on the scan."
- Moral responsibility — AI doesn't feel the weight of a decision. It doesn't wake up at night thinking about the child it operated on. It doesn't carry the question: "Did I make the right call?" That weight is human — and it's part of why we trust doctors.
- Communication — someone has to sit with Sara's family and say: "there's a chance, but there's also a risk." AI can generate a report. But it can't look a mother in the eye.
The problem of a world without IP
Everything above becomes an order of magnitude more complicated in a world where medical knowledge isn't protected.
Today — diagnostic algorithms are owned by companies. The FDA certifies them. The company is liable for the product. There's a chain: manufacturer → certifier → hospital → doctor. There's insurance, audits, civil liability.
In a world without IP — everyone shares models. A clinic in Bangladesh uses the same algorithm as the Mayo Clinic. A village doctor in Mexico makes diagnoses with a model created by an anonymous team on GitHub. Nobody knows who's responsible for the model. Nobody certified it. Nobody insured it.
Democratization of medical knowledge is a moral imperative. A child in Nairobi deserves the same quality of diagnosis as a child in New York. But democratization without accountability frameworks isn't freedom — it's chaos.
In a world where everyone has access to the best diagnostics but no one is responsible for the results — we don't live in a paradise of free knowledge. We live in a cosmos without gravity: everything floats, nothing holds.
Can you insure an algorithm?
This isn't a rhetorical question. It's one that insurers around the world will be grappling with within the next five years. And the answer is: we don't know.
Classic medical insurance prices risk based on the history of human errors. We know how often surgeons make mistakes. We know which operations are risky. We have actuaries, tables, statistics.
But how do you insure a model that:
- Is updated every month — each version has different parameters?
- Can be modified by the hospital that deployed it?
- Performs differently on different data — better on European populations, worse on African ones (because the training data was unevenly distributed)?
- Can't explain why it made a given decision? (The "black box" problem)
Insurance requires predictability. AI, paradoxically, is simultaneously more precise and less predictable than humans. A model might have 94% accuracy — but you don't know which 6% it'll get wrong. And that 6% might include your child.
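A minimal sketch of what that looks like in numbers. The subgroups and their accuracies are invented for illustration; the point is only how a single headline figure can hide them.

```python
# The same headline accuracy can hide a very uneven distribution of errors.
# All figures below are invented for illustration.

cases = {
    # subgroup: (number of patients, accuracy within that subgroup)
    "well-represented population": (900, 0.97),
    "under-represented population": (100, 0.67),
}

correct = sum(n * acc for n, acc in cases.values())
total = sum(n for n, _ in cases.values())

print(f"headline accuracy: {correct / total:.0%}")  # 94%
for group, (n, acc) in cases.items():
    print(f"  {group}: {acc:.0%} on {n} patients")
```

An actuary can price the 94%. What no table prices is whose children sit in the 33% of failures in the second row.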
Who looks the parent in the eye?
Doctor Kowalska sits before Sara's parents. It's five in the morning. She has two opinions before her, one algorithm recommendation, and one question that no technology will answer: what to do?
Because in medicine — unlike retail, logistics, or marketing — it's not about optimization. It's not about making "statistically best" decisions. It's about making a decision you can live with. You, the parent, and — if all goes well — the child.
AI doesn't live with its decisions. AI doesn't wake up at three in the morning thinking about a shadow on an X-ray. AI doesn't carry the weight of saying: "I did everything I could." And that's precisely why AI shouldn't decide — or at least, shouldn't decide alone.
Maybe the model for the future looks like this:
- AI diagnoses. It analyzes images, clinical data, medical history. It provides probabilities, correlations, recommendations. It does this better than humans, and it should.
- Humans decide. The doctor takes AI's diagnosis, adds their experience, the patient's context, the conversation with the family. They make a decision. They take responsibility.
- Law regulates the interface between the two. Not the model (because it's open source and can't be frozen). Not the doctor (because medical law already does that). But the way the doctor uses AI — standards, procedures, documentation, trust boundaries (see the sketch after this list).
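What could "regulating the interface" mean in practice? One possible shape, sketched below under assumptions of my own: the field names, the versioning, and the rule itself are illustrative, not an existing standard. The system files the model's recommendation and refuses to accept a decision that overrides it unless a named doctor documents why.

```python
# A minimal sketch of an AI-diagnoses / human-decides interface.
# Every field name and the override rule are assumptions for illustration,
# not an existing regulatory standard.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    patient_id: str
    model_version: str
    model_recommendation: str      # e.g. "observe, re-image in 6 weeks"
    model_probabilities: dict      # e.g. {"malignant": 0.234, "artifact": 0.612}
    physician_id: str
    physician_decision: str        # e.g. "operate"
    physician_rationale: str       # free-text justification
    overrides_model: bool
    timestamp: str

def sign_off(record: DecisionRecord) -> DecisionRecord:
    """Refuse to file a decision that overrides the model without a rationale."""
    if record.overrides_model and not record.physician_rationale.strip():
        raise ValueError("Overriding the model requires a documented rationale.")
    return record

record = sign_off(DecisionRecord(
    patient_id="SARA-0427",                     # hypothetical identifier
    model_version="chest-xray-v12",             # hypothetical version tag
    model_recommendation="observe, re-image in 6 weeks",
    model_probabilities={"malignant": 0.234, "artifact": 0.612, "benign": 0.154},
    physician_id="Kowalska",
    physician_decision="operate",
    physician_rationale="Lesion morphology atypical for a vascular artifact.",
    overrides_model=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

Nothing in that sketch tells the doctor what to do. It only ensures that whoever decides is identifiable, and that the act of overriding the model leaves a trace.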
I don't know if I'd let an algorithm decide whether to operate on my child. Probably not. Not because I don't trust the algorithm. But because in the moment when everything falls apart — I want to look into the eyes of someone who says: "I'm here. I'll do everything I can."
An algorithm won't say that. And maybe that's the only difference that truly matters.
Because responsibility isn't a legal matter. It's about who is present when things go wrong. And machines aren't present. Machines just compute.