The Promise, The Risk, and What Patients Need to Know
By Professor Bu’Hussain Hayee, Clinical Director in Gastroenterology and Co-Founder at Ampersand Health.
Generative AI is now part of everyday life.
ChatGPT, Gemini, Claude, Copilot. Tools that draft emails, summarise articles, and answer almost any question in seconds.
Increasingly, people living with Crohn’s disease and Ulcerative Colitis are using them to ask about symptoms, blood tests, stool results, medication side-effects, and flare management.
That is understandable.
Inflammatory bowel disease does not pause between clinic appointments. Symptoms worsen at night. Anxiety builds while waiting for results. Questions arise long before the next review. AI is immediate, accessible, and always available.
But as a gastroenterologist who treats IBD every day, I am cautious when patients tell me they are relying on generic AI chatbots for medical decisions.
Not because AI is inherently unsafe.
But because unsupervised, general-purpose AI in a complex chronic disease carries real limitations.
IBD Is Not a Simple Information Problem
Crohn’s and Colitis are not static illnesses.
They are immune-mediated diseases with fluctuating activity, individualised treatments, and potential complications that can evolve quickly.
Small changes can matter.
A rising calprotectin may indicate early inflammation, or it may reflect something less significant.
Abdominal pain may represent IBS overlap, or something more serious.
Diarrhoea may signal a flare, infection, or medication side-effect.
Distinguishing between these possibilities requires context:
- Previous flare patterns
- Current medications and immune suppression
- Blood markers and trends
- Imaging history
- Individual treatment response
- Local hospital care pathways
Generic large language models do not have access to this clinical context.
They generate answers based on patterns in text, not on your medical record, not on your multidisciplinary discussion, and not within a framework of clinical accountability.
The Confidence Problem
Large language models are probabilistic systems. They predict the most likely sequence of words.
Often, that produces helpful and accurate explanations.
Sometimes, it produces answers that are plausible but incorrect. This is known as hallucination.
In inflammatory bowel disease, a confident but inaccurate answer can lead to:
- Delay in escalation
- Inappropriate steroid use
- Stopping biologics without supervision
- Missing serious complications
The issue is not that AI is always wrong.
The issue is that it can be wrong while sounding convincing.
For patients, that distinction is difficult to detect.
What About Privacy?
When you speak to your gastroenterology team, your data sits within regulated healthcare systems.
When you paste blood results into a generic chatbot, that information may be stored, reviewed to improve the system, or used in future model training. The governance frameworks are different from those that protect clinical records.
Terms of service are not equivalent to medical confidentiality.
Health data is not just information. It is protected clinical material, and patients deserve clarity about where it goes and how it is used.
Oversight Matters in Chronic Disease
One of the most important elements of safe IBD care is visibility.
- If your symptoms worsen, your team needs to know as soon as possible.
- If inflammatory markers rise, someone should review them and get back to you.
- If steroid dependence develops, it should be recognised and escalation planned.
Generic AI operates in isolation.
- It does not alert your IBD nurse.
- It does not flag deterioration.
- It does not trigger urgent review.
In a condition where early intervention can prevent hospitalisation, lack of oversight is a significant limitation.
This Is Not an Argument Against AI
AI has enormous potential in healthcare. It is already transforming imaging, endoscopy, and risk prediction.
However, medical AI should meet medical standards:
- Healthcare-grade data governance
- Clinically validated information
- Clearly defined boundaries
- Integration with established care pathways
- Clear accountability
Without those guardrails, we risk creating a faster version of the search engine: persuasive, but not necessarily safe.
What Should Patients Look For?
If you are living with Crohn’s or Colitis and considering using an AI assistant, it is reasonable to ask:
- Where is my data stored?
- Is it governed to healthcare standards?
- Has the information been reviewed by IBD specialists?
- Are the limits of the system clearly defined?
- Can my clinical team remain appropriately involved?
If those answers are unclear, caution is sensible.
Building AI the Right Way for IBD
At Ampersand Health, we believe digital tools should support safe clinical care, not replace it. That principle guided the development of ElenaAI.
The intention was not to create a general chatbot, but a system designed specifically for people living with IBD, with:
- Healthcare-grade data protection
- Clinician-reviewed, IBD-specific information
- Clear escalation boundaries
- Alignment with real-world care pathways
- The ability, where appropriate, to integrate with clinical teams
If generative AI is going to play a role in chronic disease, and it almost certainly will, it must be safe, specific, and accountable.
For people living with inflammatory bowel disease, those standards are not optional.
—
A Final Thought
AI can educate, clarify, and reassure. But it cannot examine you, assume responsibility for your care, or ultimately replace your hard-working IBD team. If symptoms worsen, results change, or something does not feel right, contact your clinical team. Technology should strengthen care, not quietly replace it.