
Inside Chat citations

How Lorraine Chat sources, ranks, and displays clinical evidence so a clinician can audit any claim inside ten seconds. A short tour of the retrieval stack and the design decisions behind it.

Lorraine Team · 10 April 2026 · 6 min read

Clinicians told us something simple: an uncited answer is worse than no answer. If they cannot trace a claim back to a source in a few seconds, they cannot use it. So we built citations as a first-class part of the Chat experience, not an afterthought.

What we index

Chat retrieves from a hybrid corpus: PubMed abstracts and open-access full text, the major international guideline bodies (NICE, WHO, specialty societies), and the South African guideline set that matters most in daily practice — SAHS, SA Heart, NDoH circulars, SAMJ. We keep the SA corpus in a separate index with a preference weight, so local guidance surfaces first when it is relevant.
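To make that routing concrete, here is a minimal sketch of how a separately indexed SA corpus with a preference weight could be merged into a single result list. The corpus names, the 1.3 boost, and the field layout are illustrative assumptions, not our production configuration.

```python
from dataclasses import dataclass

SA_PREFERENCE_WEIGHT = 1.3  # illustrative boost for the South African index


@dataclass
class Hit:
    title: str
    corpus: str   # e.g. "sa_guidelines" or "global_evidence" (assumed names)
    score: float  # relevance score from the underlying retriever


def merge_with_sa_preference(global_hits: list[Hit],
                             sa_hits: list[Hit],
                             k: int = 20) -> list[Hit]:
    """Merge both indexes, boosting SA sources so local guidance surfaces first."""
    for hit in sa_hits:
        hit.score *= SA_PREFERENCE_WEIGHT
    merged = sorted(global_hits + sa_hits, key=lambda h: h.score, reverse=True)
    return merged[:k]
```

Keeping the SA corpus in its own index also means the boost can be tuned, or switched off for questions where local guidance adds nothing.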

How we rank

Retrieval is hybrid BM25 + dense embeddings, followed by a clinician-tuned reranker that weights SA provenance, recency, and study design. For differential and management questions we favour structured guideline content; for drug dosing we favour the SA formulary and trial-grade evidence.
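For readers who want the mechanics, the sketch below shows one way the pieces could fit together: a blended lexical-plus-dense score, then a rerank pass that rewards SA provenance, recency, and study design. The fusion formula, the feature weights, and the design tiers are illustrative assumptions, not the clinician-tuned values we actually run.

```python
import math
from datetime import date

# Illustrative tiers only; the real reranker is clinician-tuned.
STUDY_DESIGN_TIER = {
    "guideline": 1.0,
    "systematic_review": 0.9,
    "rct": 0.8,
    "observational": 0.5,
    "case_report": 0.2,
}


def hybrid_score(bm25: float, dense: float, alpha: float = 0.5) -> float:
    """Blend a normalised BM25 score with dense-embedding similarity."""
    return alpha * bm25 + (1 - alpha) * dense


def rerank_score(base: float, is_sa_source: bool,
                 pub_year: int, study_design: str) -> float:
    """Re-weight a candidate by SA provenance, recency, and study design."""
    sa_bonus = 0.15 if is_sa_source else 0.0
    recency = math.exp(-(date.today().year - pub_year) / 5.0)  # newer sources decay less
    design = STUDY_DESIGN_TIER.get(study_design, 0.3)
    return base + sa_bonus + 0.2 * recency + 0.2 * design
```

Splitting retrieval and reranking keeps the first stage cheap and broad, while the second stage carries the clinical judgement that is hard to fold into a single score.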

How we display

Citations appear inline in the answer, each backed by a hover card with the source title, year, publication, and the relevant quote. A single click opens the source. If a claim is not grounded, Chat says so — uncertainty is a first-class surface, not a silence.
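The data behind each inline marker is deliberately small and auditable. A hypothetical shape, assuming a payload along these lines (the field names and the grounded flag are ours for illustration, not the actual schema):

```python
from dataclasses import dataclass, field


@dataclass
class Citation:
    marker: str       # inline marker shown in the answer, e.g. "[1]"
    title: str        # source title on the hover card
    year: int
    publication: str  # journal or guideline body
    quote: str        # the passage that supports the claim
    url: str          # opened on a single click


@dataclass
class Claim:
    text: str
    citations: list[Citation] = field(default_factory=list)

    @property
    def grounded(self) -> bool:
        # An unsupported claim is surfaced as uncertain rather than silently dropped.
        return bool(self.citations)
```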

We are still iterating. Next up: structured citation export for CPD submissions, and richer drug-monograph rendering for dose and interaction questions.


Finished reading?
Start practising.

10,000+ CMSA-aligned questions, adaptive study paths, and OSCE simulations — turn what you just read into what you can recall on exam day.