
Lor-1 v1.0 is GA

Our clinical foundation model is generally available. 27B dense on Qwen3.5, LoRA r8, INT4 GPTQ served with DFlash. Element recall 32.6% on LorBench vnext; SA preference 82.5%. Here is what we shipped.

Lorraine Team · 15 April 2026 · 9 min read

Lor-1 v1.0 (codename prime-bushveld) is generally available. It is the model powering Lorraine Chat, Lorraine Learn, and the Lorraine Platform API. This post is a short tour of what we built and how we evaluated it.

Architecture

Lor-1 is a dense 27-billion-parameter model built on Qwen3.5-27B with a LoRA r8 adaptation trained for 2 epochs at LR 5e-5. Training took 11 h 14 min on an H200 (486 steps, fp16). We quantise to INT4 GPTQ with AutoRound (w4g128) and serve with DFlash speculative decoding using 24 speculative tokens. End-to-end inference latency on an H200 lands where Chat feels like Chat, not like a batch job.
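For readers who want the shape of this in code, here is a minimal sketch using Hugging Face peft/transformers and intel/auto-round. The base-model id, target modules, alpha, and dropout are assumptions; the post pins only r8, 2 epochs, LR 5e-5, fp16, and w4g128. Our actual harness, data, and the DFlash serving layer are internal.

```python
# Sketch of the Lor-1 recipe: LoRA r8 SFT on the 27B base, then INT4
# GPTQ-format quantisation with AutoRound (w4g128 = 4-bit, group size 128).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

BASE = "Qwen/Qwen3.5-27B"  # placeholder id for the Qwen3.5-27B base

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto")

# r8 kept clinical knowledge intact; r32 forgot it (see "What we learned").
lora = LoraConfig(
    r=8,
    lora_alpha=16,                                            # assumption
    lora_dropout=0.05,                                        # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# 2 epochs at LR 5e-5 in fp16 (486 steps on our data); feed these to
# your SFT trainer of choice along with the training set.
args = TrainingArguments(
    output_dir="lor-1-sft",
    num_train_epochs=2,
    learning_rate=5e-5,
    fp16=True,
)

# ... after SFT: merge the adapter, then quantise to GPTQ format.
from auto_round import AutoRound

model = model.merge_and_unload()
ar = AutoRound(model, tokenizer, bits=4, group_size=128)  # w4g128
ar.quantize()
ar.save_quantized("lor-1-int4", format="auto_gptq")
```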

Benchmarks

On LorBench vnext, an internal clinical reasoning benchmark of 203 questions spanning SA drug dosing, differential diagnosis and management, clinical scenarios, referral/triage, and protocol retrieval, Lor-1 scores 32.6% element recall and 82.5% SA preference in tools mode at temperature 0. That is +14.6 percentage points of recall over the untrained 27B base. Drug-dose questions lead at 41.0%; protocol retrieval trails at 26.0%. Full breakdowns are in the model card.
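LorBench's grader is internal, but element recall itself is a simple idea: each question carries a checklist of expected answer elements (a drug name, a dose, a referral step), and the score is the fraction of that checklist the answer covers. A rough sketch, with naive substring matching standing in for the real matcher:

```python
# Hypothetical element-recall scorer. Real graders typically use fuzzy or
# model-based matching; plain substring matching is a simplification here.
def element_recall(answer: str, expected_elements: list[str]) -> float:
    """Fraction of expected elements present in the answer."""
    if not expected_elements:
        return 0.0
    answer_lower = answer.lower()
    hits = sum(1 for el in expected_elements if el.lower() in answer_lower)
    return hits / len(expected_elements)

# Benchmark score: mean per-question recall over all 203 questions.
def benchmark_score(runs: list[tuple[str, list[str]]]) -> float:
    return sum(element_recall(a, els) for a, els in runs) / len(runs)

print(element_recall(
    "Start amoxicillin 500 mg 8-hourly and refer if there is no response",
    ["amoxicillin", "500 mg", "refer"],
))  # -> 1.0
```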

What we learned

A few decisions earned their keep. LoRA r8 beat r32, which catastrophically forgot clinical knowledge. The dense 27B beat the MoE 35B-A3B variant by 1.2pp on recall and 2.9pp on SA preference. Tools mode with unrestricted tool access beat router mode by 7.5pp on the dense base. DPO regressed against the SFT checkpoint in three consecutive runs, so we stopped. And self-correction prompts consistently made the first answer worse; Lor-1 is best out of the gate.

What comes next

v1.1 will focus on the two weakest question types (protocol retrieval, referral/triage), grounding failures on tool output, and a re-run of the training mix on the improved strong-protea dataset — which landed after prime-bushveld was already in the oven. Expect a follow-up post when it ships.
