Models

Clinical AI,
calibrated for SA.

A small catalogue of purpose-built clinical models. Each card describes what the model is for, how it should be used, and the numbers that back up the claim.

Lor-1 (model.lor1)

  • Status: online, region ZA-1
  • Benchmark: LorBench · SA
  • Strengths: dosing · ddx
  • API: OpenAI-compatible
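
Because the API is OpenAI-compatible, existing OpenAI client integrations should only need a different base URL and model name. Below is a minimal sketch using the standard OpenAI Python client; the endpoint URL, key variable, and "lor-1" model identifier are illustrative assumptions, not published connection details.

    # Minimal sketch of calling Lor-1 through an OpenAI-compatible endpoint.
    # The base URL, environment variable, and model name are assumptions for
    # illustration; use the connection details from your deployment pack.
    import os

    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.example-lor.health/v1",  # hypothetical endpoint
        api_key=os.environ["LOR_API_KEY"],             # hypothetical key variable
    )

    response = client.chat.completions.create(
        model="lor-1",  # hypothetical model identifier
        messages=[
            {
                "role": "user",
                "content": "Suggest a weight-based paracetamol dose for a 14 kg child.",
            },
        ],
    )

    # Model output; a qualified clinician remains responsible for the decision.
    print(response.choices[0].message.content)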

On the roadmap

What’s next.

Work in progress, published so customers can plan around it. Nothing ships until it clears the same benchmark bar as Lor-1.

In research

Lor-1 Vision

Multimodal extension of Lor-1 for clinical imaging in SA settings — chest X-ray triage, paediatric growth charts, and diagnostic photograph review with SA-grounded interpretation.

In research

Lor-1 Small

A smaller, edge-friendly variant of Lor-1 for low-bandwidth and on-prem deployments. Same SA calibration, a fraction of the infrastructure footprint.

How we publish

Design commitments.

  • 01

    Clinical scope

    Each card states intended use, user assumptions, and the clinical boundaries where a qualified clinician must remain responsible for the decision.

  • 02

    Measured, not asserted

    Every production model reports its lift on LorBench — an internal 203-question SA clinical benchmark — before and after adaptation, broken down by question type (a sketch of that calculation follows this list).

  • 03

    Honest limitations

    We publish known failure modes, recall ceilings, and category weaknesses alongside the headline numbers. Silence about limitations is not a marketing feature.

  • 04

    Controlled disclosure

    Deployment-specific evidence packs, evaluation reports, and architecture notes are shared through enterprise review under NDA — not by default.
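
As referenced under "Measured, not asserted", here is a minimal sketch of how a per-question-type lift could be tallied from paired before/after evaluation results, expressed as a percentage-point change in accuracy. The record layout is an illustrative assumption, not the internal LorBench schema.

    # Sketch: per-question-type lift as the percentage-point change in accuracy
    # from the base model to the adapted model. The field names below
    # (question_type, base_correct, adapted_correct) are assumptions for
    # illustration only.
    from collections import defaultdict

    results = [
        {"question_type": "dosing", "base_correct": False, "adapted_correct": True},
        {"question_type": "dosing", "base_correct": True, "adapted_correct": True},
        {"question_type": "ddx", "base_correct": False, "adapted_correct": False},
        {"question_type": "ddx", "base_correct": False, "adapted_correct": True},
    ]

    by_type = defaultdict(lambda: {"n": 0, "base": 0, "adapted": 0})
    for r in results:
        bucket = by_type[r["question_type"]]
        bucket["n"] += 1
        bucket["base"] += r["base_correct"]
        bucket["adapted"] += r["adapted_correct"]

    for qtype, b in sorted(by_type.items()):
        base_acc = 100 * b["base"] / b["n"]
        adapted_acc = 100 * b["adapted"] / b["n"]
        print(f"{qtype}: {base_acc:.1f}% -> {adapted_acc:.1f}% "
              f"(lift {adapted_acc - base_acc:+.1f} pp)")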

Public cards intentionally omit training recipes, dataset composition, serving topology, and internal codenames. Enterprise customers get a deeper technical appendix under NDA.

Request enterprise review