Glossary

Definitions for the current product model: Source layer (provenance) and Mechanism layer (macros).

Includes Amplification, Canonisation, Audit terminology, and HOP (Preview) status language.

Source taxonomy

Hearsay

Provenance: second-hand claim (Source layer, not macro).

A claim attributed to someone else (“X said…”) without direct observation or primary documentation in the cited segment. Hearsay is a **Source layer** tag (provenance): it documents *who speaks*, *when*, and on what evidentiary basis. It is **separate** from the **macros** (mechanisms), which describe *how* a statement turns into “truth on paper.”

Macro taxonomy

Amplification

The claim escalates without proportional evidence.

When wording, qualifiers, or details escalate in intensity across the text (or across documents) without proportional new evidence: tone hardens, scope expands, or certainty increases. This is a **mechanism macro** (not a Source tag). Typical cues include connectors like “therefore,” “clearly,” “without a doubt,” or generalizations that outrun the cited segment.

Biographical rewriting

Complex life reduced to a durable institutional storyline.

A life history is invented, or condensed into a durable storyline that becomes institutional “truth,” often by omitting counterevidence, flattening nuance, or reinterpreting past events to fit a diagnostic arc.

Canonisation

Narrative repetition: a false story hardens into “fact.”

When a story/claim (often false or never revalidated) propagates across documents (“as previously noted…”), hardens, and ends up treated as a fact without revalidation. Unlike amplification, canonisation focuses on **propagation** and **stabilization** of the narrative (prior-record references), not only tone.

Critical omissions

What’s missing changes the meaning.

Missing context or counterevidence a reasonable reader would expect, where absence materially shifts interpretation (procedural facts, alternative explanations, prior corrections, exculpatory elements).

Fabrication / extrapolation

Weak cues → strong claims, without intermediate evidence.

Invented events or unjustified inferences presented as factual. Includes “stretching” from weak cues to strong conclusions (risk, intent, diagnosis) without intermediate evidence.

Internal contradictions

Material contradictions (not minor details).

Incompatibilities within a document or across the set (dates, sequence, observations, risk claims). Not minor typos: contradictions that change meaning, interpretation, or credibility.

Narrative deviation

Warping patient speech (meaning/tone/implication).

The record claims to reflect the patient’s words or intent but shifts meaning, tone, or implication via selective paraphrase, reframing, or insinuation. Key signal: divergence between attributed speech and asserted proposition.

Recycled Psychiatric Antecedents (RAP, from French acronym)

Past labels become present evidence through repetition.

Reusing past labels or narratives as current evidence, without re-verification. Creates narrative lock-in: earlier claims gain authority through repetition rather than fresh corroboration.

Features

Audit

See what is happening (OCR/AI logs + storage/balance).

A technical section inside OuiDire that makes the workflow visible (OCR logs, AI call logs, storage/balance status): progress, steps, errors, and current state. Goal: reduce opacity when an action takes time and make troubleshooting easier.
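
The exact audit schema is internal to OuiDire; as a rough illustration only, an audit entry can be modeled as a typed event with a step, a status, and a timestamp. A minimal TypeScript sketch, where every field name is an assumption rather than the product's actual schema:

```ts
// Hypothetical shape of an audit log entry (illustrative, not OuiDire's real schema).
type AuditStatus = "started" | "in_progress" | "succeeded" | "failed";

interface AuditEvent {
  id: string;                                            // unique event id
  step: "ocr" | "ai_call" | "storage" | "balance_check"; // assumed step names
  status: AuditStatus;
  message?: string;                                      // human-readable detail or error text
  timestamp: string;                                     // ISO 8601
}

// The Audit view can then render a chronological list of such events.
const events: AuditEvent[] = [
  { id: "evt-1", step: "ocr", status: "started", timestamp: "2024-01-01T10:00:00Z" },
  { id: "evt-2", step: "ocr", status: "succeeded", timestamp: "2024-01-01T10:00:12Z" },
];
```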

Citations & traceability

Citation anchor

Stable reference output → source.

A stable reference from an output to its source: doc ID + page + segment ID + offsets (if available). Anchors should survive exports and reformatting.
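
A minimal sketch of what such an anchor can look like in code; the field names below are assumptions for illustration, not the product's actual schema:

```ts
// Illustrative citation anchor: points from an output back to its source.
interface CitationAnchor {
  docId: string;                            // source document identifier
  page: number;                             // 1-based page number
  segmentId: string;                        // segment within the page
  offsets?: { start: number; end: number }; // character offsets inside the segment, when available
}

// Because the anchor references stable IDs rather than rendered positions,
// it can survive exports and reformatting.
const anchor: CitationAnchor = {
  docId: "doc-042",
  page: 7,
  segmentId: "seg-113",
  offsets: { start: 54, end: 212 },
};
```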

Snap-to-sentence

Expand intelligently without losing auditability.

Heuristic that replaces a weak span with a more interpretable excerpt (containing sentence or bounded window) while preserving the original anchor/offsets for audit.
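
A minimal sketch of the heuristic, assuming plain character offsets and deliberately naive sentence boundaries (real segmentation is more careful):

```ts
// Expand a weak span to its containing sentence while keeping the original
// offsets so the citation stays auditable.
interface Span {
  text: string;
  start: number; // offset of `text` inside `segmentText`
  end: number;
}

function snapToSentence(segmentText: string, weak: Span): { display: Span; original: Span } {
  // Walk left to just after the previous sentence terminator (or segment start).
  let start = weak.start;
  while (start > 0 && !".!?".includes(segmentText[start - 1])) start--;
  // Walk right to just after the next sentence terminator (or segment end).
  let end = weak.end;
  while (end < segmentText.length && !".!?".includes(segmentText[end - 1])) end++;
  const display: Span = { text: segmentText.slice(start, end), start, end };
  // The original anchor is preserved so an auditor can see exactly what was cited.
  return { display, original: weak };
}
```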

Span (citable excerpt)

A bounded, inspectable excerpt.

A bounded excerpt of source text used as evidence (page + segment + offsets when available). Spans enable inspection: a reader can verify whether a claim is supported.
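
Because a span carries both the quoted text and its location, a first integrity check (does the excerpt really appear at the stated offsets?) can be automated. A small illustrative sketch; field names are assumptions, not the product's schema:

```ts
// Verify that a citable excerpt really matches its source at the stated offsets.
interface CitableSpan {
  docId: string;
  page: number;
  segmentId: string;
  start: number; // character offsets inside the segment text
  end: number;
  text: string;  // the quoted excerpt
}

function spanMatchesSource(span: CitableSpan, segmentText: string): boolean {
  return segmentText.slice(span.start, span.end) === span.text;
}
```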

Weak span

Too short to justify a conclusion.

A span that is too short or non-informative (e.g., “therefore,” stopwords) to justify a tag/claim. Best practice: snap-to-sentence or expand a bounded window while preserving traceability.
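
One possible weakness heuristic, sketched below; the length threshold and stopword list are assumptions chosen for illustration:

```ts
// Flag spans that are too short or too generic to justify a tag on their own.
const STOPWORDS = new Set(["therefore", "thus", "clearly", "so", "the", "a", "an", "and", "or"]);

function isWeakSpan(text: string, minChars = 25): boolean {
  const trimmed = text.trim();
  if (trimmed.length < minChars) return true; // too short to be informative
  const words = trimmed.toLowerCase().split(/\s+/);
  // A span made up entirely of connectors/stopwords is non-informative, whatever its length.
  return words.every((w) => STOPWORDS.has(w.replace(/[^\p{L}]/gu, "")));
}

// A weak span should then be snapped to its containing sentence (see “Snap-to-sentence”).
```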

Program

Human Oracles Program (HOP)

Share signals, not case files (paused preview).

**Preview (paused / in design).** HOP is an opt-in program designed to improve OuiDire from aggregated signals (confirm / reject) while avoiding raw record collection by default. It is currently paused while the public beta and the Audit section are stabilized.

Infrastructure

Azure Document Intelligence (OCR)

High-fidelity OCR + layout structure.

OCR + layout extraction for scanned PDFs (text + structure). Used when speed and fidelity matter (tables, headers, pagination) and when traceable segmentation requires reliable spans/offsets.

Vercel preview deployments

Review before prod, commit by commit.

Per-commit preview URLs used to review changes before production. Enables fast iteration and safe review of content/UX updates.

Workflow

Human Oracles vs Machine Oracles

Local calibration metrics.

Local metrics comparing machine suggestions to human verification (confirm / reject / neutral). Useful for quality control, bias detection, and improving heuristics without uploading raw case files.
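
A minimal sketch of such a local metric, assuming each machine suggestion ends up with a human verdict of confirm, reject, or neutral (names are illustrative):

```ts
// Aggregate human verdicts on machine suggestions into simple calibration stats.
type Verdict = "confirm" | "reject" | "neutral";

interface Review {
  suggestionId: string;
  verdict: Verdict;
}

function calibrationSummary(reviews: Review[]) {
  const counts = { confirm: 0, reject: 0, neutral: 0 };
  for (const r of reviews) counts[r.verdict]++;
  const decided = counts.confirm + counts.reject; // neutral verdicts are left out
  return {
    ...counts,
    // Share of decided reviews where the human confirmed the machine suggestion.
    agreementRate: decided > 0 ? counts.confirm / decided : null,
  };
}
```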

Two lanes: Machine vs Verified

Hypothesis vs confirmed decision.

Two-track workflow: machine outputs are hypotheses; verified outputs are human-confirmed decisions. Separation prevents truth-by-model and enables calibration metrics.
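
One way to keep the two lanes separate is at the type level, so a machine hypothesis can never be mistaken for a verified finding. A rough sketch; all names are assumptions:

```ts
// Machine lane: hypotheses only, never treated as conclusions.
interface MachineSuggestion {
  id: string;
  macro: string;                                              // e.g. "amplification", "canonisation"
  anchor: { docId: string; page: number; segmentId: string };
  confidence?: number;                                        // optional model score, used only for calibration
}

// Verified lane: a human-confirmed decision that links back to the hypothesis it judged.
interface VerifiedFinding {
  suggestionId: string;                                       // reference into the machine lane
  verdict: "confirm" | "reject";
  reviewer: string;
  reviewedAt: string;                                         // ISO 8601
  note?: string;
}
```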