OuiDire: an evidence workflow (sources, mechanisms, citations, encrypted vault)

What is OuiDire.app about?

A workflow to turn psychiatric/legal PDFs into auditable claim cards with exportable citations.

Pipeline:

Sources + mechanism macros (8) + tags (~30)

Source layer (provenance):

  1. Hearsay / Oui-dire

Mechanism macros = verdict layer (export-grade outcomes):

  1. Narrative deviation (patient’s words rewritten)
  2. Fabrication / extrapolation
  3. Biographical rewrite
  4. Recycled psychiatric antecedents (RAP)
  5. Internal contradictions
  6. Critical omissions
  7. Amplification
  8. Canonisation (false story hardens through repetition)

Tags = instrumentation layer (mechanism-level cues), used for search/filtering, concise rationales, heuristics, and ML features. Examples of tag families: attribution verbs (“reported by”), time anchoring issues, hedging vs certainty inflation, contradiction markers, recycled-history signals, omission patterns.

Macros = “what kind of failure.” Tags = “how it manifests.”
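
To make the split concrete, here is a minimal sketch of the two layers as a data model. Everything below except the eight macro labels (the type names, the example tag strings, the field names) is an illustrative assumption, not OuiDire's actual schema.

```python
# Sketch of the two-layer labelling: macros name the kind of failure,
# tags name the cues that manifest it. Hypothetical types; only the
# eight macro labels come from the list above.
from dataclasses import dataclass, field
from enum import Enum

class Macro(Enum):
    NARRATIVE_DEVIATION = "narrative deviation"
    FABRICATION = "fabrication / extrapolation"
    BIOGRAPHICAL_REWRITE = "biographical rewrite"
    RECYCLED_ANTECEDENTS = "recycled psychiatric antecedents (RAP)"
    INTERNAL_CONTRADICTIONS = "internal contradictions"
    CRITICAL_OMISSIONS = "critical omissions"
    AMPLIFICATION = "amplification"
    CANONISATION = "canonisation"

# A handful of example tags, one per family named above.
EXAMPLE_TAGS = {
    "attribution:reported-by",
    "time:anchor-missing",
    "certainty:hedge-dropped",
    "contradiction:marker",
    "history:recycled-signal",
    "omission:pattern",
}

@dataclass
class ClaimAnnotation:
    macro: Macro                                  # verdict layer
    tags: set[str] = field(default_factory=set)   # instrumentation layer
```

A claim flagged as amplification because a hedge was silently dropped would then carry `Macro.AMPLIFICATION` plus the tag `certainty:hedge-dropped` as its concise rationale.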

Auditability (the core contract)

Each claim card has:

Exports include:

Goal: for any claim card, “where did this claim come from?” is answerable immediately.
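
The two lists above are still open, so the following is only a hedged guess at what the contract could look like in data terms: a claim card that carries its provenance with it and can emit a self-contained citation record on export. All field names are hypothetical.

```python
# Hypothetical claim card: everything needed to answer "where did this
# claim come from?" travels with the claim and survives export.
from dataclasses import dataclass, asdict
import json

@dataclass
class ClaimCard:
    claim_text: str    # the claim as asserted in the dossier
    source_doc: str    # which PDF / report it came from
    page: int          # page in that document
    quote: str         # verbatim span the claim is traced to
    macro: str         # verdict-layer label (one of the eight macros)
    tags: list[str]    # instrumentation-layer cues backing the verdict

    def to_citation(self) -> str:
        """Export one claim as a self-contained, auditable JSON record."""
        return json.dumps(asdict(self), ensure_ascii=False, indent=2)
```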

Privacy here is leverage, not virtue

In this context, “privacy-first” isn’t a preference; it’s operational control. When civil rights can be suspended, the practical risks are losing access, losing copies, losing narrative control. On-device state and an encrypted vault are continuity tools: keep a usable record, keep exports reproducible, reduce third-party exposure. The point is simple: don’t let your file become (or remain) someone else’s uninspectable story.

Cloud-first, with clear boundaries

We start cloud-first for:

Boundaries:

OCR / extraction (Azure Document Intelligence)
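
A minimal sketch of this step, assuming the azure-ai-formrecognizer Python SDK (the earlier client for the same Document Intelligence service; the newer azure-ai-documentintelligence package has a similar shape) and the prebuilt-read model. Endpoint and key are placeholders, and none of this is a confirmed project setting.

```python
# Sketch: pull page/line text out of a PDF via the Document Intelligence service.
# Assumes the azure-ai-formrecognizer SDK; endpoint and key are placeholders.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

def extract_lines(pdf_path: str, endpoint: str, key: str) -> list[dict]:
    client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
    with open(pdf_path, "rb") as f:
        poller = client.begin_analyze_document("prebuilt-read", document=f)
    result = poller.result()
    lines = []
    for page in result.pages:
        for line in page.lines:
            # Keep the page number so a claim card can cite "document, page N".
            lines.append({"page": page.page_number, "text": line.content})
    return lines
```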

Storage (Azure Vault)

The target is an optional vault that stores only encrypted blobs.
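
Whichever Azure service ends up backing the vault, the contract above is that it only ever receives ciphertext. A minimal sketch of that idea, assuming client-side AES-GCM via the Python cryptography package and a key that stays on the device; key management and rotation are deliberately out of scope here.

```python
# Sketch: encrypt a claim-card bundle client-side before it ever reaches the vault.
# AESGCM from the 'cryptography' package; key handling here is deliberately naive.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_bundle(bundle: dict, key: bytes) -> bytes:
    """Return nonce + ciphertext; the vault only ever sees this opaque blob."""
    nonce = os.urandom(12)                      # 96-bit nonce, unique per blob
    plaintext = json.dumps(bundle).encode("utf-8")
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext

def open_bundle(blob: bytes, key: bytes) -> dict:
    nonce, ciphertext = blob[:12], blob[12:]
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    return json.loads(plaintext)

# key = AESGCM.generate_key(bit_length=256)  # stays on device; never uploaded
```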

Local-only fallback (later, degraded)

A strict local-only path can exist later as a fallback:

It’s a trade-off, not the primary path.
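
Purely to illustrate why the local path would be degraded: a fallback could trade the layout-aware cloud extraction for plain OCR, for example pdf2image plus pytesseract (both assumptions; neither is named in the plan). The output shape can stay the same; the accuracy and structure will not.

```python
# Sketch of a degraded local-only extraction path: rasterize the PDF, then OCR it.
# Assumes pdf2image (with a poppler install) and pytesseract; not part of the stated plan.
from pdf2image import convert_from_path
import pytesseract

def extract_lines_local(pdf_path: str) -> list[dict]:
    lines = []
    for page_number, image in enumerate(convert_from_path(pdf_path), start=1):
        text = pytesseract.image_to_string(image)
        for line in text.splitlines():
            if line.strip():
                # Same {page, text} shape as the cloud extractor, so claim cards
                # keep citing "document, page N" whichever path produced them.
                lines.append({"page": page_number, "text": line.strip()})
    return lines
```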

Human vs machine (clean learning signal)

Two parallel layers:

Track per claim:

This yields calibration/training signal without centralizing raw dossiers by default.
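
The two lists above are still open, but the core idea (keep the machine's suggestion and the human verdict side by side, per claim) can be sketched as follows; every field name and the agreement metric are illustrative assumptions.

```python
# Sketch: per-claim record keeping the machine layer and the human layer separate,
# so agreement/calibration can be measured later. Hypothetical fields throughout.
from dataclasses import dataclass

@dataclass
class ClaimReview:
    claim_id: str
    machine_macro: str | None       # model's suggested verdict-layer label
    machine_confidence: float       # model's stated confidence in [0, 1]
    human_macro: str | None = None  # reviewer's verdict, once given

def agreement_rate(reviews: list[ClaimReview]) -> float:
    """Fraction of human-reviewed claims where the machine suggestion matched."""
    judged = [r for r in reviews if r.human_macro is not None]
    if not judged:
        return 0.0
    return sum(r.machine_macro == r.human_macro for r in judged) / len(judged)
```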

v0 → v1 → v2

v0:

v1:

v2:

Future themes (not necessarily in this order)