Audit-first AI for claims document workflows.

I build document intelligence that turns claim PDFs into structured outputs—with provenance, quality gates, and a fast review console for exceptions.

  • Cut review time by routing only risky cases to humans
  • Make outputs defensible with evidence links to source pages (sketched below)
  • Ship pragmatically — prototype → measurable runs → production
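
To make "evidence-linked" and "routing only risky cases" concrete, here is a minimal sketch; the field names, schema, and the 0.85 threshold are illustrative assumptions, not the production format:

```python
from dataclasses import dataclass


@dataclass
class ExtractedField:
    """One structured output field, linked back to its evidence in the source pack."""
    name: str          # e.g. "claim_amount"
    value: str         # normalized value, e.g. "12450.00"
    source_file: str   # document the value was read from
    source_page: int   # page number backing the evidence link
    confidence: float  # model confidence in [0.0, 1.0]


def needs_human_review(field: ExtractedField, threshold: float = 0.85) -> bool:
    """Quality gate: only low-confidence extractions are routed to a reviewer."""
    return field.confidence < threshold


amount = ExtractedField("claim_amount", "12450.00", "claim_pack.pdf", 3, 0.91)
print(needs_human_review(amount))  # False -> auto-accepted, evidence link kept for audit
```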

Prefer async? Email me

Zurich-based · Claims QA + Document intelligence · Evidence & provenance · Live demos
Document Review UI: Run (Pipeline health & coverage) · Benchmark (Error drivers & accuracy) · Review (Human-in-the-loop evidence)

  • 10+ years in insurance tech
  • 3 live production systems
  • 100% evidence-linked outputs

Featured Work

Systems I ship when reliability matters more than demos.

AI Insurance

ClaimEval

QA copilot for claims review

Outcome: Error drivers + benchmark drilldown

AI Architecture

Agentic Context Builder

Reliable LLM context pipelines

Outcome: Evidence-first context packs

AI DevOps

Prompt Management Playbook

Versioned, testable prompts

Outcome: Prompt versioning + governance pack (illustrated in the sketch below)
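
A minimal, hypothetical sketch of what "versioned, testable prompts" means in practice; the class, field names, and test are illustrative, not the playbook's actual format:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptVersion:
    """A prompt treated like code: identified, versioned, and regression-tested."""
    prompt_id: str   # stable identifier, e.g. "claims/extract_fields"
    version: str     # semantic version, bumped on every wording change
    template: str    # prompt text with named placeholders
    changelog: str   # why this version exists (governance / audit trail)


EXTRACT_FIELDS_V2 = PromptVersion(
    prompt_id="claims/extract_fields",
    version="2.1.0",
    template="Extract {fields} from the claim document below:\n{document}",
    changelog="Added explicit field list to reduce missed values.",
)


def test_template_has_required_placeholders():
    """Cheap regression test run before any prompt version is promoted."""
    for placeholder in ("{fields}", "{document}"):
        assert placeholder in EXTRACT_FIELDS_V2.template


test_template_has_required_placeholders()
print(EXTRACT_FIELDS_V2.prompt_id, EXTRACT_FIELDS_V2.version)
```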

See it running

Try the live demos directly (no signup required)

Claims QA App
Decisioning Console

How it works (in one view)

Most AI demos stop at "it extracted something." I optimize for what matters in claims ops: traceable fields, quality gates, and a review console that makes exception handling fast; the sketch after the steps below shows the shape of that flow.

ContextBuilder outcome-oriented workflow diagram
  1. Ingest messy document packs (PDFs, images, emails)
  2. Extract structured fields with source citations
  3. Validate against rules and flag exceptions
  4. Review edge cases in a human console
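
A compressed sketch of these four steps, under simplifying assumptions: the function names are hypothetical, the extraction step is stubbed, and a real pipeline adds OCR, schema checks, and audit logging.

```python
from dataclasses import dataclass, field


@dataclass
class Extraction:
    fields: dict                                   # field name -> (value, source page)
    exceptions: list = field(default_factory=list) # names of failed business rules


def ingest(paths):
    """Step 1: normalize a messy document pack into per-page text (stubbed)."""
    return {p: ["...page text..."] for p in paths}


def extract(pages):
    """Step 2: pull structured fields, keeping the page each value came from."""
    return Extraction(fields={"policy_number": ("PN-123456", 1),
                              "claim_amount": ("12450.00", 3)})


def validate(extraction, rules):
    """Step 3: apply business rules; every failed rule becomes a named exception."""
    for rule_name, check in rules.items():
        if not check(extraction.fields):
            extraction.exceptions.append(rule_name)
    return extraction


def route(extraction):
    """Step 4: clean cases pass through; exceptions go to the human review console."""
    return "auto_accept" if not extraction.exceptions else "human_review"


rules = {
    "amount_present": lambda f: "claim_amount" in f,
    "amount_positive": lambda f: float(f.get("claim_amount", ("0",))[0]) > 0,
}
print(route(validate(extract(ingest(["claim_pack.pdf"])), rules)))  # auto_accept
```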