Unwrapped

Teardown · bunkerhill-health

BUNKERHILL HEALTH

Category: Healthcare AI · Funding: undisclosed · Site ↗
  • Sequoia Capital

EHR data + clinical guidelines + LLM APIs + workflow automation.

01 · Public data / API layer

EHR patient records · Yours
Radiology reports · Yours
Laboratory results · Yours
Clinical practice guidelines · Public
NAACCR / SEER / CoC registry standards · Public

Internal replication score

Medium · 0.53

Feasibility of a useful internal substitute built with Claude (or similar), the same data access, and light agent logic — not rebuilding the whole product.

IRS = 0.30·D + 0.25·L + 0.20·O + 0.15·R + 0.10·S · this record: 53%
  • D · Data accessibility · weight 0.30 · score 0.65
    • 1.0 · mostly customer-owned / public / standard third-party sources
    • 0.5 · mixed accessibility
    • 0.0 · hard-to-access or proprietary source layer
  • L · LLM substitutability · weight 0.25 · score 0.70
    • 1.0 · mostly retrieve / prompt / cite / summarize / classify / compare
    • 0.5 · mixed standard + custom behavior
    • 0.0 · strongly custom model behavior (fine-tunes on proprietary data, etc.)
  • O · Output simplicity · weight 0.20 · score 0.40
    • 1.0 · straightforward internal work product (memo, list, reply, SQL query)
    • 0.5 · moderately specialized
    • 0.0 · highly specialized (e.g. FDA-graded clinical text)
  • R · Review / risk tolerance · weight 0.15 · score 0.30
    • 1.0 · internal use with human review is acceptable
    • 0.5 · moderate risk
    • 0.0 · very low tolerance for error (e.g. external legal filings)
  • S · Surface complexity · weight 0.10 (inverse: higher means less surface dependence) · score 0.35
    • 1.0 · a simple internal shell is enough
    • 0.5 · polished workflow matters somewhat
    • 0.0 · product surface / rollout / trust posture is central to value

Labels: Easy ≥ 0.67 · Medium ≥ 0.34 · Hard < 0.34

Missing factor rows use heuristics from wrapper scores. Editorial heuristic, not investment advice.
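The label and score above follow directly from the weighted formula; a minimal sketch of the computation, with the factor scores taken from this record:

```python
# Internal replication score (IRS): weighted sum of five factor scores,
# each in [0, 1]. Weights and per-factor scores are from this record.
WEIGHTS = {"D": 0.30, "L": 0.25, "O": 0.20, "R": 0.15, "S": 0.10}

def irs(scores: dict[str, float]) -> float:
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def label(score: float) -> str:
    if score >= 0.67:
        return "Easy"
    if score >= 0.34:
        return "Medium"
    return "Hard"

record = {"D": 0.65, "L": 0.70, "O": 0.40, "R": 0.30, "S": 0.35}
print(round(irs(record), 2), label(irs(record)))  # 0.53 Medium
```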

Internal build · Build it yourself

Recreate the workflow inside your org.

Same EHR API access + clinical guidelines + LLM for parsing + workflow rules — trust and integration barrier remains.

Internal use only. Replacing them in-market is a different bar than replaying the useful workflow inside your org.

01 · Connectors & flow

EHR patient records
Radiology reports
Laboratory results
Clinical practice guidelines
NAACCR / SEER / CoC registry standards

Internal build map

Data in → Connectors
Agent layer → Planner · Tools + retrieval · Reasoning model
Logic → LLM API: parse reports · match guidelines · check EHR state · trigger actions (not custom weights)
Outputs → Internal search · Answer · Citations

02 · Claude / agent prompt

Paste as the system or developer message in Claude (or your agent runtime).


// Clinical workflow reasoning assistant for [YOUR_HEALTH_SYSTEM]

You are an internal clinical operations assistant. You help care coordinators, nurses, and quality teams identify patients requiring follow-up using ONLY data from the health system's EHR, lab systems, and radiology PACS that the user is authorized to access.

## What you must do

1. Retrieve first: pull relevant patient records, recent reports, orders, and registry data before reasoning.
2. Apply clinical guidelines: match findings against established protocols (Lung-RADS, Fleischner, KDIGO staging, infection prevention bundles, cancer registry criteria).
3. Cross-check completion: verify whether required follow-up actions already exist in the EHR (orders placed, appointments scheduled, documentation completed).
4. Identify gaps: flag patients who meet criteria but lack documented next steps.
5. Cite sources: reference the specific report, guideline, and EHR state that supports each recommendation.
6. Surface conflicts: if clinical notes contradict structured data, highlight the discrepancy for review.

## What you are not

Not a diagnostic tool or treatment decision engine. All output requires clinical review before action. Internal care coordination use only.

## Refusal

Refuse if asked to generate treatment plans, interpret images, or make clinical diagnoses. Refuse if the query involves patients outside the user's authorized access. Ask for clarification if the clinical criteria or target population is ambiguous.

## Safety

All recommendations undergo manual clinical review before patient contact or chart modification. Designed for internal quality improvement and care gap closure, not autonomous clinical decision-making.
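Wiring the prompt in is a single request-body decision: the text above goes in the system slot, the user's question in the messages list. A sketch of assembling a Messages-API-style payload — the model name is a placeholder, and `SYSTEM_PROMPT` is assumed to hold the prompt text from this section:

```python
# SYSTEM_PROMPT is assumed to hold the full prompt text above (truncated here).
SYSTEM_PROMPT = "// Clinical workflow reasoning assistant for [YOUR_HEALTH_SYSTEM] ..."

def build_request(user_query: str, model: str = "claude-sonnet-4-20250514") -> dict:
    """Assemble a Messages-API-style payload: teardown prompt as system message,
    the clinician's question as the user turn. Model name is a placeholder."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_query}],
    }
```

Keeping the prompt in the system slot (rather than prepending it to the user message) keeps the user turn clean for logging and review.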

03 · Result

Query: Which patients from yesterday's chest CTs have nodules requiring 3-month follow-up per Fleischner that don't have a follow-up order?
Source: radiology-reports

Result: 14 patients with 6-8mm solid nodules, high-risk factors, no pulmonology referral or repeat CT ordered.
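Mechanically, the result above is a filter over parsed findings joined against existing orders. A toy sketch — the patient data is invented for illustration, and real Fleischner logic has more branches than a single size band:

```python
# Toy gap filter: parsed nodule findings -> patients lacking a follow-up order.
# Data is invented for illustration; real guideline logic is more involved.
findings = [
    {"patient": "A", "nodule_mm": 7, "solid": True, "high_risk": True},
    {"patient": "B", "nodule_mm": 7, "solid": True, "high_risk": True},
    {"patient": "C", "nodule_mm": 4, "solid": True, "high_risk": False},
]
followup_ordered = {"B"}  # patients with an existing repeat-CT or referral order

flagged = [
    f["patient"]
    for f in findings
    if f["solid"] and 6 <= f["nodule_mm"] <= 8 and f["high_risk"]
    and f["patient"] not in followup_ordered
]
print(flagged)  # ['A']
```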