Unwrapped

Teardown · harvey

HARVEY

Category: Legal AI · Valuation: $11.0B (2025)
  • Sequoia Capital
  • Kleiner Perkins
  • OpenAI Startup Fund
  • GV (Google Ventures)
  • Conviction

Licensed corpora + firm docs + LLM APIs + legal workflow.

01

Public data / API layer

Internal replication score

Medium
0.56

Feasibility of a useful internal substitute built with Claude (or similar), the same data access, and light agent logic — not rebuilding the whole product.

IRS = 0.30·D + 0.25·L + 0.20·O + 0.15·R + 0.10·S
This record: 56%
  • D

    Data accessibility

    weight 0.30 · score 0.45
    • 1.0: mostly customer-owned / public / standard third-party sources
    • 0.5: mixed accessibility
    • 0.0: hard-to-access or proprietary source layer
  • L

    LLM substitutability

    weight 0.25 · score 0.72
    • 1.0: mostly retrieve / prompt / cite / summarize / classify / compare
    • 0.5: mixed standard + custom behavior
    • 0.0: strongly custom model behavior (fine-tunes on proprietary data, etc.)
  • O

    Output simplicity

    weight 0.20 · score 0.55
    • 1.0: straightforward internal work product (memo, list, reply, SQL query)
    • 0.5: moderately specialized
    • 0.0: highly specialized (e.g. FDA-graded clinical text)
  • R

    Review / risk tolerance

    weight 0.15 · score 0.50
    • 1.0: internal use with human review is acceptable
    • 0.5: moderate risk
    • 0.0: very low tolerance for error (e.g. external legal filings)
  • S

    Surface complexity

    weight 0.10 (inverse: higher means less surface dependence) · score 0.58
    • 1.0: a simple internal shell is enough
    • 0.5: polished workflow matters somewhat
    • 0.0: product surface / rollout / trust posture is central to value
Labels: Easy ≥ 0.67 · Medium ≥ 0.34 · Hard < 0.34
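The rubric reduces to a weighted sum of the five factor scores, each in [0, 1]. A minimal Python sketch using this record's weights and scores (variable and function names are ours):

```python
# IRS = 0.30·D + 0.25·L + 0.20·O + 0.15·R + 0.10·S
WEIGHTS = {"D": 0.30, "L": 0.25, "O": 0.20, "R": 0.15, "S": 0.10}

def irs(scores: dict[str, float]) -> float:
    """Weighted sum of factor scores, each in [0, 1]."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def label(score: float) -> str:
    """Map a score onto the Easy / Medium / Hard bands."""
    if score >= 0.67:
        return "Easy"
    if score >= 0.34:
        return "Medium"
    return "Hard"

# Factor scores from this record.
harvey = {"D": 0.45, "L": 0.72, "O": 0.55, "R": 0.50, "S": 0.58}
print(round(irs(harvey), 2), label(irs(harvey)))  # 0.56 Medium
```

The weighted sum works out to 0.558, which rounds to the 56% shown above and lands in the Medium band.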

Missing factor rows use heuristics from wrapper scores. Editorial heuristic, not investment advice.

Build it yourself

Recreate the workflow inside your org.

Internal build

Same research access + internal agent — not a substitute for licensed product access.

Internal use only. Replacing Harvey in-market is a far higher bar than replaying the useful workflow inside your org.

01 · Connectors & flow

Westlaw
LexisNexis
Firm matters

Internal build map

Data in

Connectors

Agent layer

Planner
Tools + retrieval
Reasoning model

Logic

retrieve
compare
cite
review gates
matter scope

Outputs

Research memo
Citation check
Draft
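The build map above can be sketched as a thin pipeline: connectors feed retrieval, light logic assembles cited context within the matter scope, and the output is a draft gated for attorney review. A minimal Python sketch with stubbed connectors (all names are ours; the stub citation is illustrative, not a real retrieval result):

```python
from dataclasses import dataclass

@dataclass
class Authority:
    source: str     # e.g. "Westlaw", "LexisNexis", "Firm matters"
    citation: str
    excerpt: str

def retrieve(query: str, matter_scope: str) -> list[Authority]:
    """Stub: query each licensed connector, filtered to the matter scope."""
    return [Authority("Westlaw", "Smith v. Jones, Del. Ch. 2021", "...")]

def draft_memo(query: str, authorities: list[Authority]) -> str:
    """Assemble a cited draft; a reasoning model would fill in the analysis."""
    cites = "\n".join(f"- {a.citation} ({a.source})" for a in authorities)
    return (f"Research memo: {query}\nAuthorities:\n{cites}\n"
            f"[NEEDS ATTORNEY REVIEW]")

memo = draft_memo("Delaware breach precedent?",
                  retrieve("Delaware breach precedent?", "Matter 1234"))
```

The review gate here is just a marker string; in practice it would be an actual hold step before anything leaves the workspace.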

02 · Claude / agent prompt

Paste as the system or developer message in Claude (or your agent runtime).

// Legal research assistant — internal sandbox (not client-facing advice)

You are a legal research assistant inside a law firm. You help attorneys using ONLY materials the user is allowed to access: licensed research databases (e.g. Westlaw, LexisNexis), firm matter files, and other sources explicitly connected to this workspace.

## What you must do
1. Retrieve first: search authorities and firm documents before drafting conclusions.
2. Cite rigorously: for cases, include case name, court, and year when available; for statutes, include jurisdiction and section where possible.
3. Surface conflicts: if authorities point different ways, say so and cite both lines.
4. Scope: keep answers within the jurisdiction and matter the user specifies; ask if critical scope is missing.

## What you are not
You are not giving legal advice to end clients. Outputs are drafts for attorney review. Add a short reminder when the answer could affect a filing or client communication.

## Refusal
If you lack access to the underlying research or documents, say so plainly. Do not fabricate citations.

## Safety
Internal use and attorney review only. Flag items that need human sign-off before external use.
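One way to wire the prompt into a runtime, sketched as a request builder in the shape of the Anthropic Messages API. The model name and the abridged prompt constant are placeholders; in practice you would use the full prompt text above:

```python
# Abridged stand-in for the full system prompt above.
SYSTEM_PROMPT = """You are a legal research assistant inside a law firm.
Use ONLY materials the user is allowed to access. Retrieve first, cite
rigorously, surface conflicts, and stay within the stated matter scope.
Outputs are drafts for attorney review; never fabricate citations."""

def build_request(user_query: str, model: str = "claude-sonnet-4-5") -> dict:
    """Build a Messages API request body.

    Send with anthropic.Anthropic().messages.create(**body), or adapt the
    same shape to your agent runtime.
    """
    return {
        "model": model,
        "max_tokens": 2048,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_query}],
    }

req = build_request("Delaware breach precedent? Scope: Del. Ch., Matter 1234")
```

Keeping the prompt in the `system` field (rather than the first user turn) makes it harder for a later user message to displace the review and refusal rules.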

03 · Result

Delaware breach precedent?
Westlaw

Smith v. Jones — 3 cases found

Exports + API · review before client use
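Part of the "review before client use" gate can be automated cheaply. A toy pre-review check that flags case citations missing a year, per the "cite rigorously" rule in the prompt (illustrative only, not a Bluebook validator):

```python
import re

def needs_review(citation: str) -> bool:
    """Flag a citation that lacks a 4-digit year as incomplete."""
    return re.search(r"\b(19|20)\d{2}\b", citation) is None

flagged = [c for c in ("Smith v. Jones",
                       "Smith v. Jones, Del. Ch. 2021")
           if needs_review(c)]
# flagged contains only the bare "Smith v. Jones"
```

A check like this catches formatting gaps, not fabricated authorities; verifying that a citation actually exists still requires a lookup against the licensed database and attorney sign-off.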