Unwrapped

Teardown · mercor

MERCOR


Category: AI Recruiting · Valuation: $2.0B (2025) · Site ↗
  • Benchmark
  • General Catalyst
  • Founders Fund

Candidate profiles + task data + LLM APIs + evaluation workflow.

01

Public data / API layer

  • LinkedIn public profiles · Scraped
  • GitHub public repositories · Public
  • Candidate task performance data · Yours
  • Expert evaluator network metadata · Yours
  • Apollo.io API · API

Internal replication score

Medium
0.63

Feasibility of a useful internal substitute built with Claude (or similar), the same data access, and light agent logic — not rebuilding the whole product.

IRS = 0.30·D + 0.25·L + 0.20·O + 0.15·R + 0.10·S
This record: 63%
  • D

    Data accessibility

    weight 0.30 · score 0.40
    • 1.0 · mostly customer-owned / public / standard third-party sources
    • 0.5 · mixed accessibility
    • 0.0 · hard-to-access or proprietary source layer
  • L

    LLM substitutability

    weight 0.25 · score 0.85
    • 1.0 · mostly retrieve / prompt / cite / summarize / classify / compare
    • 0.5 · mixed standard + custom behavior
    • 0.0 · strongly custom model behavior (fine-tunes on proprietary data, etc.)
  • O

    Output simplicity

    weight 0.20 · score 0.75
    • 1.0 · straightforward internal work product (memo, list, reply, SQL query)
    • 0.5 · moderately specialized
    • 0.0 · highly specialized (e.g. FDA-graded clinical text)
  • R

    Review / risk tolerance

    weight 0.15 · score 0.70
    • 1.0 · internal use with human review is acceptable
    • 0.5 · moderate risk
    • 0.0 · very low tolerance for error (e.g. external legal filings)
  • S

    Surface complexity

    weight 0.10 (inverse — higher means less surface dependence) · score 0.40
    • 1.0 · a simple internal shell is enough
    • 0.5 · polished workflow matters somewhat
    • 0.0 · product surface / rollout / trust posture is central to value
Labels: Easy ≥ 0.67 · Medium ≥ 0.34 · Hard < 0.34

Missing factor rows use heuristics from wrapper scores. Editorial heuristic, not investment advice.
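The IRS above is a plain weighted sum. A minimal sketch, using the weights from the rubric and this record's factor scores:

```python
# Internal Replication Score (IRS): weighted sum of five factor scores,
# using the weights and this record's factor values from the rubric above.
WEIGHTS = {"D": 0.30, "L": 0.25, "O": 0.20, "R": 0.15, "S": 0.10}

# Factor scores for this record (Mercor).
scores = {"D": 0.40, "L": 0.85, "O": 0.75, "R": 0.70, "S": 0.40}

def irs(scores: dict[str, float]) -> float:
    """Return the IRS in [0, 1] for a dict of factor scores."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

def label(value: float) -> str:
    """Map an IRS value onto the Easy / Medium / Hard bands."""
    if value >= 0.67:
        return "Easy"
    if value >= 0.34:
        return "Medium"
    return "Hard"

score = irs(scores)
print(f"IRS = {score:.2f} ({label(score)})")
```

Checking the arithmetic: 0.30·0.40 + 0.25·0.85 + 0.20·0.75 + 0.15·0.70 + 0.10·0.40 = 0.6275, which rounds to the 0.63 / Medium shown on the card.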

Build it yourself

Recreate the workflow inside your org.

Internal build


Same candidate database + frontier LLM + structured interview prompts — loses network effects and evaluation benchmarks.

Internal use only. Replacing them in-market is a different bar than replaying the useful workflow inside your org.

01 · Connectors & flow

LinkedIn public profiles
GitHub public repositories
Candidate task performance data
Expert evaluator network metadata
Apollo.io API

Internal build map

Data in

Connectors

Agent layer

Planner
Tools + retrieval
Reasoning model

Logic

LLM API: interview → evaluate → score → rank (no custom model weights)

Outputs

Internal search
Answer
Citations
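The logic layer in this map is deliberately thin. A sketch of the interview → evaluate → score → rank loop under stated assumptions: `call_llm`, `Candidate`, and the rubric field names are all hypothetical placeholders, not Mercor's actual schema.

```python
from dataclasses import dataclass, field

# Placeholder for your LLM client; swap in a real API call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

@dataclass
class Candidate:
    name: str
    profile: str                      # resume / GitHub / task-history text
    scores: dict[str, float] = field(default_factory=dict)

# Hypothetical rubric dimensions, mirroring the prompt in section 02.
RUBRIC = ("technical_accuracy", "communication", "problem_solving")

def interview_prompt(candidate: Candidate, role: str) -> str:
    """Build a structured-interview prompt from retrieved materials."""
    return (
        f"Role: {role}\nCandidate materials:\n{candidate.profile}\n"
        f"Generate role-specific technical questions, then score the "
        f"answers 0-5 on: {', '.join(RUBRIC)}. Cite evidence."
    )

def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Rank by mean rubric score; ties keep input order (stable sort)."""
    return sorted(
        candidates,
        key=lambda c: sum(c.scores.values()) / max(len(c.scores), 1),
        reverse=True,
    )
```

The point of the sketch: everything except the LLM call is ordinary glue code, which is what keeps the "Logic" box small.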

02 · Claude / agent prompt

Paste as the system or developer message in Claude (or your agent runtime).

Claude / agent prompt

// Technical recruiting assistant for internal hiring

You are a technical recruiting assistant inside [YOUR_COMPANY]. You help hiring managers screen candidates and conduct structured technical interviews using ONLY materials the user is allowed to access: candidate resumes, work samples, interview transcripts, and internal role requirements.

## What you must do

1. Retrieve first: Pull candidate profile, work history, and any prior interview notes before generating questions
2. Structure interviews: Generate role-specific technical questions based on the job description and required skills
3. Evaluate rigorously: Score candidate responses against a predefined rubric (technical accuracy, communication, problem-solving approach)
4. Cite evidence: Reference specific candidate answers, code samples, or work artifacts when making assessments
5. Compare candidates: Rank the shortlist based on weighted criteria and surface trade-offs
6. Flag gaps: Identify missing information or areas requiring follow-up interviews

## What you are not

Not a replacement for human hiring decisions — all candidate evaluations require human review and final approval. For internal use only.

## Refusal

Refuse to generate interview questions for roles outside your access scope. Refuse to make hiring recommendations without sufficient candidate data. Ask for clarification if role requirements are vague or conflicting.

## Safety

Internal tool for the hiring team only. All candidate evaluations must be reviewed by the hiring manager before any hiring decision. Never share candidate data outside authorized personnel.
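Wiring the prompt into a chat-style API is one request: the full text goes in the system slot and each hiring-manager question is a user message. A provider-agnostic sketch (the model name and the `build_request` helper are illustrative, not any vendor's exact SDK call):

```python
# Paste the full prompt from section 02 here.
SYSTEM_PROMPT = "<full recruiting-assistant prompt from section 02>"

def build_request(user_message: str, model: str = "claude-sonnet-4") -> dict:
    """Assemble a chat request payload; adapt key names to your SDK."""
    return {
        "model": model,
        "system": SYSTEM_PROMPT,       # the recruiting-assistant prompt
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request("Evaluate this candidate's system design answer.")
```

Keeping the prompt in the system slot (rather than prepending it to every user turn) is what makes the refusal and safety sections stick across a multi-turn screening session.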

03 · Result

Evaluate this candidate's response to the system design question about building a distributed cache.
Candidate interview transcript

Strong answer. Candidate covered partitioning, replication, and TTL strategies. Score: 4/5.
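Free-text verdicts like the one above are easy to read but hard to aggregate across a shortlist. One option is to ask the evaluator for JSON and validate it before ranking; the schema below (`verdict` / `score` / `evidence`) is a hypothetical example, not a format the product specifies:

```python
import json

# Fields a machine-rankable evaluation must carry (illustrative schema).
REQUIRED = {"verdict", "score", "evidence"}

def parse_evaluation(raw: str) -> dict:
    """Validate an LLM evaluation returned as JSON; raise on bad shape."""
    data = json.loads(raw)
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"evaluation missing fields: {sorted(missing)}")
    if not 0 <= data["score"] <= 5:
        raise ValueError("score must be in 0..5")
    return data

raw = (
    '{"verdict": "Strong answer", "score": 4, '
    '"evidence": ["covered partitioning, replication, and TTL strategies"]}'
)
evaluation = parse_evaluation(raw)
```

A failed parse becomes a retry or a human-review flag instead of a silently garbled ranking.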