The Klarix Method

How we build defensible CI.

Public sources only. Every claim sourced and dated. Human-gated synthesis. 3–7 day delivery. The full method, in the open.

Last reviewed 4 May 2026. We update this page when the method changes, and we mean substantive changes, not cosmetic edits.

Seven steps. No hand-waves.

If you cannot trace a single claim in a Klarix deliverable back to a public URL or named document, that is a bug, and we will fix it.

  1. STEP 1

    Frame the buyer's real question

    Every engagement opens with a 30-minute call. We write the brief in the buyer's language, not ours. The output of that call is a one-page question list. If we don't know exactly what the deliverable answers, we don't start.

  2. STEP 2

    Pull from public surfaces only

    Apollo, the open web, SEC filings, the LinkedIn Ad Library, the Meta Ad Library, Google Ads Transparency, company websites, and named third-party data sources (LeadIQ, Crunchbase, RocketReach where appropriate). No private databases, no leaked data, no scraped member profiles, no logins. We respect every source's terms of service and rate-limit ourselves below the documented quotas.

  3. STEP 3

    Multi-source every claim, date every fact

    Every quantitative claim in a Klarix dossier carries a public source URL or a named document, and a capture date. When we cannot find a public source, we write that explicitly instead of asserting the claim. A deterministic citation auditor runs before delivery and fails the bundle when coverage drops below threshold. This is what makes the work survive enterprise security review.

  4. STEP 4

    Human-gated synthesis

Models draft, humans approve. We pick the right model for each step (research, scoring, synthesis, outreach voice) and a human reviews every final deliverable before it leaves the building. That's the gate, named and visible. We never ship autonomous CI.

  5. STEP 5

    Translate research into account-level action

    The deliverable is not a research dump. Each prospect comes with named decision-makers, the wedge that earns a reply, draft outbound copy, and a battle card sized for mid-call use. The dossier exists to start a sales conversation, so it is structured for one.

  6. STEP 6

    Deliver in 3–7 days

    Most CI is stale by the time it lands. Our delivery window is three to seven business days from the kick-off call to the first dossier, including the human review pass. Monthly refreshes after that. The longer we work together, the more depth compounds.

  7. STEP 7

    Ship a named, dated package

    The bundle includes the dossiers, the battle cards, the SWOT, the outreach drafts, the source list, and a methodology footer with the model and date stamp for every section. Everything we did is auditable. If a buyer asks "where did this number come from?" we have the URL ready.
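The per-section stamp described above can be sketched in a few lines. This is an illustrative format only: the field layout, the model name, and the function name are assumptions, not Klarix's actual footer schema.

```python
from datetime import date

def methodology_footer(section, model, capture_date=None):
    """Render a hypothetical per-section methodology footer line.

    The pipe-delimited layout is illustrative; the real footer format
    may differ. Defaults to today's date when no capture date is given.
    """
    stamp = (capture_date or date.today()).isoformat()
    return f"Section: {section} | Drafted by: {model} | Date: {stamp} | Human-reviewed: yes"
```

A buyer reading the bundle sees one such line under every section, so "where did this come from?" has a one-line answer.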

Four quality gates before anything ships.

These run on every bundle. They are deterministic where they can be, and human-reviewed where they cannot.

Coverage gate

Every dossier section in the spec must be present, marked placeholder_ok, or explicitly absent with a reason. The reviewer (a human, not a model) signs the coverage matrix.
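The check behind this gate is mechanical enough to sketch. The section names and status values below are illustrative placeholders, not the actual Klarix spec; only the three-way rule (present, placeholder_ok, or absent with a reason) comes from the text above.

```python
# Hypothetical spec; real section names will differ.
SPEC_SECTIONS = ("decision_makers", "wedge", "outreach_copy", "battle_card")

def coverage_gaps(matrix):
    """Return the spec sections that fail the coverage gate.

    `matrix` maps section name -> {"status": ..., "reason": ...}.
    A section passes if it is present, marked placeholder_ok, or
    explicitly absent with a stated reason.
    """
    gaps = []
    for section in SPEC_SECTIONS:
        entry = matrix.get(section)
        if entry is None:
            gaps.append(section)  # missing entirely
        elif entry["status"] == "absent" and not entry.get("reason"):
            gaps.append(section)  # absent without a reason
        elif entry["status"] not in ("present", "placeholder_ok", "absent"):
            gaps.append(section)  # unknown status
    return gaps
```

A human still signs the matrix; the script only guarantees there is nothing silently missing for the human to overlook.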

Citation gate

A deterministic auditor walks the markdown and confirms that at least 80% of factual lines carry a citation and at least 60% carry a date. Below those thresholds, the bundle does not ship.
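A minimal sketch of such an auditor, assuming an inline markup convention like `[source: URL]` and `[captured: YYYY-MM-DD]`. The markers, the function name, and the heuristic for "factual line" are all assumptions; the thresholds are the ones stated above.

```python
import re

# Assumed inline markers; the real dossier markup may differ.
CITATION = re.compile(r"\[source:\s*\S+\]")
DATE = re.compile(r"\[captured:\s*\d{4}-\d{2}-\d{2}\]")

def audit(lines, min_cited=0.80, min_dated=0.60):
    """Return (passes, cited_ratio, dated_ratio) for the given lines.

    Treats every non-empty, non-heading line as factual, which is a
    simplification: a real auditor would classify lines more carefully.
    """
    factual = [l for l in lines if l.strip() and not l.lstrip().startswith("#")]
    if not factual:
        return True, 1.0, 1.0
    cited = sum(1 for l in factual if CITATION.search(l)) / len(factual)
    dated = sum(1 for l in factual if DATE.search(l)) / len(factual)
    return cited >= min_cited and dated >= min_dated, cited, dated
```

Because the check is deterministic, the same bundle always gets the same verdict, which is what lets it gate a release rather than merely advise one.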

Voice gate

No forbidden filler ("world-class", "best-in-class", "cutting-edge", "seamless"). No em-dashes in prose. Active voice. Founder cadence. The model is told these rules; the human enforces them.
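The mechanical half of this gate is straightforward to automate; the founder-cadence half stays with the human. A sketch, using the four forbidden phrases named above (the function name and return shape are illustrative):

```python
# The filler list comes from the voice rules; extend as needed.
FORBIDDEN = ("world-class", "best-in-class", "cutting-edge", "seamless")
EM_DASH = "\u2014"

def voice_violations(text):
    """Return a list of (rule, offending_string) pairs found in the prose.

    Catches forbidden filler and em-dashes; active voice and cadence
    are judgment calls left to the human reviewer.
    """
    hits = []
    lowered = text.lower()
    for phrase in FORBIDDEN:
        if phrase in lowered:
            hits.append(("forbidden-filler", phrase))
    if EM_DASH in text:
        hits.append(("em-dash", EM_DASH))
    return hits
```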

Refresh gate

Monthly clients get a delta report showing what changed since the last bundle: new ads in market, new hires, new funding, new public statements. Stale intel is a defect, not a feature.
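A delta report of this kind reduces to a set difference per signal category. A sketch under the assumption that each monthly bundle can be snapshotted as sets of facts keyed by category; the category names and the report shape are illustrative.

```python
def delta_report(previous, current):
    """Compare two snapshots ({category: set_of_facts}) and return changes.

    Categories such as 'ads', 'hires', 'funding' are examples, not a
    fixed schema. Unchanged categories are omitted from the report.
    """
    report = {}
    for category in current:
        added = current[category] - previous.get(category, set())
        removed = previous.get(category, set()) - current[category]
        if added or removed:
            report[category] = {"new": sorted(added), "gone": sorted(removed)}
    return report
```

Keying the report on what appeared and what vanished is what turns "here is this month's bundle" into "here is what changed since last month."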

What we never do.

Defensibility starts with a clear no. These are the lines we draw, named so a buyer can hold us to them.

No member-data scraping

We do not scrape LinkedIn member profiles, we do not collect personal data beyond what advertisers themselves disclose, and we never bypass a login wall.

No fabricated citations

When a claim cannot be sourced publicly, we write the placeholder instead of inventing a source. Hallucinated citations are a fireable offense in this shop.

No autonomous CI

Every deliverable passes a human review gate before send. Models are tools, not authors of record.

No "37 AI tools" cosplay

We use the right model for each step and disclose which model wrote what. We do not chain six tools to make a slide deck look smart.

Sources we use, named.

Buyers ask. We answer in plain text, not legalese.

  • Public ad libraries. LinkedIn Ad Library, Meta Ad Library, Google Ads Transparency Center. We capture advertiser-disclosed data only and respect each platform's rate posture.
  • SEC filings and earnings calls. 10-K, 10-Q, 8-K, S-1, proxy statements, transcripts.
  • Company-owned surfaces. Websites, blogs, press releases, careers pages, customer pages, investor pages.
  • Named third-party data. Apollo, LeadIQ, Crunchbase, RocketReach, Pitchbook, BuildZoom, and similar databases that we license cleanly.
  • Open web search. Tavily and equivalent search APIs for current news, industry reports, and conference signals.
  • Government and association filings. SEAOI, NYSERDA, city and county records, association awards lists, FCC filings, regulatory dockets, where relevant to the buyer's market.

If you have a question about a specific source we used, ask. We will show you the URL and the capture date.

Want to see the method in a real deliverable?

Browse the public samples or request one built for your market.