Calgentik

VerifiedSignal: Read everything. Trust wisely.

The newspaper-of-record for the synthetic age. An automated epistemological verification layer designed to restore trust in digital discourse. In an era where an estimated $78 billion is lost annually to decisions predicated on bad information, VerifiedSignal provides infrastructure for skeptical reading.

See it in action

Watch the product in motion

Screen recording of VerifiedSignal—document intake, review UI, and intelligence overlays. For the full media library (including downloads), visit Resources.

Open in Resources →

Video is streamed from cloud storage (S3/CloudFront).

The problem

Document-heavy workflows fail when trust is assumed instead of verified

Volume is not the hard part. The hard part is knowing what to believe—especially when synthetic text, persuasive framing, and thin sourcing look credible at a glance.

We move beyond simple AI detection to offer a scalable critical-thinking engine that uncovers hidden persuasion, misinformation, and factual inconsistencies.

The intervention

Technical scoring—not vibes—for skeptical reading at scale

VerifiedSignal pairs extraction and structure with explicit intelligence lenses, evidence links, and review workflows so teams can defend conclusions.

Dimension · Problem · Technical intervention

Signal 1
Problem: AI detection gap. As LLMs reach human parity in style, many readers struggle to recognize machine-generated text.
Technical intervention: Multi-model scoring. Specialized detection models (beyond style heuristics) assess authorship probability with high-precision confidence estimates.

Signal 2
Problem: Logical fallacies. Viral articles often use “invisible machinery” such as false equivalences to steer readers.
Technical intervention: Fallacy rating engine. Identifies and names specific fallacy types (for example ad hominem) with direct links to the offending passages.

Signal 3
Problem: Cost of bad information. Investment and policy decisions grounded in misinformation create fiscal and operational leakage.
Technical intervention: Factuality confidence. A 0–1 score from internal consistency checks, citation signals, and cross-referenced claim validation.
Illustrative framing from the product reference: problems map to concrete system behaviors, not generic “AI summaries.”

Solution overview

Eight dimensions. One auditable scorecard.

Every document is evaluated through system-wide lenses designed for operational use—not a single opaque “helpfulness” label.

Logical fallacy rating

Measures manipulative reasoning. Per-fallacy breakdown (ad hominem, straw man, false dichotomy, slippery slope) mapped to specific text triggers.
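As a minimal sketch of what a per-fallacy breakdown tied to text triggers could look like (the class names, fields, and aggregation rule here are illustrative assumptions, not the product's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class FallacyFinding:
    """One named fallacy tied to the passage that triggered it (illustrative shape)."""
    fallacy_type: str   # e.g. "ad hominem", "straw man", "false dichotomy"
    passage: str        # the offending text span the finding links back to
    confidence: float   # detector confidence in [0, 1]

@dataclass
class FallacyReport:
    findings: list = field(default_factory=list)

    def rating(self) -> float:
        """Aggregate rating: mean confidence across findings, 0.0 for a clean document."""
        if not self.findings:
            return 0.0
        return sum(f.confidence for f in self.findings) / len(self.findings)

report = FallacyReport([
    FallacyFinding("ad hominem", "Only a fool would trust this author.", 0.9),
    FallacyFinding("false dichotomy", "Either we ban it or we surrender.", 0.7),
])
```

The point of the shape is auditability: every score decomposes into named findings that a reviewer can jump to and challenge.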

Factuality confidence

Measures the reliability of claims. Internal consistency, citation signals, and cross-referenced factual claims with rationale.
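One way to combine the three signals into a single 0–1 score is a weighted sum; the weights below are placeholder assumptions for illustration, not the product's calibration:

```python
def factuality_confidence(consistency: float, citation: float, cross_ref: float,
                          weights=(0.4, 0.3, 0.3)) -> float:
    """Blend three 0-1 signals (internal consistency, citation quality,
    cross-referenced claims) into one 0-1 factuality score."""
    signals = (consistency, citation, cross_ref)
    if not all(0.0 <= s <= 1.0 for s in signals):
        raise ValueError("each signal must be in [0, 1]")
    return sum(w * s for w, s in zip(weights, signals))
```

A calibrated system would learn these weights against labeled documents rather than fix them by hand.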

AI generation probability

Measures machine authorship likelihood. Multi-model assessment of linguistic patterns versus known LLM signatures, including a specific model guess.

Pseudoscience indicators

Measures adherence to scientific rigor. Unfalsifiable claims, anecdotal evidence, appeal to nature, absence of peer review.

Fictional content likelihood

Measures intent and genre. Separates reported fact from narrative; satire detection; speculation presented as journalism.

Source provenance

Measures document origin history. Domain reputation, WHOIS history, canonical URL verification, archive.org presence.

Semantic search

Measures deep content relevance. Vector similarity (kNN) and hybrid retrieval across large collections.
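Hybrid retrieval typically merges a lexical ranking (e.g. BM25) with a vector-similarity ranking. A common, simple fusion rule is reciprocal rank fusion (RRF); this sketch assumes the two rankings are already computed upstream:

```python
def rrf_fuse(lexical_ranking, vector_ranking, k=60):
    """Reciprocal rank fusion: merge two rankings of document ids into one.
    Each document scores 1/(k + rank) per list it appears in; higher is better."""
    scores = {}
    for ranking in (lexical_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A document ranked well by both lists rises to the top.
fused = rrf_fuse(["a", "b", "c"], ["b", "c", "a"])
```

RRF needs no score normalization across the two systems, which is why it is a popular default for combining keyword and kNN results.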

Collection analytics

Measures aggregate quality shifts. Trend dashboards for factuality and fallacy frequency across sources over time.

Workflow

From “submit” to “collect”—an agentic loop with humans in control

Uploads and URLs become structured signals, provenance checks, multi-model analysis, and collections you can compare over time.

Step 1

Submit

Upload PDF, Word, or HTML—or provide a URL. The system fetches, extracts text, and cleans content automatically.

Step 2

Investigate

Verify source provenance—publication dates, author identity, and domain history—before analysis begins.

Step 3

Analyze

A coordinated set of models scores the document across all eight intelligence dimensions.

Step 4

Collect

Save documents to collections for side-by-side comparison and trend visualization.

Use cases

Built for teams who publish, decide, or teach from documents

The same verification substrate supports research corpora, newsroom workflows, market analysis, classrooms, and compliance reviews.

Researchers & academics

Build trustworthy evidence bases at scale. Ingest large volumes to identify high-factuality sources and track quality trends across scientific literature.

Journalists & fact-checkers

Vet sources before publication. Surface provenance gaps, misleading framing, and AI-generated content presented as human reporting.

Investors & analysts

Pressure-test market narratives. Score earnings calls and research reports to find overstatements or track a source’s accuracy over time.

Educators & students

Scale critical thinking and media literacy. Compare high- and low-quality sources through objective intelligence lenses.

Legal & compliance teams

Run due diligence on regulatory filings, expert witness materials, and third-party reports through factuality and provenance scoring.

Curious individuals

Defend against digital manipulation. Analyze threads or blogs to understand the mechanics of persuasion before deciding what to believe.

View all audience workflows →

Architecture snapshot

Postgres is canonical; search is derived

The product reference treats PostgreSQL as the system of record and OpenSearch as an expendable analytics and retrieval plane—so reliability and auditability stay anchored.

  1. Intake & acquisition
  2. Extract & enrich
  3. LLM scoring (Bedrock)
  4. Persist → index → SSE
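The ordering in the last step is the reliability guarantee: write to the canonical store first, then derive the search index, then notify the UI. This in-memory sketch stands in for PostgreSQL, OpenSearch, and the SSE channel (the stand-in variables and message shape are illustrative assumptions):

```python
import json
import queue

canonical = {}          # stands in for PostgreSQL, the system of record
search_index = {}       # stands in for OpenSearch: derived, rebuildable from canonical
events = queue.Queue()  # stands in for the SSE stream feeding the review UI

def persist_then_index(doc_id: str, scores: dict) -> None:
    """Order matters: commit to the source of truth before any derived artifact."""
    canonical[doc_id] = scores                              # 1. persist (canonical)
    search_index[doc_id] = dict(scores)                     # 2. index (expendable copy)
    payload = json.dumps({"doc": doc_id, "status": "scored"})
    events.put(f"data: {payload}\n\n")                      # 3. notify via SSE framing

persist_then_index("doc-1", {"factuality": 0.82, "fallacy_rating": 0.1})
```

Because the index is derived, it can be dropped and rebuilt from Postgres at any time without losing auditable state.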

Full pipeline & stack →

Why now

Market pull toward agentic document workflows—and buyer demand for proof

Procurement is shifting from demos to operational criteria: provenance, traceable scores, and review interfaces that stand up to scrutiny.

  • Intelligent document processing remains a large and growing category; banking and finance is a major segment for KYC, AML, and operational document automation.
  • Buyer interest in agentic workflows is showing up strongly in cloud marketplaces and procurement conversations.
  • Teams are consolidating around outcomes: traceable scoring, provenance, and review—not opaque summaries.

Differentiation

Verification infrastructure—not another generic document chatbot

VerifiedSignal emphasizes named fallacies, factuality rationale, provenance history, and collection-level analytics. Outputs are designed to be challenged, corrected, and audited.

See the scorecard on your own documents

Request a walkthrough of the eight dimensions, the review UI, and how SSE-driven progress fits your deployment model.