Introduction
Learn about Check's LLM verification API and how it can help you detect hallucinations and verify AI-generated content at scale.
Welcome to Check
Check is an AI-powered LLM verification platform that helps you detect hallucinations, verify claims, and ensure the accuracy of AI-generated content at scale. Think of it as "the Stripe for LLM Verification" — a developer-first solution for organizations building on top of large language models.
What is Check?
Check uses a multi-paradigm verification approach combining seven distinct verification methods, each with its own false positive rate (FPR) and strengths:
| Method | FPR | Description |
|---|---|---|
| Formal | 0% | mathjs autoformalization for symbolic verification |
| Tool | ~1% | CLoVE contrastive web verification with source independence |
| Reasoning | ~4-5% | Adaptive self-refinement with conditional self-critique |
| BiPRM | ~5-7% | Parallel bidirectional step-level scoring |
| Entropy | ~8-10% | Dual-path: logprobs fast path + behavioral semantic clustering |
| Ensemble | ~15-20% | Meta-judge adjudicated multi-model consensus |
| Semantic | ~20-30% | Boltzmann energy embedding analysis |
Key Features
Multi-Paradigm Verification
Each piece of content is analyzed through multiple verification methods simultaneously, providing a comprehensive assessment with confidence scores. Results are aggregated using FPR-weighted voting to produce the most accurate verdicts.
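The FPR-weighted voting described above can be sketched in a few lines: each method's vote counts in proportion to how trustworthy it is (lower FPR, higher weight). This is an illustrative aggregation only, with a simple 1 − FPR weighting assumed for clarity; the production scoring may differ.

```python
def aggregate_verdicts(results):
    """Combine per-method verdicts with FPR-weighted voting.

    Each result is a (verdict, fpr) pair. A method with a lower
    false positive rate contributes a proportionally larger weight.
    Returns (accepted, confidence).
    """
    total = sum(1.0 - fpr for _, fpr in results)
    support = sum(1.0 - fpr for verdict, fpr in results if verdict)
    confidence = support / total if total else 0.0
    return confidence >= 0.5, round(confidence, 3)

# Formal (0% FPR) and Tool (~1%) agree; Semantic (~25%) dissents.
print(aggregate_verdicts([(True, 0.0), (True, 0.01), (False, 0.25)]))
# → (True, 0.726)
```

Because Formal and Tool carry nearly full weight, their agreement outvotes the noisier Semantic method.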
Intelligent Decision Making
Based on confidence scores, Check automatically recommends actions:
| Confidence | Decision | Condition |
|---|---|---|
| >= 0.95 | Accept | High confidence, auto-approve |
| >= 0.70 and < 0.95 | Refine | Moderate confidence, needs clarification |
| < 0.70 | Escalate | Low confidence + paradigm disagreement |
| < 0.70 | Reject | Low confidence + paradigm agreement |
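The decision table above translates directly into a small dispatch function. This is a sketch of the policy for illustration, not the platform's internal implementation:

```python
def decide(confidence, paradigms_agree):
    """Map a confidence score to a recommended action, per the table above.

    Below the 0.70 threshold, disagreement between paradigms escalates
    to a human, while agreement (that the content is wrong) rejects.
    """
    if confidence >= 0.95:
        return "accept"
    if confidence >= 0.70:
        return "refine"
    return "reject" if paradigms_agree else "escalate"
```

For example, `decide(0.55, paradigms_agree=False)` returns `"escalate"`, routing the item to human review.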
Automatic Escalation
Low-confidence verifications are automatically flagged for human review, ensuring accuracy on edge cases with priority levels and optional SLA deadlines.
Real-time Webhooks
Get instant notifications when verifications complete or require attention. Supports verification.completed, verification.escalated, and usage limit events.
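A typical receiver verifies the webhook's signature before dispatching on the event type. The header name, HMAC-SHA256 signature scheme, and payload shape below are assumptions for illustration; consult the webhook reference for the actual contract:

```python
import hashlib
import hmac
import json

def handle_webhook(raw_body, signature, secret):
    """Verify an HMAC-SHA256 signature, then dispatch by event type.

    raw_body: raw request bytes; signature: hex digest from the
    (assumed) signature header; secret: your webhook signing secret.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise ValueError("invalid signature")
    event = json.loads(raw_body)
    if event["type"] == "verification.completed":
        return f"verdict: {event['data']['decision']}"
    if event["type"] == "verification.escalated":
        return "queued for human review"
    return "ignored"
```

Always compare signatures with a constant-time check (`hmac.compare_digest`) to avoid timing attacks.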
Batch Processing
Process thousands of claims efficiently with CSV/JSON upload, real-time progress tracking, and automatic retries.
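Preparing a batch from CSV can be as simple as mapping rows to verification payloads. The column names (`id`, `content`) are assumptions for illustration; match them to the batch upload format described in the docs:

```python
import csv
import io

def load_claims(csv_text):
    """Parse a claims CSV into a list of verification payloads.

    Assumes an 'id' and 'content' column (hypothetical column names).
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{"id": row["id"], "content": row["content"]} for row in reader]

sample = "id,content\n1,The Eiffel Tower is in Paris\n2,Water boils at 90 C\n"
print(load_claims(sample))
```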
Team Collaboration
Invite team members, assign roles (Owner, Admin, Member, Viewer), and manage permissions with full RBAC support.
Bring Your Own Key (BYOK)
Use your own LLM provider API keys for cost control and compliance. Check supports 7 providers: OpenAI, Anthropic, Google, Mistral, Azure, Bedrock, and OpenRouter.
SDKs & Integrations
Check provides official SDKs for seamless integration:
| Package | Description |
|---|---|
| @check/sdk | TypeScript/Node.js SDK |
| check | Python SDK |
| @check/langchain | LangChain TypeScript integration |
| check-langchain | LangChain Python integration |
| @check/llamaindex | LlamaIndex integration |
Getting Started
Ready to start verifying content? Follow our Quickstart Guide to make your first API call in minutes.
API Keys
You'll need an API key to authenticate requests. Generate one from your Dashboard.
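API keys are sent as a bearer token. The endpoint URL and payload fields below are assumptions for illustration only; see the API reference for the real routes and schema:

```python
import json
import urllib.request

def build_verify_request(api_key, claim):
    """Build an authenticated POST request for a verification.

    The URL and body shape are hypothetical; the point is the
    Authorization: Bearer header carrying your API key.
    """
    return urllib.request.Request(
        "https://api.check.ai/v1/verifications",  # assumed endpoint
        data=json.dumps({"content": claim}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Keep keys out of source control; load them from an environment variable or secret manager instead.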
Use Cases
- AI Chatbots - Verify responses before sending to users
- Content Generation - Fact-check AI-written articles and summaries
- RAG Applications - Validate retrieved information accuracy
- Legal & Compliance - Ensure AI outputs meet accuracy requirements
- Healthcare - Verify medical information with high confidence
- Financial Services - Fact-check market analysis and reports
Support
- Documentation - You're looking at it!
- API Playground - Test endpoints interactively at /dashboard/playground
- GitHub - Report issues and request features
- Email - Contact support@check.ai for help