# Rate Limiting

Understanding rate limits and how to handle them in the Check API.

Rate limits protect the API from abuse and ensure fair usage for all customers. Limits vary by plan and are applied per organization.
## Rate Limits by Plan
| Plan | Requests/Minute | Monthly Verifications |
|---|---|---|
| Free | 20 | 1,000 |
| Pro | 200 | 25,000 |
| Enterprise | 1,000 | Unlimited |
> **Need higher limits?** Enterprise plans offer custom rate limits tailored to your needs. Contact sales to discuss your requirements.
## Response Headers

Every API response includes rate limit information in the headers. For example, on the Pro plan (200 requests/minute):

```
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1705317060
```
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per minute |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp (seconds) when the limit resets |
## Handling Rate Limits

When you exceed the rate limit, the API returns a `429 Too Many Requests` response:

```json
{
  "error": {
    "code": "rate_limited",
    "message": "Rate limit exceeded. Try again in 45 seconds.",
    "retryAfter": 45
  }
}
```

The response also includes a `Retry-After` header with the number of seconds to wait.
### SDK Automatic Retry

Both the TypeScript and Python SDKs handle rate limiting automatically:
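As a sketch, client construction with retry options might look like the following; the `maxRetries` and `retryDelay` option names are assumptions for illustration and may not match the SDK's actual configuration surface (check the SDK reference):

```typescript
import { Check } from '@check/sdk';

// Hypothetical retry options (verify exact names against the SDK reference)
const client = new Check({
  apiKey: 'vfy_...',
  maxRetries: 5,    // assumed: retry up to 5 times on 429 responses
  retryDelay: 1000, // assumed: base delay in ms, doubled on each attempt
});

// With automatic retry enabled, a single call transparently waits out 429s:
const result = await client.verifyAndWait({
  content: 'Your content',
  methods: { reasoning: 1.0 },
});
```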
### Manual Implementation

If using the REST API directly, implement retry logic:
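A minimal sketch of that retry loop, honoring the `Retry-After` header. The `fetchFn` parameter is an illustrative injection point (it defaults to the global `fetch`) so the logic can be exercised without live calls:

```typescript
type FetchLike = (url: string, init?: RequestInit) => Promise<Response>;

// Retry on 429 responses, waiting the number of seconds the API asks for.
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 3,
  fetchFn: FetchLike = fetch
): Promise<Response> {
  let response = await fetchFn(url, init);
  for (let attempt = 0; attempt < maxRetries && response.status === 429; attempt++) {
    // Honor Retry-After; fall back to 1 second if the header is missing
    const retryAfter = parseInt(response.headers.get('Retry-After') || '1', 10);
    await new Promise(r => setTimeout(r, retryAfter * 1000));
    response = await fetchFn(url, init);
  }
  return response;
}
```

Call it exactly like `fetch` and check `response.status` as usual; after `maxRetries` attempts the final 429 response is returned for the caller to handle.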
## Best Practices

### 1. Implement Exponential Backoff

When rate limited, wait progressively longer between retries:
```typescript
async function withExponentialBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      if (error.status !== 429 || attempt === maxRetries - 1) {
        throw error;
      }
      // Exponential backoff with jitter
      const baseDelay = Math.min(1000 * Math.pow(2, attempt), 30000);
      const jitter = Math.random() * 1000;
      const delay = baseDelay + jitter;
      console.log(`Retry ${attempt + 1}/${maxRetries} in ${delay}ms`);
      await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error('Max retries exceeded');
}
```

### 2. Monitor Rate Limit Headers
Track your usage and slow down proactively before hitting limits:
```typescript
function checkRateLimits(response: Response): void {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0', 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit') || '1', 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset') || '0', 10);

  const usagePercent = ((limit - remaining) / limit) * 100;

  if (remaining === 0) {
    const waitTime = reset - Math.floor(Date.now() / 1000);
    console.warn(`Rate limit exhausted. Resets in ${waitTime}s`);
  } else if (usagePercent > 80) {
    console.warn(`High rate limit usage: ${remaining}/${limit} remaining`);
    // Consider slowing down requests
  }
}
```

### 3. Use Batch Processing
For bulk verifications, use the Batch API instead of individual requests:
```typescript
import { Check } from '@check/sdk';

const client = new Check({ apiKey: 'vfy_...' });

// Instead of 100 individual requests...
// Use batch processing
const batch = await client.createBatchAndWait({
  name: 'Bulk verification',
  items: claims.map(content => ({ content })),
  methods: { reasoning: 1.0, tool: 0.5 }
});

console.log(`Processed ${batch.totalItems} items`);
```

### 4. Use Webhooks for Async Results
Instead of polling for results, use webhooks:
```typescript
// Start verification with webhook
const response = await fetch('https://api.check.ai/v1/verify', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    content: 'Your content',
    methods: { reasoning: 1.0 },
    webhookUrl: 'https://your-app.com/webhooks/check'
  }),
});

// Results delivered via webhook - no polling needed!
```

### 5. Cache Results
Cache verification results to avoid redundant API calls:
```typescript
import crypto from 'crypto';

const cache = new Map<string, any>();

function hashContent(content: string): string {
  return crypto.createHash('sha256').update(content).digest('hex');
}

async function verifyCached(
  client: Check,
  content: string,
  methods: Record<string, number>
): Promise<any> {
  const cacheKey = hashContent(JSON.stringify({ content, methods }));

  if (cache.has(cacheKey)) {
    console.log('Cache hit');
    return cache.get(cacheKey);
  }

  const result = await client.verifyAndWait({ content, methods });
  cache.set(cacheKey, result);
  return result;
}
```

### 6. Distribute Requests Over Time
Instead of bursting requests, spread them out:
```typescript
async function verifyWithThrottle(
  client: Check,
  contents: string[],
  requestsPerSecond = 5
): Promise<any[]> {
  const delay = 1000 / requestsPerSecond;
  const results: any[] = [];

  for (const content of contents) {
    const start = Date.now();
    const result = await client.verifyAndWait({
      content,
      methods: { reasoning: 1.0 }
    });
    results.push(result);

    // Wait for remaining time to maintain rate
    const elapsed = Date.now() - start;
    if (elapsed < delay) {
      await new Promise(r => setTimeout(r, delay - elapsed));
    }
  }

  return results;
}
```

## Usage Quotas
In addition to rate limits (requests per minute), each plan has a monthly verification quota:
| Plan | Monthly Verifications | Price |
|---|---|---|
| Free | 1,000 | $0 |
| Pro | 25,000 | $99/month |
| Enterprise | Unlimited | Custom pricing |
Track your usage in the Dashboard. You'll receive:

- Email notification at 80% usage
- Webhook event at 80% usage (`usage.limit.warning`)
- Webhook event at 100% usage (`usage.limit.reached`)
## Troubleshooting

### Consistently Hitting Rate Limits

- **Check your plan** - you may need to upgrade
- **Review request patterns** - are you making unnecessary requests?
- **Implement caching** - avoid re-verifying identical content
- **Use batch processing** - more efficient for bulk operations
- **Add request queuing** - smooth out traffic spikes

### 429 Errors in Production

- **Add monitoring** - track rate limit header values
- **Implement circuit breakers** - fail fast when limits are hit
- **Queue requests** - buffer requests during high traffic
- **Contact support** - for temporary limit increases during launches
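The circuit-breaker suggestion can be sketched as a small helper; the class name, thresholds, and method names here are illustrative, not part of the Check API:

```typescript
// After `threshold` consecutive 429s, stop sending requests ("open" the
// circuit) for `cooldownMs` instead of hammering a limit that has not reset.
class RateLimitBreaker {
  private consecutive429s = 0;
  private openUntil = 0;

  constructor(private threshold = 3, private cooldownMs = 30_000) {}

  // Call before each request; if false, fail fast without hitting the API.
  canRequest(now = Date.now()): boolean {
    return now >= this.openUntil;
  }

  recordSuccess(): void {
    this.consecutive429s = 0;
  }

  record429(now = Date.now()): void {
    this.consecutive429s++;
    if (this.consecutive429s >= this.threshold) {
      this.openUntil = now + this.cooldownMs;
      this.consecutive429s = 0;
    }
  }
}
```

Wrap each API call with `if (!breaker.canRequest()) { /* fail fast or enqueue */ }`, then call `recordSuccess()` or `record429()` based on the response status.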