Quick Start
Getting Started with AnyCrawl
Introduction
Born for LLMs: a multi-threaded, high-performance crawler and scraper that works out of the box. Its developer-friendly, OpenAPI-documented interface delivers clean, structured data optimized for LLMs.
AnyCrawl offers the following features:
- High performance: multi-threaded crawling and scraping.
- Fully open source: the code is available on GitHub.
- LLM-friendly: clean, structured data ready for LLMs.
- OpenAPI: a developer-friendly, OpenAPI-described interface.
API Conventions
Parameter Naming
AnyCrawl API follows consistent naming conventions:
- Request Parameters: use snake_case format. Example: webhook_url, event_types, cron_expression, max_retries
- Response Fields: use snake_case format. Example: task_id, webhook_id, next_execution_at, task_type, cron_expression, is_active
All API endpoints consistently use snake_case for both request parameters and response fields, making it easy to work with the API across different programming languages.
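As a concrete illustration, here is a minimal TypeScript sketch of a request using snake_case fields. The base URL, the endpoint path, and the event_types value are assumptions made for illustration only; the field names themselves (webhook_url, event_types, max_retries) come from the examples above.

// Minimal sketch of sending snake_case request parameters.
// NOTE: the base URL and the /v1/webhooks path are placeholders;
// only the snake_case naming convention is the point here.
const API_KEY = process.env.ANYCRAWL_API_KEY;

async function createWebhook() {
    const response = await fetch("https://api.anycrawl.dev/v1/webhooks", {
        method: "POST",
        headers: {
            Authorization: `Bearer ${API_KEY}`,
            "Content-Type": "application/json",
        },
        body: JSON.stringify({
            // Request parameters use snake_case, as documented above.
            webhook_url: "https://example.com/hooks/anycrawl",
            event_types: ["task.completed"], // hypothetical event name
            max_retries: 3,
        }),
    });

    const body = await response.json();
    // Response fields are also snake_case, e.g. body.data.webhook_id
    console.log(body.data?.webhook_id);
}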
Response Format
All API responses follow a standard structure:
{
"success": true,
"data": { ... },
"message": "Optional message"
}
- success: Boolean indicating whether the request was successful
- data: The response payload (object or array)
- message: Optional human-readable message
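The envelope above can be modeled with a small TypeScript type. This is a sketch for illustration, not an official SDK type; the generic parameter T stands in for the endpoint-specific payload.

// Sketch of the standard response envelope described above.
interface ApiResponse<T> {
    success: boolean;   // whether the request succeeded
    data: T;            // response payload (object or array)
    message?: string;   // optional human-readable message
}

// Example: narrowing on `success` before using the payload.
function unwrap<T>(res: ApiResponse<T>): T {
    if (!res.success) {
        throw new Error(res.message ?? "Request failed");
    }
    return res.data;
}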
Error Responses
Error responses include detailed information:
{
"success": false,
"error": "Error type",
"message": "Human-readable error message",
"details": { ... }
}
Common HTTP status codes:
- 200: Success
- 400: Bad Request (validation errors)
- 401: Unauthorized (invalid API key)
- 402: Payment Required (insufficient credits)
- 404: Not Found
- 429: Too Many Requests (rate limit exceeded)
- 500: Internal Server Error
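A client typically checks both the HTTP status and the success flag in the body. The sketch below assumes the error envelope and status codes listed above; the base URL, the retry policy, and the helper name callApi are illustrative assumptions.

// Sketch of handling the error envelope and common status codes.
async function callApi(path: string, init?: RequestInit) {
    const res = await fetch(`https://api.anycrawl.dev${path}`, init);
    const body = await res.json();

    if (res.ok && body.success) {
        return body.data;
    }

    switch (res.status) {
        case 400:
            // Validation error: `details` carries field-level information.
            throw new Error(`Bad request: ${body.message}`);
        case 401:
            throw new Error("Invalid API key");
        case 402:
            throw new Error("Insufficient credits");
        case 429:
            // Rate limited: a real client would back off and retry here.
            throw new Error("Too many requests");
        default:
            throw new Error(body.message ?? `HTTP ${res.status}`);
    }
}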