Usage Examples
Practical examples showing request payloads and response formats for common Domain Crawling scenarios. Use the endpoint:
POST https://scrape.evomi.com/api/v1/scraper/crawl
x-api-key: YOUR_API_KEY

Domain Crawling scrapes discovered URLs. Each scraped page uses credits according to Scraper API pricing.
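Requests are plain JSON over HTTPS. A minimal Python sketch using only the standard library (YOUR_API_KEY is a placeholder; the commented line performs the actual network call):

```python
import json
import urllib.request

ENDPOINT = "https://scrape.evomi.com/api/v1/scraper/crawl"
API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real key

def build_crawl_request(domain: str, **options) -> urllib.request.Request:
    """Assemble the POST request; keyword options map to fields like max_urls."""
    payload = {"domain": domain, **options}
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_crawl_request("example.com", max_urls=10)
# with urllib.request.urlopen(req) as resp:   # submits the crawl
#     result = json.load(resp)
```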
Basic Crawling
Example 1: Crawl Homepage and Links
Discover and scrape pages from a domain starting at the homepage.
Request:
{
  "domain": "example.com",
  "max_urls": 10
}

Response:
{
  "success": true,
  "domain": "example.com",
  "discovered_count": 10,
  "scraper_tasks_submitted": 10,
  "results": [
    {
      "url": "https://example.com/",
      "source": "crawl",
      "scrape_task_id": "abc-123",
      "scrape_check_url": "/api/v1/scraper/tasks/abc-123"
    },
    {
      "url": "https://example.com/about",
      "source": "crawl",
      "scrape_task_id": "def-456",
      "scrape_check_url": "/api/v1/scraper/tasks/def-456"
    }
  ],
  "credits_used": 20.0,
  "credits_remaining": 980.0
}

Example 2: Deeper Crawl
Crawl deeper to find more pages.
Request:
{
  "domain": "example.com",
  "depth": 2,
  "max_urls": 50
}

Response:
{
  "success": true,
  "domain": "example.com",
  "discovered_count": 50,
  "scraper_tasks_submitted": 50,
  "results": [
    {
      "url": "https://example.com/",
      "source": "crawl"
    },
    {
      "url": "https://example.com/page1",
      "source": "crawl"
    }
  ],
  "credits_used": 100.0,
  "credits_remaining": 900.0
}

URL Pattern Filtering
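This sketch assumes url_pattern is an unanchored regular expression matched against each discovered URL, consistent with the examples below; it lets you preview a pattern locally before spending credits on a crawl:

```python
import re

def preview_filter(urls, url_pattern):
    """Keep only URLs that match the pattern (regex search, unanchored)."""
    rx = re.compile(url_pattern)
    return [u for u in urls if rx.search(u)]

urls = [
    "https://example.com/",
    "https://example.com/about",
    "https://example.com/blog/post-1",
]
matched = preview_filter(urls, r"/blog/.*")
# matched == ["https://example.com/blog/post-1"]
```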
Example 3: Scrape Only Blog Posts
Only scrape URLs matching a pattern.
Request:
{
  "domain": "example.com",
  "url_pattern": "/blog/.*",
  "max_urls": 20
}

Response:
{
  "success": true,
  "domain": "example.com",
  "discovered_count": 18,
  "scraper_tasks_submitted": 18,
  "results": [
    {
      "url": "https://example.com/blog/post-1",
      "source": "crawl",
      "scrape_task_id": "blog-001",
      "scrape_check_url": "/api/v1/scraper/tasks/blog-001"
    }
  ],
  "credits_used": 36.0,
  "credits_remaining": 964.0
}

Example 4: Product Pages Only
Target specific URL patterns like product pages.
Request:
{
  "domain": "shop.example.com",
  "url_pattern": "/products?/[a-z0-9-]+",
  "max_urls": 100
}

Response:
{
  "success": true,
  "domain": "shop.example.com",
  "discovered_count": 87,
  "scraper_tasks_submitted": 87,
  "results": [
    {
      "url": "https://shop.example.com/products/laptop",
      "source": "crawl",
      "scrape_task_id": "prod-001",
      "scrape_check_url": "/api/v1/scraper/tasks/prod-001"
    }
  ],
  "credits_used": 174.0,
  "credits_remaining": 826.0
}

Custom Scraper Settings
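Everything under scraper_config is forwarded to each per-page scrape task. A small helper for composing the request body (a sketch: extra keyword arguments simply become scraper_config fields, using the field names from the examples below):

```python
import json

def crawl_payload(domain, max_urls=10, **scraper_config):
    """Compose a crawl request body; extra kwargs become scraper_config."""
    payload = {"domain": domain, "max_urls": max_urls}
    if scraper_config:
        payload["scraper_config"] = scraper_config
    return payload

body = crawl_payload("example.com", max_urls=10,
                     mode="request", js_rendering=True, wait_for=2000)
print(json.dumps(body, indent=2))
```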
Example 5: JavaScript Rendering
Enable JavaScript rendering for dynamic sites.
Request:
{
  "domain": "example.com",
  "max_urls": 10,
  "scraper_config": {
    "mode": "request",
    "js_rendering": true,
    "wait_for": 2000
  }
}

Example 6: Get Markdown Output
Request content in Markdown format.
Request:
{
  "domain": "blog.example.com",
  "max_urls": 20,
  "scraper_config": {
    "content": "markdown"
  }
}

Response:
{
  "success": true,
  "domain": "blog.example.com",
  "discovered_count": 20,
  "scraper_tasks_submitted": 20,
  "results": [
    {
      "url": "https://blog.example.com/post-1",
      "source": "crawl",
      "scrape_task_id": "md-001",
      "scrape_check_url": "/api/v1/scraper/tasks/md-001"
    }
  ],
  "credits_used": 60.0,
  "credits_remaining": 940.0
}

Example 7: AI Enhancement
Use AI to extract structured data automatically.
Request:
{
  "domain": "news.example.com",
  "max_urls": 10,
  "scraper_config": {
    "ai_enhance": true,
    "ai_source": "markdown",
    "ai_prompt": "Extract: headline, author, publish_date, summary. Return as JSON."
  }
}

Async Mode
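An async crawl returns immediately with a task id; you then poll its check_url until it stops reporting "processing". A minimal polling sketch (terminal statuses beyond the completed response shown below are assumptions):

```python
import json
import time
import urllib.request

BASE = "https://scrape.evomi.com"
API_KEY = "YOUR_API_KEY"  # placeholder

def is_terminal(status_payload: dict) -> bool:
    """A task is finished once it stops reporting 'processing'."""
    return status_payload.get("status") != "processing"

def poll_crawl(check_url: str, interval: float = 5.0,
               max_attempts: int = 60) -> dict:
    """Poll a crawl task's check_url until it finishes (network calls)."""
    for _ in range(max_attempts):
        req = urllib.request.Request(BASE + check_url,
                                     headers={"x-api-key": API_KEY})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        if is_terminal(data):
            return data  # results are included in the completed payload
        time.sleep(interval)
    raise TimeoutError("crawl did not finish in time")
```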
Example 8: Large Crawl
Use async mode for large crawls.
Request:
{
  "domain": "largecorp.com",
  "max_urls": 500,
  "async": true
}

Response (202 Accepted):
{
  "success": true,
  "task_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "processing",
  "message": "Crawl task submitted for background processing",
  "check_url": "/api/v1/scraper/crawl/tasks/550e8400-e29b-41d4-a716-446655440000"
}

Check Status:
GET https://scrape.evomi.com/api/v1/scraper/crawl/tasks/550e8400-e29b-41d4-a716-446655440000

Status Response (Completed):
{
  "success": true,
  "domain": "largecorp.com",
  "discovered_count": 473,
  "scraper_tasks_submitted": 473,
  "results": [
    {
      "url": "https://largecorp.com/",
      "source": "crawl",
      "scrape_task_id": "task-001",
      "scrape_check_url": "/api/v1/scraper/tasks/task-001"
    }
  ],
  "credits_used": 946.0,
  "credits_remaining": 54.0
}

Real-World Scenarios
Example 9: Complete Site Scrape
Crawl and scrape an entire website.
Request:
{
  "domain": "mycompany.com",
  "depth": 2,
  "max_urls": 200,
  "async": true
}

Example 10: Competitor Product Research
Discover and scrape competitor product pages.
Request:
{
  "domain": "competitor.com",
  "url_pattern": "/(products?|shop|store)/",
  "max_urls": 500,
  "async": true,
  "scraper_config": {
    "extract_scheme": [
      {
        "label": "product",
        "selector": ".product-info",
        "type": "nest",
        "fields": [
          {
            "label": "name",
            "selector": "h1",
            "type": "content"
          },
          {
            "label": "price",
            "selector": ".price",
            "type": "content"
          }
        ]
      }
    ]
  }
}

Example 11: Blog Content Aggregation
Aggregate content from multiple blog posts.
Request:
{
  "domain": "blog.example.com",
  "url_pattern": "/posts/[a-z0-9-]+",
  "max_urls": 100,
  "scraper_config": {
    "content": "markdown"
  }
}

Error Responses
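Failures come back as JSON bodies alongside an HTTP status code. A hedged sketch of branching on the statuses documented below (urllib raises HTTPError for non-2xx responses, and the error object is file-like, so its body can be read directly):

```python
import json
import urllib.error

def explain_error(err: urllib.error.HTTPError) -> str:
    """Turn a documented error response into a short description."""
    body = json.load(err)  # the error body is JSON, as shown below
    if err.code == 402:
        return f"Out of credits: {body.get('message')}"
    if err.code == 400:
        return f"Bad request: {body.get('message')}"
    return f"Unexpected {err.code}: {body.get('error')}"
```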
Insufficient Credits
Response (402 Payment Required):
{
  "success": false,
  "error": "Insufficient credits",
  "message": "Need at least 2 credits to start crawl"
}

Invalid Domain
Response (400 Bad Request):
{
  "success": false,
  "error": "Invalid request parameters",
  "message": "domain is required and must be a valid domain name"
}

Best Practices
1. Start Small
Test with small max_urls first:
{
  "domain": "example.com",
  "max_urls": 10
}

2. Use Patterns
Filter with url_pattern to target specific content:
{
  "domain": "blog.example.com",
  "url_pattern": "/blog/[0-9]{4}/",
  "max_urls": 50
}

3. Async for Large Jobs
Use async mode for 100+ URLs:
{
  "domain": "largecorp.com",
  "max_urls": 500,
  "async": true
}

4. Set Appropriate Depth
Start with depth 1, increase if needed:
{
  "domain": "example.com",
  "depth": 2,
  "max_urls": 100
}

5. Customize Scraper Config
Get exactly the output you need:
{
  "domain": "example.com",
  "max_urls": 20,
  "scraper_config": {
    "content": "markdown"
  }
}
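Each results entry points at a per-page scrape task; the crawl response itself does not contain page content. To retrieve it, fetch each scrape_check_url with the same API key. A hedged sketch (the sample response is shaped like Example 1 above):

```python
import json
import urllib.request

BASE = "https://scrape.evomi.com"
API_KEY = "YOUR_API_KEY"  # placeholder

def page_status_urls(crawl_response):
    """Absolute status URLs for every scrape task the crawl submitted."""
    return [BASE + r["scrape_check_url"]
            for r in crawl_response.get("results", [])
            if "scrape_check_url" in r]

def fetch_page_result(status_url):
    """Fetch one scrape task's status and content (network call)."""
    req = urllib.request.Request(status_url, headers={"x-api-key": API_KEY})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

sample = {"results": [{"url": "https://example.com/",
                       "scrape_check_url": "/api/v1/scraper/tasks/abc-123"}]}
urls = page_status_urls(sample)
```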