# Error Handling
The Scraper API uses standard HTTP status codes and provides detailed error messages to help you quickly identify and resolve issues.
## HTTP Status Codes
| Status | Meaning | Action Required |
|---|---|---|
| 200 OK | Success | Process the response |
| 202 Accepted | Async task submitted or timeout | Poll task status |
| 400 Bad Request | Invalid parameters | Fix request parameters |
| 401 Unauthorized | Authentication failed | Verify API key |
| 402 Payment Required | Insufficient credits | Add credits to account |
| 408 Request Timeout | Request exceeded timeout | Use async mode or retry |
| 422 Unprocessable Entity | Validation error | Check parameter values |
| 429 Too Many Requests | Rate limit exceeded | Wait and retry with backoff |
| 500 Internal Server Error | Server error | Retry with exponential backoff |
| 503 Service Unavailable | Temporary unavailability | Retry after delay |
## Error Response Format

All errors follow a consistent JSON structure:

```json
{
  "success": false,
  "error": "Error type",
  "message": "Detailed error description",
  "timestamp": "2025-10-29T12:00:00Z"
}
```

## Common Errors
### Validation Errors (422)

Cause: Invalid parameter values or missing required fields.

```json
{
  "success": false,
  "error": "Invalid request parameters",
  "message": "url is required and must be a valid URL; mode must be one of ['request', 'browser', 'auto']",
  "timestamp": "2025-10-29T12:00:00Z",
  "details": {
    "validation_errors": [
      "url: value is not a valid URL",
      "mode: must be one of ['request', 'browser', 'auto']"
    ]
  }
}
```

Common validation errors:
| Error | Cause | Solution |
|---|---|---|
| `url is required` | Missing URL parameter | Include `url` in request |
| `invalid URL format` | URL missing protocol | Add `https://` or `http://` |
| `invalid mode value` | Unknown mode | Use `request`, `browser`, or `auto` |
| `invalid proxy_country` | Invalid country code | Use 2-letter ISO code (e.g., US, GB) |
| `invalid proxy_session_id` | Wrong length | Use 6-8 alphanumeric characters |
| `screenshot requires browser` | Screenshot with request mode | Set `mode=browser` |
| `ai_source required` | AI enabled without source | Add `ai_source: markdown` or `screenshot` |
### Authentication Errors (401)

Cause: Missing or invalid API key.

```json
{
  "success": false,
  "error": "Invalid API key",
  "message": "The provided API key is invalid or missing",
  "timestamp": "2025-10-29T12:00:00Z"
}
```

Solutions:
- Verify your API key in the dashboard
- Check that you're passing the key correctly (header, query param, or body); see the example below
- Ensure you're using `x-api-key` (not `x-apikey` or other variants)
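For reference, a minimal sketch of the two key-passing styles shown in these docs (header and query parameter); the body-based variant is omitted here because its field name isn't documented in this section:

```python
import requests

ENDPOINT = "https://scrape.evomi.com/api/v1/scraper/realtime"

# Option 1: header auth (note the exact header name: x-api-key)
response = requests.get(
    ENDPOINT,
    params={"url": "https://example.com"},
    headers={"x-api-key": "YOUR_API_KEY"},
    timeout=60,
)

# Option 2: query parameter, as used in other examples in these docs
response = requests.get(
    ENDPOINT,
    params={"url": "https://example.com", "api_key": "YOUR_API_KEY"},
    timeout=60,
)
```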
### Insufficient Credits (402)

Cause: Not enough credits to complete the request.

```json
{
  "success": false,
  "error": "Insufficient credits",
  "message": "Not enough credits. Required: 5.0, Available: 2.5",
  "credits_remaining": 2.5,
  "timestamp": "2025-10-29T12:00:00Z"
}
```

Solutions:
- Add credits to your account through the dashboard
- Use a cheaper proxy type (`datacenter` instead of `residential`)
- Use a simpler mode (`request` instead of `browser`); see the fallback sketch below
- Disable expensive features (AI enhancement, screenshots)
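If a 402 shows up mid-run, one option is to retry once with cheaper settings. A minimal sketch, assuming the first attempt used `mode=browser` (the `mode` values come from the validation table above; everything else about your cost profile is an assumption):

```python
import requests

def scrape_with_fallback(url, api_key):
    """On a 402, retry once with the cheaper `request` mode."""
    endpoint = "https://scrape.evomi.com/api/v1/scraper/realtime"
    params = {"url": url, "api_key": api_key, "mode": "browser"}
    response = requests.get(endpoint, params=params, timeout=60)
    if response.status_code == 402:
        # Not enough credits for browser mode: drop to the cheapest mode
        params["mode"] = "request"
        response = requests.get(endpoint, params=params, timeout=60)
    return response
```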
### Rate Limit Exceeded (429)

Cause: Too many concurrent requests.

```json
{
  "success": false,
  "error": "Concurrency limit exceeded (5/5)",
  "message": "You have exceeded the maximum number of concurrent requests",
  "limit": 5,
  "reset": 60,
  "timestamp": "2025-10-29T12:00:00Z"
}
```

Solutions:
- Wait for the `reset` time (usually 60 seconds) before retrying
- Implement request queuing on your side (see the semaphore sketch below)
- Use async mode with polling
- Contact support to increase your concurrency limit

See Rate Limits for detailed handling strategies.
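One way to implement client-side queuing is a bounded semaphore sized to your concurrency limit. A minimal sketch, assuming the limit of 5 from the example response above:

```python
import threading
import requests

# Cap in-flight requests at the account's concurrency limit
# (5 in the example 429 response above; adjust for your account).
MAX_CONCURRENT = 5
semaphore = threading.BoundedSemaphore(MAX_CONCURRENT)

def scrape_bounded(url, api_key):
    """Block until a slot is free, then send the request."""
    with semaphore:
        return requests.get(
            "https://scrape.evomi.com/api/v1/scraper/realtime",
            params={"url": url, "api_key": api_key},
            timeout=60,
        )
```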
### Request Timeout (408)

Cause: Request took longer than the timeout limit (30-45 seconds). In this case the API responds with `202 Accepted` and a task reference:

```json
{
  "success": true,
  "task_id": "task_abc123",
  "status": "processing",
  "message": "Task is taking longer than expected. Use task_id to check status.",
  "check_url": "/api/v1/scraper/tasks/task_abc123",
  "timestamp": "2025-10-29T12:00:00Z"
}
```

Note: The request continues processing in the background. Use the `task_id` to retrieve results.

Solutions:
- Poll the task status endpoint: `GET /api/v1/scraper/tasks/{task_id}` (see the polling sketch below)
- Use `async=true` from the start for long-running requests
- Optimize the request (block resources, use datacenter proxies)
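A polling helper might look like the sketch below. The endpoint path and `task_id` come from the response above; the assumption that the status field stays `"processing"` until completion is inferred from the 202 payload, not from a documented schema. Best Practice 4 below reuses this helper as `wait_for_task`:

```python
import time
import requests

def wait_for_task(task_id, api_key, poll_interval=5, max_wait=300):
    """Poll the task-status endpoint until the task finishes.

    Assumption: the status response mirrors the 202 payload above,
    with "status" set to "processing" while the task is running.
    """
    check_url = f"https://scrape.evomi.com/api/v1/scraper/tasks/{task_id}"
    deadline = time.time() + max_wait
    while time.time() < deadline:
        response = requests.get(check_url, params={"api_key": api_key}, timeout=30)
        data = response.json()
        if data.get("status") != "processing":
            return data  # finished, successfully or not
        time.sleep(poll_interval)
    raise TimeoutError(f"Task {task_id} did not finish within {max_wait}s")
```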
### Scraping Errors (500)

Cause: Failed to scrape the target website.

```json
{
  "success": false,
  "error": "Scraping error",
  "message": "Connection refused by target server",
  "url": "https://example.com",
  "credits_used": 0,
  "timestamp": "2025-10-29T12:00:00Z"
}
```

Common scraping errors:
| Message | Cause | Solution |
|---|---|---|
| `DNS resolution failed` | Invalid domain | Check URL spelling |
| `Connection refused` | Target blocked request | Try residential proxy |
| `SSL certificate error` | Invalid HTTPS certificate | Contact support if legitimate site |
| `Page crashed` | Browser issue | Retry request |
| `Navigation timeout` | Page too slow | Increase `wait_seconds` |
| `Target closed` | Connection interrupted | Retry with backoff |
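When handling these programmatically, a rough triage can map message substrings to the actions in the table. A sketch, assuming the messages match the strings shown above verbatim (verify against real responses):

```python
# Transient errors from the table that are worth retrying with backoff
RETRYABLE_SNIPPETS = ("Page crashed", "Target closed", "Navigation timeout")

def classify_scraping_error(message: str) -> str:
    """Map an error message from a 500 response to a next action."""
    if "DNS resolution failed" in message:
        return "fix_url"       # don't retry; the domain itself is wrong
    if "Connection refused" in message:
        return "switch_proxy"  # retry with a residential proxy
    if any(snippet in message for snippet in RETRYABLE_SNIPPETS):
        return "retry"         # transient; retry with backoff
    return "escalate"          # e.g. SSL errors: contact support
```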
## Retry Strategy

The API implements automatic retry logic for transient errors:

- Request Mode: up to 5 retries
- Browser/Auto Mode: up to 3 retries

Retryable errors:
- Network connection errors
- Timeout waiting for elements
- Page crashed
- Browser context destroyed
- SSL errors
- Gateway errors (502, 504)

Non-retryable errors:
- DNS resolution failures
- Authentication errors (401)
- Validation errors (422)
- Insufficient credits (402)
- Page navigation errors
### Client-Side Retry Implementation

Implement exponential backoff for optimal retry behavior:

```python
import time
import requests

def scrape_with_retry(url, api_key, max_retries=3):
    """Scrape with exponential backoff retry logic."""
    for attempt in range(max_retries):
        try:
            response = requests.get(
                "https://scrape.evomi.com/api/v1/scraper/realtime",
                params={"url": url, "api_key": api_key},  # params= URL-encodes the target URL
                timeout=60,
            )

            # Success
            if response.status_code == 200:
                return response

            # Rate limited: wait for the advertised reset window
            elif response.status_code == 429:
                data = response.json()
                wait_time = data.get("reset", 60)
                print(f"Rate limited. Waiting {wait_time}s...")
                time.sleep(wait_time)
                continue

            # Server error: retry with backoff
            elif response.status_code >= 500:
                if attempt < max_retries - 1:
                    wait_time = 2 ** attempt  # 1s, 2s, 4s
                    print(f"Server error. Retrying in {wait_time}s...")
                    time.sleep(wait_time)
                    continue
                else:
                    return response

            # Client error: don't retry
            else:
                return response

        except requests.exceptions.Timeout:
            if attempt < max_retries - 1:
                wait_time = 2 ** attempt
                print(f"Timeout. Retrying in {wait_time}s...")
                time.sleep(wait_time)
            else:
                raise

    return None
```

## Error Handling Best Practices
### 1. Always Check Status Code

```python
response = scrape(url, api_key)

if response.status_code == 200:
    # Success - process content
    content = response.text
elif response.status_code == 402:
    # Insufficient credits
    send_alert("Low credits!")
    add_credits()
elif response.status_code == 422:
    # Validation error
    data = response.json()
    log_error(data['message'])
else:
    # Other error
    handle_error(response)
```

### 2. Parse Error Messages
```python
if not response.ok:
    try:
        error_data = response.json()
        error_msg = error_data.get('message', 'Unknown error')
        print(f"API Error: {error_msg}")
    except ValueError:  # body was not valid JSON
        print(f"HTTP {response.status_code}: {response.text}")
```

### 3. Monitor Credit Headers
```python
credits_remaining = float(response.headers.get('X-Credits-Remaining', 0))

if credits_remaining < 100:
    send_alert(f"Low credits: {credits_remaining}")

if credits_remaining < 10:
    pause_scraping()
```

### 4. Handle Timeouts Gracefully
```python
response = scrape(url, api_key)

if response.status_code == 202:
    # Task submitted or timeout
    data = response.json()
    if 'task_id' in data:
        # Poll for results (see the wait_for_task sketch in Request Timeout above)
        result = wait_for_task(data['task_id'], api_key)
        return result
```

### 5. Use Try-Except Blocks
```python
try:
    response = requests.get(...)
    response.raise_for_status()
    return response.json()
except requests.exceptions.HTTPError as e:
    if e.response.status_code == 429:
        handle_rate_limit(e.response)
    else:
        log_error(e)
except requests.exceptions.Timeout:
    log_error("Request timed out")
except requests.exceptions.RequestException as e:
    log_error(f"Request failed: {e}")
```

## Debugging Tips
### Enable Verbose Logging

```python
import logging

logging.basicConfig(level=logging.DEBUG)

# urllib3 (used by requests) now logs connection details for every request
response = requests.get(...)
```

### Check Response Headers
```python
# Print all headers
for key, value in response.headers.items():
    print(f"{key}: {value}")

# Key headers to check
print(f"Credits Used: {response.headers.get('X-Credits-Used')}")
print(f"Mode Used: {response.headers.get('X-Mode-Used')}")
print(f"Rate Limit: {response.headers.get('X-RateLimit-Remaining')}")
```

### Test with Health Check

```bash
curl "https://scrape.evomi.com/api/v1/scraper/health?api_key=YOUR_API_KEY" -v
```

### Validate Parameters First
```python
def validate_params(params):
    """Validate parameters before sending a request."""
    # Check required fields
    if 'url' not in params:
        raise ValueError("url is required")

    # Check URL format
    if not params['url'].startswith(('http://', 'https://')):
        raise ValueError("url must include http:// or https://")

    # Check mode
    if 'mode' in params and params['mode'] not in ['request', 'browser', 'auto']:
        raise ValueError("mode must be 'request', 'browser', or 'auto'")

    return True
```
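A quick usage example (the parameter values here are hypothetical):

```python
params = {"url": "example.com", "mode": "browser"}
try:
    validate_params(params)
except ValueError as e:
    # Fails fast locally instead of burning a request on a 422
    print(f"Fix before sending: {e}")  # url must include http:// or https://
```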