# Agent Requests
The Agent Request API provides a stateless, AI-powered interface for web scraping tasks. Describe what you want in natural language, and the agent autonomously executes the necessary operations.
## Endpoint Reference

| Endpoint | Method | Description |
|---|---|---|
| `/agent/request` | POST | Execute a stateless AI agent request |

**Authentication:** Include your API key in the `x-api-key` header.

**Base URL:** `https://scrape.evomi.com/api/v1`
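As a sketch, the endpoint, header, and body can be assembled in code like this (assumes Python; `build_agent_request` is an illustrative helper, not part of any SDK, and `YOUR_API_KEY` is a placeholder):

```python
BASE_URL = "https://scrape.evomi.com/api/v1"

def build_agent_request(api_key: str, message: str):
    """Assemble the URL, headers, and JSON body for POST /agent/request."""
    url = f"{BASE_URL}/agent/request"
    headers = {
        "x-api-key": api_key,           # authentication header
        "Content-Type": "application/json",
    }
    body = {"message": message}         # the only required field
    return url, headers, body

# To actually send it, e.g. with the `requests` library:
#   import requests
#   url, headers, body = build_agent_request("YOUR_API_KEY", "Scrape https://evomi.com ...")
#   resp = requests.post(url, headers=headers, json=body)
```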
## Your First Request

The simplest way to use the Agent API is to describe what you want:

```bash
curl -X POST "https://scrape.evomi.com/api/v1/agent/request" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Scrape https://evomi.com and extract the page title and navigation links"
  }'
```

**Response:**
```json
{
  "success": true,
  "response": "I scraped evomi.com and extracted the page title 'Evomi - Premium Proxies' and found 6 navigation links including Home, Pricing, Dashboard, and Documentation.",
  "actions_taken": [
    {
      "tool": "generate_schema",
      "params": {"url": "https://evomi.com", "extraction_prompt": "extract page title and navigation links"},
      "result": {"success": true, "scheme_id": "sch_abc123"}
    },
    {
      "tool": "scrape_url",
      "params": {"url": "https://evomi.com", "scheme_id": "sch_abc123"},
      "result": {"success": true, "status_code": 200}
    }
  ],
  "credits_used": 90,
  "credits_remaining": 910
}
```

## How It Works
Your Message → AI Analysis → Tool Execution → Results Summary

- **Send a Request** → Describe your scraping task in natural language
- **AI Processing** → The agent determines the best approach and tools to use
- **Tool Execution** → Actions are performed (scraping, schema generation, etc.)
- **Response** → Receive a clear summary with all results
## Available Actions

The agent can perform these actions autonomously:

| Action | Description |
|---|---|
| `scrape_url` | Scrape a URL and extract data |
| `generate_schema` | AI-generate an extraction schema for a URL |
| `generate_config` | Create a saved scraper configuration |
| `discover_urls` | Find URLs on a domain matching a pattern |
| `search_domains` | Find domains by searching the web |
| `list_configs` | List your saved scraper configs |
| `get_config` | Retrieve details of a specific config |
| `create_schedule` | Set up a recurring scrape job |
| `list_schedules` | List your scheduled jobs |
| `toggle_schedule` | Pause or activate a schedule |
| `get_account_info` | Check your credit balance |
## Request Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `message` | string | Yes | Natural language description of your task (3-10,000 characters) |
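A minimal client-side check of the documented 3-10,000 character range, as a sketch (the API also validates this server-side; `validate_message` is an illustrative helper):

```python
def validate_message(message: str) -> str:
    """Reject messages outside the documented 3-10,000 character range."""
    if not isinstance(message, str):
        raise TypeError("message must be a string")
    if not 3 <= len(message) <= 10_000:
        raise ValueError("message must be 3-10,000 characters")
    return message
```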
## Examples

### Quick Scrape with Auto-Extraction

Let the AI figure out what to extract:

```bash
curl -X POST "https://scrape.evomi.com/api/v1/agent/request" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Scrape https://example.com/product/123 and extract all product information"
  }'
```

### Generate a Saved Config
Create a reusable configuration:

```bash
curl -X POST "https://scrape.evomi.com/api/v1/agent/request" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Create a config for scraping Amazon product pages - extract title, price, rating, and review count"
  }'
```

**Response:**
```json
{
  "success": true,
  "response": "I've created a config named 'Amazon Products' with ID cfg_xyz789. It extracts product title, price, rating, and review count from Amazon product pages.",
  "actions_taken": [
    {
      "tool": "generate_config",
      "params": {
        "name": "Amazon Products",
        "prompt": "Scrape Amazon product pages - extract title, price, rating, and review count"
      },
      "result": {
        "success": true,
        "config_id": "cfg_xyz789"
      }
    }
  ],
  "credits_used": 60,
  "credits_remaining": 940
}
```

### Multi-Task Request
Ask for multiple things in one request:

```bash
curl -X POST "https://scrape.evomi.com/api/v1/agent/request" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Show me my credit balance, list all my configs, and list all my schedules"
  }'
```

### Discover URLs on a Domain
Find specific page types:

```bash
curl -X POST "https://scrape.evomi.com/api/v1/agent/request" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Find all blog post URLs on evomi.com"
  }'
```

### Search for Domains
Find websites matching a search query:

```bash
curl -X POST "https://scrape.evomi.com/api/v1/agent/request" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Find e-commerce websites in Germany"
  }'
```

### Run an Existing Config
Execute a saved configuration:

```bash
curl -X POST "https://scrape.evomi.com/api/v1/agent/request" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Run my config named \"Product Scraper\" and show me the results"
  }'
```

### Create a Schedule
Set up automated recurring scraping:

```bash
curl -X POST "https://scrape.evomi.com/api/v1/agent/request" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Schedule my \"Product Scraper\" config to run daily at 9 AM UTC"
  }'
```

## Response Format
### Success Response

```json
{
  "success": true,
  "response": "A human-readable summary of what was accomplished",
  "actions_taken": [
    {
      "tool": "tool_name",
      "params": {...},
      "result": {...}
    }
  ],
  "credits_used": 90,
  "credits_remaining": 910
}
```

### Response Fields
| Field | Type | Description |
|---|---|---|
| `success` | boolean | Whether the request completed successfully |
| `response` | string | Human-readable summary of results |
| `actions_taken` | array | List of tools executed and their results |
| `credits_used` | number | Total credits consumed by this request |
| `credits_remaining` | number | Your remaining credit balance |
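A small sketch of reading these fields from a parsed response body (`summarize_response` is an illustrative helper; the field names come from the table above):

```python
def summarize_response(resp: dict) -> str:
    """Turn an agent response into a one-line log message."""
    tools = [action["tool"] for action in resp.get("actions_taken", [])]
    return (f"success={resp['success']} tools={tools} "
            f"used={resp['credits_used']} remaining={resp['credits_remaining']}")
```

This is handy for logging: every request tells you which tools ran and what it cost.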
## Credit Usage

### How Credits Are Calculated
| Component | Cost |
|---|---|
| AI step | 30 credits each |
| Scraping | Base scrape cost (varies by mode/proxy) |
| Schema generation | Includes scrape cost + AI processing |
| URL discovery | Includes crawl cost |
### Credit Reserve

Each request reserves up to 300 credits (10 AI steps × 30 credits). Unused credits are automatically refunded after the request completes.
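The reserve arithmetic can be checked before sending a request, as in this sketch (the 10-step and 30-credit figures come from this section; `can_reserve` is an illustrative helper):

```python
MAX_AI_STEPS = 10
CREDITS_PER_AI_STEP = 30
RESERVE = MAX_AI_STEPS * CREDITS_PER_AI_STEP  # 300 credits held per request

def can_reserve(credits_remaining: int) -> bool:
    """True if the account balance covers the full upfront reserve."""
    return credits_remaining >= RESERVE
```

Checking `credits_remaining` from your previous response against this threshold avoids a 402 on the next call.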
### Monitoring Credits

Every response includes credit information:

```json
{
  "credits_used": 90,
  "credits_remaining": 910
}
```

Response headers also include:

```
X-Credits-Used: 90
X-Credits-Remaining: 910
```

## Error Responses
### Insufficient Credits (402)

```json
{
  "success": false,
  "error": {
    "code": "INSUFFICIENT_CREDITS",
    "message": "Insufficient credits: operation costs 300 credits but only 50 available"
  },
  "credits_remaining": 50
}
```

### Invalid Request (400)
```json
{
  "success": false,
  "error": "'message' is required"
}
```

### Message Too Short/Long (400)
```json
{
  "success": false,
  "error": "Message is too short"
}
```

## Best Practices
### Be Specific

The more specific your request, the better the results:

```
# Good: clear intent and target
"Scrape https://example.com/products and extract product name, price, and availability"

# Less clear
"Get products from example.com"
```

### Use for One-Off Tasks
The Agent API is ideal for:
- Quick data extraction tasks
- Automated workflows
- Testing and prototyping
- Integration with other tools
### Check Actions Taken

Review the `actions_taken` array to understand exactly what the agent did:

```json
{
  "actions_taken": [
    {"tool": "generate_schema", "params": {...}, "result": {...}},
    {"tool": "scrape_url", "params": {...}, "result": {...}}
  ]
}
```

### Handle Rate Limits
If you receive a 429 response, wait before retrying:

```json
{
  "success": false,
  "error": "Rate limit exceeded. Please wait before making more requests."
}
```

## Response Codes
| Status | Meaning | Action |
|---|---|---|
| 200 | Success | Process the response |
| 400 | Bad request | Check your message format |
| 401 | Unauthorized | Verify your API key |
| 402 | Insufficient credits | Add credits to your account |
| 429 | Rate limit exceeded | Wait and retry |
| 500 | Server error | Retry or contact support |
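Following the table above, a 429 can be retried after a wait. This sketch wraps any sender in exponential backoff; `send` is any callable returning `(status, body)` (for example a wrapper around an HTTP POST), and the names are illustrative, not part of the API:

```python
import time

def post_with_retry(send, max_attempts: int = 3, base_delay: float = 1.0):
    """Call send() until it returns a non-429 status or attempts run out."""
    for attempt in range(max_attempts):
        status, body = send()
        if status != 429:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body
```

The same wrapper could also retry 500s; 400/401/402 should not be retried, since they require fixing the request or the account first.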
## What’s Next?

- Scraper Configs → Learn about saved configurations
- Schedules → Automate recurring scraping
- Usage Examples → See code examples in multiple languages