# Python Client
The Evomi Python Client provides a production-ready, typed interface to Evomi’s web scraping API. It supports both synchronous and asynchronous operations with minimal dependencies.
## Installation

Install the package from PyPI:

```bash
pip install evomi-client
```

## Quick Start

### Async Client
```python
import asyncio

from evomi_client import EvomiClient

async def main():
    # Initialize with API key (or set EVOMI_API_KEY env var)
    client = EvomiClient(api_key="your-api-key")

    # Scrape a URL
    result = await client.scrape("https://example.com")
    print(result)

    # Get markdown output with auto JS detection
    result = await client.scrape(
        "https://example.com",
        output="markdown",
        mode="auto"
    )

    # AI-powered extraction
    result = await client.scrape(
        "https://example.com/products",
        ai_enhance=True,
        ai_prompt="Extract product names and prices"
    )

asyncio.run(main())
```

### Sync Client
```python
from evomi_client import EvomiClientSync

# Initialize with API key (or set EVOMI_API_KEY env var)
client = EvomiClientSync(api_key="your-api-key")

# Scrape a URL
result = client.scrape("https://example.com")
print(result)
```

## Authentication
Set your API key via environment variable:

```bash
export EVOMI_API_KEY="your-api-key"
```

Or pass it directly:

```python
client = EvomiClient(api_key="your-api-key")
```

Custom base URL (for testing):

```python
client = EvomiClient(
    api_key="your-api-key",
    base_url="https://custom.evomi.com"
)
```

## Scraping Operations
### scrape(url, ...)

Scrape a single URL with configurable options.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `url` | str | required | URL to scrape |
| `mode` | str | `"auto"` | Scraping mode: `"request"` (fast), `"browser"` (JS), `"auto"` (detect) |
| `output` | str | `"markdown"` | Output format: `"html"`, `"markdown"`, `"screenshot"`, `"pdf"` |
| `device` | str | `"windows"` | Device type: `"windows"`, `"macos"`, `"android"` |
| `proxy_type` | str | `"residential"` | Proxy type: `"datacenter"`, `"residential"` |
| `proxy_country` | str | `"US"` | Two-letter country code |
| `proxy_session_id` | str | None | Proxy session ID (6-8 chars) |
| `wait_until` | str | `"domcontentloaded"` | Wait condition |
| `ai_enhance` | bool | False | Enable AI enhancement |
| `ai_prompt` | str | None | Prompt for AI extraction |
| `js_instructions` | list | None | JS actions: click, wait, fill, wait_for |
| `execute_js` | str | None | Raw JavaScript to execute |
| `screenshot` | bool | False | Capture screenshot |
| `pdf` | bool | False | Capture PDF |
| `wait_seconds` | int | 0 | Seconds to wait after load |
| `config_id` | str | None | Saved config ID |
| `scheme_id` | str | None | Saved extraction schema ID |
| `extract_scheme` | list | None | Inline extraction schema |
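The `js_instructions` parameter takes a list of action dictionaries. The sketch below shows what such a payload might look like; the exact key names (`"action"`, `"selector"`, `"value"`, `"seconds"`) are assumptions for illustration and should be checked against the API reference.

```python
# Hypothetical shape of a js_instructions payload, built from the four
# documented action types: wait_for, fill, click, wait. The field names
# here are an assumption, not confirmed by this page.
js_instructions = [
    {"action": "wait_for", "selector": "#search-box"},
    {"action": "fill", "selector": "#search-box", "value": "laptops"},
    {"action": "click", "selector": "#search-button"},
    {"action": "wait", "seconds": 2},
]

# The list is passed straight through to scrape(), e.g.:
# result = await client.scrape("https://example.com",
#                              js_instructions=js_instructions)
```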
### crawl(domain, ...)

Crawl a website to discover and scrape multiple pages.
```python
result = await client.crawl(
    domain="example.com",
    max_urls=100,
    depth=2,
    url_pattern="/products/.*",  # Regex filter
    async_mode=True  # Returns task_id
)
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `domain` | str | required | Domain to crawl |
| `max_urls` | int | 100 | Maximum URLs to crawl |
| `depth` | int | 2 | Crawl depth |
| `url_pattern` | str | None | Regex pattern to filter URLs |
| `scraper_config` | dict | None | Configuration for scraping each page |
| `async_mode` | bool | False | Return immediately with task ID |
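With `async_mode=True`, `crawl()` returns a task ID rather than results, so the caller polls `get_task_status()` until the task finishes. A minimal polling helper could look like the sketch below; the terminal status values (`"completed"`, `"failed"`) and the shape of the status payload are assumptions for illustration, not guaranteed by this reference.

```python
import asyncio

async def wait_for_task(get_status, interval=2.0, timeout=120.0):
    """Poll get_status() until the task reaches a terminal state.

    get_status is any zero-argument callable returning an awaitable
    that resolves to a status dict.
    """
    elapsed = 0.0
    while elapsed < timeout:
        status = await get_status()
        if status.get("status") in ("completed", "failed"):
            return status
        await asyncio.sleep(interval)
        elapsed += interval
    raise TimeoutError("task did not finish in time")
```

Used with the client, this would be something like `await wait_for_task(lambda: client.get_task_status(task_id, task_type="crawl"))`.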
### map_website(domain, ...)

Discover URLs from a website.
```python
result = await client.map_website(
    domain="example.com",
    sources=["sitemap", "commoncrawl"],
    max_urls=500
)
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `domain` | str | required | Domain to map |
| `sources` | list | `["sitemap", "commoncrawl"]` | Discovery sources |
| `max_urls` | int | 500 | Maximum URLs to discover |
| `url_pattern` | str | None | Regex pattern to filter URLs |
| `check_if_live` | bool | False | Check if URLs are live |
| `async_mode` | bool | False | Return immediately with task ID |
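The `url_pattern` filter is a regular expression applied to each discovered URL. Its effect can be reproduced locally as in the sketch below; whether the API anchors the match at the start of the URL (like `re.match`) or searches anywhere (like `re.search`) is an assumption here, and this sketch uses `re.search`.

```python
import re

# Simulate url_pattern filtering on a discovered URL list.
discovered = [
    "https://example.com/products/widget-1",
    "https://example.com/about",
    "https://example.com/products/widget-2",
]
pattern = re.compile(r"/products/.*")
kept = [url for url in discovered if pattern.search(url)]
# kept now contains only the two /products/ URLs
```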
### search_domains(query, ...)

Find domains by searching the web. Use this to discover websites when you don't know specific domains.
```python
result = await client.search_domains(
    query="best e-commerce sites in Germany",
    max_urls=20,
    region="de-de"
)

# Multiple queries can also be passed as a list
result = await client.search_domains(
    query=["online bookstores", "book shops UK"],
    max_urls=50,
    region="uk-en"
)
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `query` | str or list | required | Search query (or a list of up to 10 queries) |
| `max_urls` | int | 20 | Max domains per query (max: 100) |
| `region` | str | `"us-en"` | Region for results and proxy (e.g., `"us-en"`, `"de-de"`) |
### agent_request(message)

Send a natural language request to the AI agent.

```python
result = await client.agent_request(
    "Scrape example.com and extract all product prices"
)
```

### get_task_status(task_id, task_type)

Check the status of an async task.

```python
result = await client.get_task_status(
    task_id="abc123",
    task_type="scrape"  # or "crawl", "map", "config_generate", "schema"
)
```

## Config Management
### List Configs

```python
configs = await client.list_configs()
```

### Create Config

```python
config = await client.create_config(
    name="My Scraper",
    config={"mode": "browser", "output": "markdown"}
)
```

### Get Config

```python
config = await client.get_config("cfg_abc123")
```

### Update Config

```python
config = await client.update_config("cfg_abc123", name="New Name")
```

### Delete Config

```python
await client.delete_config("cfg_abc123")
```

### Generate Config from Prompt

```python
config = await client.generate_config(
    name="Amazon Scraper",
    prompt="Scrape product title, price, and reviews from Amazon"
)
```

## Schema Management
### List Schemas

```python
schemas = await client.list_schemas()
```

### Create Schema

```python
schema = await client.create_schema(
    name="Product Schema",
    config={
        "url": "https://example.com/product",
        "extract_scheme": [
            {"label": "title", "type": "content", "selector": "h1"},
            {"label": "price", "type": "content", "selector": ".price"}
        ]
    },
    test=True  # Test the schema
)
```

### Get Schema Status

```python
status = await client.get_schema_status("sch_abc123")
```

## Schedule Management
### Create Schedule

```python
schedule = await client.create_schedule(
    name="Daily Price Check",
    config_id="cfg_abc123",
    interval_minutes=1440,  # Daily
    start_time="09:00"  # UTC
)
```

### List Schedules

```python
schedules = await client.list_schedules(active_only=True)
```

### Toggle Schedule

```python
await client.toggle_schedule("sched_abc123")
```

### Get Execution History

```python
runs = await client.list_schedule_runs("sched_abc123")
```

## Storage Management
### Create Storage Config

```python
storage = await client.create_storage_config(
    name="My S3",
    storage_type="s3_compatible",
    config={
        "bucket": "my-bucket",
        "region": "us-east-1",
        "access_key": "...",
        "secret_key": "..."
    },
    set_as_default=True
)
```

### List Storage Configs

```python
configs = await client.list_storage_configs()
```

## Account Info
```python
info = await client.get_account_info()
print(f"Credits remaining: {info.get('credits', 'N/A')}")
```

## Error Handling
```python
import httpx

from evomi_client import EvomiClient

client = EvomiClient(api_key="your-key")

# Inside an async function:
try:
    result = await client.scrape("https://example.com")
except httpx.HTTPStatusError as e:
    print(f"HTTP error: {e.response.status_code}")
    print(f"Response: {e.response.text}")
except httpx.RequestError as e:
    print(f"Request error: {e}")
```

## Pricing & Credits
All operations consume credits:
| Operation | Cost |
|---|---|
| Base request | 1 credit |
| Browser mode | 5x multiplier |
| Residential proxy | 2x multiplier |
| AI enhancement | +30 credits |
| Screenshot/PDF | +1 credit each |
Credit information is returned in response headers and in the `_credits_used` and `_credits_remaining` fields.
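The table above implies a simple cost calculation, sketched below. How the multipliers compose (multiplicatively, with the flat add-ons applied afterwards) is an assumption drawn from the table's wording, not a billing guarantee.

```python
def estimate_credits(browser=False, residential=False, ai=False,
                     screenshot=False, pdf=False):
    """Rough credit estimate per the pricing table (assumed composition)."""
    cost = 1  # base request
    if browser:
        cost *= 5  # browser mode: 5x multiplier
    if residential:
        cost *= 2  # residential proxy: 2x multiplier
    if ai:
        cost += 30  # AI enhancement: flat +30
    cost += int(screenshot) + int(pdf)  # +1 credit each
    return cost

# Plain request: 1 credit.
# Browser + residential + AI: 1 * 5 * 2 + 30 = 40 credits.
```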
## Resources
| Resource | Link |
|---|---|
| PyPI Package | pypi.org/project/evomi-client |
| Evomi Website | evomi.com |
| API Documentation | docs.evomi.com |
## Benefits

- Async & Sync Support — Use `EvomiClient` for async or `EvomiClientSync` for synchronous operations
- Full API Coverage — All Evomi endpoints supported
- Type Hints — Complete type annotations for IDE support