Python Client
The Evomi Python Client provides a production-ready, typed interface to Evomi’s API. It supports both synchronous and asynchronous operations with minimal dependencies.
Installation
Install the package from PyPI:
pip install evomi-client
Quick Start
Async Client
import asyncio
from evomi_client import EvomiClient
async def main():
# Initialize with API key (or set EVOMI_API_KEY env var)
client = EvomiClient(api_key="your-api-key")
# Scrape a URL
result = await client.scrape("https://example.com")
print(result["content"])
asyncio.run(main())
Sync Client
from evomi_client import EvomiClientSync
client = EvomiClientSync(api_key="your-api-key")
result = client.scrape("https://example.com")
print(result["content"])
Authentication
Set your API key via environment variable:
export EVOMI_API_KEY="your-api-key"
Or pass it directly:
client = EvomiClient(api_key="your-api-key")
Scraping Operations
scrape(url, …)
Scrape a single URL with configurable options.
result = await client.scrape(
"https://example.com",
mode="auto", # "request", "browser", or "auto"
output="markdown", # "html", "markdown", "screenshot", "pdf"
device="windows", # "windows", "macos", "android"
proxy_type="residential",
proxy_country="US",
proxy_session_id="abc123",
wait_until="domcontentloaded",
ai_enhance=True,
ai_prompt="Extract product data",
ai_source="markdown",
js_instructions=[{"click": ".load-more"}],
execute_js="window.scrollTo(0, document.body.scrollHeight)",
wait_seconds=2,
screenshot=False,
pdf=False,
excluded_tags=["nav", "footer"],
excluded_selectors=[".ads"],
block_resources=["image", "stylesheet"],
additional_headers={"X-Custom": "value"},
capture_headers=True,
network_capture=[{"url_pattern": "/api/.*"}],
async_mode=False,
config_id="cfg_abc123",
scheme_id="sch_abc123",
extract_scheme=[{"label": "title", "type": "content", "selector": "h1"}],
storage_id="stor_abc123",
use_default_storage=False,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| url | str | required | URL to scrape |
| mode | str | "auto" | Scraping mode: "request" (fast), "browser" (JS), "auto" (detect) |
| output | str | "markdown" | Output format: "html", "markdown", "screenshot", "pdf" |
| device | str | "windows" | Device type: "windows", "macos", "android" |
| proxy_type | str | "residential" | Proxy type: "datacenter", "residential" |
| proxy_country | str | "US" | Two-letter country code |
| proxy_session_id | str | None | Proxy session ID (6-8 chars) |
| wait_until | str | "domcontentloaded" | Wait condition |
| ai_enhance | bool | False | Enable AI enhancement |
| ai_prompt | str | None | Prompt for AI extraction |
| ai_source | str | None | AI source: "markdown", "screenshot" |
| ai_force_json | bool | True | Force AI response to valid JSON |
| js_instructions | list | None | JS actions: click, wait, fill, wait_for |
| execute_js | str | None | Raw JavaScript to execute |
| wait_seconds | int | 0 | Seconds to wait after load |
| screenshot | bool | False | Capture screenshot |
| pdf | bool | False | Capture PDF |
| excluded_tags | list | None | HTML tags to remove |
| excluded_selectors | list | None | CSS selectors to remove |
| block_resources | list | None | Resource types to block |
| additional_headers | dict | None | Extra HTTP headers |
| capture_headers | bool | False | Capture response headers |
| network_capture | list | None | Network capture filters |
| async_mode | bool | False | Return immediately with task ID |
| config_id | str | None | Saved config ID |
| scheme_id | str | None | Saved extraction schema ID |
| extract_scheme | list | None | Inline extraction schema |
| storage_id | str | None | Storage config ID |
| use_default_storage | bool | False | Use default storage |
| delivery | str | "json" | Response format: "raw" or "json" |
| include_content | bool | True | Include content in JSON response |
| webhook | WebhookConfig | None | Webhook configuration |
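An inline extract_scheme is a list of field definitions like the one in the call above. As a hypothetical client-side helper (not part of the library), a quick local check can catch typos before a request is sent; the required keys (`label`, `type`, `selector`) are taken from the examples in this section, though the API may accept additional keys.

```python
def validate_extract_scheme(scheme):
    """Illustrative sanity check for extract_scheme entries.

    Only verifies the keys shown in this document's examples;
    it is not an official validator.
    """
    required = {"label", "type", "selector"}
    errors = []
    for i, entry in enumerate(scheme):
        if not isinstance(entry, dict):
            errors.append(f"entry {i}: expected dict, got {type(entry).__name__}")
            continue
        missing = required - entry.keys()
        if missing:
            errors.append(f"entry {i}: missing keys {sorted(missing)}")
    return errors

scheme = [
    {"label": "title", "type": "content", "selector": "h1"},
    {"label": "price", "selector": ".price"},  # missing "type"
]
print(validate_extract_scheme(scheme))
# ["entry 1: missing keys ['type']"]
```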
crawl(domain, …)
Crawl a website to discover and scrape multiple pages.
result = await client.crawl(
domain="example.com",
max_urls=100,
depth=2,
url_pattern="/products/.*",
scraper_config={"mode": "browser", "output": "markdown"},
async_mode=False,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| domain | str | required | Domain to crawl |
| max_urls | int | 100 | Maximum URLs to crawl |
| depth | int | 2 | Crawl depth |
| url_pattern | str | None | Regex pattern to filter URLs |
| scraper_config | dict | None | Configuration for scraping each page |
| async_mode | bool | False | Return immediately with task ID |
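The url_pattern value is a regular expression applied to discovered URLs. A rough local equivalent is useful for testing a pattern before launching a crawl; whether the server matches anywhere in the URL (as `re.search` does below) or requires a full match is an assumption here.

```python
import re

def filter_urls(urls, url_pattern):
    # Assumption: the pattern matches anywhere in the URL,
    # mirroring re.search semantics.
    pattern = re.compile(url_pattern)
    return [u for u in urls if pattern.search(u)]

urls = [
    "https://example.com/products/widget-1",
    "https://example.com/about",
    "https://example.com/products/widget-2",
]
print(filter_urls(urls, r"/products/.*"))
# ['https://example.com/products/widget-1', 'https://example.com/products/widget-2']
```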
map_website(domain, …)
Discover URLs from a website via sitemaps, CommonCrawl, or crawling.
result = await client.map_website(
domain="example.com",
sources=["sitemap", "commoncrawl"],
max_urls=500,
url_pattern="/products/.*",
check_if_live=False,
depth=1,
async_mode=False,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| domain | str | required | Domain to map |
| sources | list | ["sitemap", "commoncrawl"] | Sources: "sitemap", "commoncrawl", "crawl" |
| max_urls | int | 500 | Maximum URLs to discover |
| url_pattern | str | None | Regex pattern to filter URLs |
| check_if_live | bool | False | Check if URLs are live |
| depth | int | 1 | Crawl depth if using crawl source |
| async_mode | bool | False | Return immediately with task ID |
search_domains(query, …)
Find domains by searching the web.
# Single query
result = await client.search_domains(
query="e-commerce platforms",
max_urls=20,
region="us-en",
)
# Multiple queries (up to 10)
result = await client.search_domains(
query=["web scraping tools", "data extraction services"],
max_urls=20,
region="us-en",
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| query | str or list | required | Search query or list of up to 10 queries |
| max_urls | int | 20 | Max domains per query (max: 100) |
| region | str | "us-en" | Region for results (e.g., "us-en", "de-de") |
agent_request(message)
Send a natural language request to the AI agent.
result = await client.agent_request(
"Scrape example.com and extract all product prices"
)
get_task_status(task_id, task_type)
Check the status of an async task.
result = await client.get_task_status(
task_id="abc123",
task_type="scrape" # "scrape", "crawl", "map", "config_generate", "schema"
)
Config Management
Save and reuse scrape configurations.
list_configs(…)
configs = await client.list_configs(
page=1,
per_page=20,
sort_by="created_at",
sort_order="desc",
)
create_config(name, config)
config = await client.create_config(
name="Product Scraper",
config={"mode": "browser", "output": "markdown"}
)
get_config(config_id)
config = await client.get_config("cfg_abc123")
update_config(config_id, …)
config = await client.update_config(
"cfg_abc123",
name="New Name",
config={"mode": "request"}
)
delete_config(config_id)
await client.delete_config("cfg_abc123")
generate_config(name, prompt)
Generate a scrape config from natural language using AI.
config = await client.generate_config(
name="Amazon Scraper",
prompt="Scrape product title and price from Amazon product pages"
)
Schema Management
Define reusable structured data extraction schemas.
list_schemas(…)
schemas = await client.list_schemas(
page=1,
per_page=20,
sort_by="created_at",
sort_order="desc",
)
create_schema(name, config, …)
schema = await client.create_schema(
name="Product Schema",
config={
"url": "https://example.com/product",
"extract_scheme": [
{"label": "title", "type": "content", "selector": "h1"},
{"label": "price", "type": "content", "selector": ".price"}
]
},
test=True,
fix=False,
)
get_schema(scheme_id)
schema = await client.get_schema("sch_abc123")
update_schema(scheme_id, name, config, …)
schema = await client.update_schema(
"sch_abc123",
name="Updated Schema",
config={"url": "...", "extract_scheme": [...]},
test=True,
)
delete_schema(scheme_id)
await client.delete_schema("sch_abc123")
get_schema_status(scheme_id)
status = await client.get_schema_status("sch_abc123")
Schedule Management
Run scrape configs on a recurring schedule.
list_schedules(…)
schedules = await client.list_schedules(
page=1,
per_page=20,
active_only=False,
)
create_schedule(name, config_id, interval_minutes, …)
schedule = await client.create_schedule(
name="Daily Price Check",
config_id="cfg_abc123",
interval_minutes=1440, # Daily
start_time="09:00", # UTC
stop_on_error=True,
)
get_schedule(schedule_id)
schedule = await client.get_schedule("sched_abc123")
update_schedule(schedule_id, …)
schedule = await client.update_schedule(
"sched_abc123",
name="New Name",
interval_minutes=720,
)
delete_schedule(schedule_id)
await client.delete_schedule("sched_abc123")
toggle_schedule(schedule_id)
await client.toggle_schedule("sched_abc123")
list_schedule_runs(schedule_id, …)
runs = await client.list_schedule_runs(
"sched_abc123",
page=1,
per_page=20,
)
Storage Management
Connect cloud storage to automatically save scrape results.
list_storage_configs()
configs = await client.list_storage_configs()
create_storage_config(name, storage_type, config, …)
# S3-compatible storage
storage = await client.create_storage_config(
name="My S3",
storage_type="s3_compatible",
config={
"bucket": "my-bucket",
"region": "us-east-1",
"access_key": "...",
"secret_key": "...",
},
set_as_default=True,
)
# Google Cloud Storage
storage = await client.create_storage_config(
name="My GCS",
storage_type="gcs",
config={
"bucket": "my-bucket",
"credentials_json": "...",
},
)
# Azure Blob Storage
storage = await client.create_storage_config(
name="My Azure",
storage_type="azure_blob",
config={
"container": "my-container",
"connection_string": "...",
},
)
update_storage_config(storage_id, …)
storage = await client.update_storage_config(
"stor_abc123",
name="Renamed Storage",
set_as_default=True,
)
delete_storage_config(storage_id)
await client.delete_storage_config("stor_abc123")
Public API
Access proxy credentials and related data.
get_public_api()
Access the Evomi Public API to get proxy credentials and related data.
data = await client.get_public_api()
# Returns proxy credentials and product information
get_proxy_data()
Get detailed information about your proxy products.
data = await client.get_proxy_data()
# Returns: {"products": {"rp": {...}, "sdc": {...}, "mp": {...}}, ...}
get_targeting_options()
Get available targeting parameters for different proxy types.
options = await client.get_targeting_options()
get_scraper_data()
Get information about your Scraper API access.
data = await client.get_scraper_data()
get_browser_data()
Get information about your Browser API access.
data = await client.get_browser_data()
rotate_session(session_id, product)
Force an IP address change for an existing proxy session.
result = await client.rotate_session(
session_id="abc12345",
product="rp" # "rpc", "rp", "sdc", "mp"
)
generate_proxies(product, …)
Generate proxy strings with specific targeting parameters.
proxies = await client.generate_proxies(
product="rp",
countries="US,GB,DE",
city="New York",
session="sticky",
amount=10,
protocol="http",
lifetime=30,
adblock=True,
)
# Returns plain text, one proxy per line
Account Info
get_account_info()
info = await client.get_account_info()
print(info.get("credits", "N/A"))
Webhooks
Webhooks allow you to receive notifications when scraping operations complete, fail, or start. Supports Discord, Slack, and custom endpoints.
WebhookConfig
from evomi_client import WebhookConfig
# Discord webhook
webhook = WebhookConfig(
url="https://discord.com/api/webhooks/...",
webhook_type="discord",
events=["completed", "failed"]
)
# Custom webhook with HMAC signature
webhook = WebhookConfig(
url="https://your-server.com/webhook",
webhook_type="custom",
events=["completed"],
secret="your-secret-key"
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| url | str | required | Your webhook endpoint URL |
| webhook_type | str | "custom" | Type: "discord", "slack", or "custom" |
| events | list | ["completed"] | Events to subscribe to |
| secret | str | None | Secret key for HMAC signature (custom only) |
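For custom webhooks with a secret, the receiving server should verify the payload signature before trusting it. The scheme below (hex-encoded HMAC-SHA256 of the raw request body) and any header name carrying the signature are assumptions for illustration; confirm the actual signing scheme in Evomi's webhook documentation.

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, received_signature: str, secret: str) -> bool:
    """Verify an assumed hex-encoded HMAC-SHA256 signature of the raw body."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, received_signature)

body = b'{"event": "completed", "task_id": "abc123"}'
sig = hmac.new(b"your-secret-key", body, hashlib.sha256).hexdigest()
print(verify_signature(body, sig, "your-secret-key"))  # True
print(verify_signature(b"tampered", sig, "your-secret-key"))  # False
```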
Using Webhooks with Scrape
from evomi_client import EvomiClient, WebhookConfig
client = EvomiClient(api_key="your-api-key")
webhook = WebhookConfig(
url="https://discord.com/api/webhooks/...",
webhook_type="discord",
events=["completed", "failed"]
)
result = await client.scrape(
"https://example.com",
mode="browser",
webhook=webhook
)
Using Webhooks with Crawl/Map/Search
# With crawl
result = await client.crawl(
domain="example.com",
max_urls=100,
webhook=webhook
)
# With map_website
result = await client.map_website(
domain="example.com",
webhook=webhook
)
# With search_domains
result = await client.search_domains(
query="e-commerce platforms",
webhook=webhook
)
Using Webhooks with Schedules
schedule = await client.create_schedule(
name="Daily Price Check",
config_id="cfg_abc123",
interval_minutes=1440,
webhook=webhook
)
Proxy String Builder
Evomi provides a proxy network you can use with any HTTP client. Build proxy strings for tools like requests, httpx, or aiohttp:
Automatic Proxy Configuration
from evomi_client import EvomiClient, ProxyType, ProxyProtocol
client = EvomiClient(api_key="your-api-key")
# Build a proxy string for US residential proxy
proxy_string = await client.build_proxy_string(
proxy_type=ProxyType.RESIDENTIAL,
country="US",
session="abc12345"
)
print(proxy_string)
# Output: http://user:[email protected]:1000
Manual Proxy Configuration
from evomi_client import ProxyConfig, ProxyType, ProxyProtocol
config = ProxyConfig(
proxy_type=ProxyType.RESIDENTIAL,
protocol=ProxyProtocol.HTTP,
country="US",
city="New York",
username="your-username",
password="your-password"
)
proxy_string = config.build_proxy_string()
Proxy Configuration Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| proxy_type | ProxyType | RESIDENTIAL | Type: RESIDENTIAL, DATACENTER, MOBILE |
| protocol | ProxyProtocol | HTTP | Protocol: HTTP, HTTPS, SOCKS5 |
| country | str | None | Two-letter ISO country code (e.g., "US", "DE") |
| city | str | None | City name (spaces replaced with dots) |
| region | str | None | State/region name |
| continent | str | None | Continent: africa, asia, europe, north.america, oceania, south.america |
| isp | str | None | ISP shortcode (e.g., "att", "comcast") |
| session | str | None | Sticky session ID (6-10 alphanumeric chars) |
| hardsession | str | None | Hard session ID (6-10 alphanumeric chars) |
| lifetime | int | None | Session duration in minutes (max 120) |
| mode | ResidentialMode | STANDARD | Residential mode: STANDARD, SPEED, QUALITY |
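Session IDs must be 6-10 alphanumeric characters. A minimal client-side check mirroring that constraint (a hypothetical helper, not part of the library):

```python
import re

# Per the options table: 6-10 alphanumeric characters.
SESSION_ID_RE = re.compile(r"^[A-Za-z0-9]{6,10}$")

def is_valid_session_id(session: str) -> bool:
    """Return True if the ID satisfies the documented 6-10 alphanumeric rule."""
    return bool(SESSION_ID_RE.fullmatch(session))

print(is_valid_session_id("abc12345"))   # True
print(is_valid_session_id("ab"))         # False (too short)
print(is_valid_session_id("abc-12345"))  # False (non-alphanumeric)
```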
Expert Settings
| Parameter | Type | Default | Description |
|---|---|---|---|
| latency | int | None | Max latency in ms |
| fraudscore | int | None | Max fraud score |
| device | str | None | Device type: "windows", "unix", "apple" |
| activesince | int | None | Min connection minutes |
| asn | str | None | ASN filter |
| zip | str | None | Zipcode targeting |
| http3 | bool | False | Enable HTTP3/QUIC |
| localdns | bool | False | Local DNS resolution |
| udp | bool | False | UDP support (Enterprise only) |
| extended | bool | False | Extended pool (cannot combine with other expert filters) |
Proxy Endpoints & Ports
| Type | HTTP | HTTPS | SOCKS5 |
|---|---|---|---|
| Residential | rp.evomi.com:1000 | rp.evomi.com:1001 | rp.evomi.com:1002 |
| Datacenter | dcp.evomi.com:2000 | dcp.evomi.com:2001 | dcp.evomi.com:2002 |
| Mobile | mp.evomi.com:3000 | mp.evomi.com:3001 | mp.evomi.com:3002 |
Proxy String Format
{protocol}://{username}:{password}_{params}@{endpoint}:{port}
Example: http://user:[email protected]:1000
Proxy Types
| Type | Endpoint | Use Case |
|---|---|---|
| Residential | rp.evomi.com:1000 | Human-like browsing, anti-bot bypass |
| Datacenter | dcp.evomi.com:2000 | Fast, high-volume requests |
| Mobile | mp.evomi.com:3000 | Highest trust, mobile-specific targets |
Error Handling
import httpx
try:
result = await client.scrape("https://example.com")
except httpx.HTTPStatusError as e:
print(f"API error: {e.response.status_code}")
print(f"Details: {e.response.text}")
Credits & Pricing
All operations consume credits:
| Operation | Cost |
|---|---|
| Base request | 1 credit |
| Browser mode | 5x multiplier |
| Residential proxy | 2x multiplier |
| AI enhancement | +30 credits |
| Screenshot/PDF | +1 credit each |
Credit usage is returned in response headers and in _credits_used, _credits_remaining fields.
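As an illustration of the arithmetic, the table above can be applied literally: multipliers compound on the base cost, then flat add-ons are added. The defaults and exact billing rules are defined server-side; this is only a local estimate.

```python
def estimate_credits(mode="request", proxy_type="datacenter",
                     ai_enhance=False, screenshot=False, pdf=False):
    """Estimate credit cost by applying the pricing table literally.

    Assumption: multipliers compound, flat costs are added afterwards.
    """
    cost = 1  # base request
    if mode == "browser":
        cost *= 5  # browser mode multiplier
    if proxy_type == "residential":
        cost *= 2  # residential proxy multiplier
    if ai_enhance:
        cost += 30  # flat AI enhancement cost
    cost += int(screenshot) + int(pdf)  # +1 credit each
    return cost

# Browser mode over a residential proxy with AI enhancement:
print(estimate_credits(mode="browser", proxy_type="residential", ai_enhance=True))
# 1 * 5 * 2 + 30 = 40
```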
Resources
| Resource | Link |
|---|---|
| PyPI Package | pypi.org/project/evomi-client |
| Evomi Website | evomi.com |
| API Documentation | docs.evomi.com |
Benefits
- Async & Sync Support — Use EvomiClient for async or EvomiClientSync for synchronous operations
- Full API Coverage — All Evomi endpoints supported
- Type Hints — Complete type annotations for IDE support