# JS Client

The Evomi JavaScript Client provides a production-ready, promise-based interface to Evomi's API. It is designed to work seamlessly in Node.js environments with native `fetch` support.
## Installation

Install the package from npm:

```bash
npm install evomi-client
```

## Quick Start
```javascript
import { EvomiClient } from 'evomi-client';

const client = new EvomiClient({ apiKey: 'your-api-key' });

// Scrape a webpage
const result = await client.scrape('https://example.com');
console.log(result.content);
```

## Authentication
Set your API key via environment variable:

```bash
export EVOMI_API_KEY="your-api-key"
```

Or pass it directly:

```javascript
const client = new EvomiClient({ apiKey: 'your-api-key' });
```

Custom base URL (for testing):

```javascript
const client = new EvomiClient({
  apiKey: 'your-api-key',
  baseUrl: 'https://custom.evomi.com'
});
```

## Scraping Operations
### scrape(url, options)

Scrape a single URL with configurable options.

```javascript
const result = await client.scrape('https://example.com', {
  mode: 'auto',             // 'request', 'browser', or 'auto'
  output: 'markdown',       // 'html', 'markdown', 'screenshot', 'pdf'
  device: 'windows',        // 'windows', 'macos', 'android'
  proxyType: 'residential',
  proxyCountry: 'US',
  proxySessionId: 'abc123',
  waitUntil: 'domcontentloaded',
  aiEnhance: true,
  aiPrompt: 'Extract product data',
  aiSource: 'markdown',
  jsInstructions: [{ click: '.load-more' }],
  executeJs: 'window.scrollTo(0, document.body.scrollHeight)',
  waitSeconds: 2,
  screenshot: false,
  pdf: false,
  excludedTags: ['nav', 'footer'],
  excludedSelectors: ['.ads'],
  blockResources: ['image', 'stylesheet'],
  additionalHeaders: { 'X-Custom': 'value' },
  captureHeaders: true,
  networkCapture: [{ url_pattern: '/api/.*' }],
  asyncMode: false,
  configId: 'cfg_abc123',
  schemeId: 'sch_abc123',
  extractScheme: [{ label: 'title', type: 'content', selector: 'h1' }],
  storageId: 'stor_abc123',
  useDefaultStorage: false,
  noHtml: false,
});
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `url` | string | required | URL to scrape |
| `mode` | string | `'auto'` | Scraping mode: `'request'`, `'browser'`, `'auto'` |
| `output` | string | `'markdown'` | Output format: `'html'`, `'markdown'`, `'screenshot'`, `'pdf'` |
| `device` | string | `'windows'` | Device type: `'windows'`, `'macos'`, `'android'` |
| `proxyType` | string | `'residential'` | Proxy type: `'datacenter'`, `'residential'` |
| `proxyCountry` | string | `'US'` | Two-letter country code |
| `proxySessionId` | string | — | Proxy session ID (6-8 chars) |
| `waitUntil` | string | `'domcontentloaded'` | Wait condition |
| `aiEnhance` | boolean | `false` | Enable AI extraction |
| `aiPrompt` | string | — | Prompt for AI extraction |
| `aiSource` | string | — | AI source: `'markdown'`, `'screenshot'` |
| `aiForceJson` | boolean | `true` | Force AI response to valid JSON |
| `jsInstructions` | array | — | JS actions: `click`, `wait`, `fill`, `wait_for` |
| `executeJs` | string | — | Raw JavaScript to execute |
| `waitSeconds` | number | `0` | Seconds to wait after page load |
| `screenshot` | boolean | `false` | Capture screenshot |
| `pdf` | boolean | `false` | Capture PDF |
| `excludedTags` | array | — | HTML tags to remove |
| `excludedSelectors` | array | — | CSS selectors to remove |
| `blockResources` | array | — | Resource types to block |
| `additionalHeaders` | object | — | Extra HTTP headers |
| `captureHeaders` | boolean | `false` | Capture response headers |
| `networkCapture` | array | — | Network capture filters |
| `asyncMode` | boolean | `false` | Return immediately with task ID |
| `configId` | string | — | Saved config ID |
| `schemeId` | string | — | Saved extraction schema ID |
| `extractScheme` | array | — | Inline extraction schema |
| `storageId` | string | — | Storage config ID |
| `useDefaultStorage` | boolean | `false` | Use default storage |
| `noHtml` | boolean | `false` | Exclude HTML from response |
### crawl(domain, options)

Crawl a website to discover and scrape multiple pages.

```javascript
const result = await client.crawl('example.com', {
  maxUrls: 100,
  depth: 2,
  urlPattern: '/blog/.*',
  scraperConfig: { mode: 'browser', output: 'markdown' },
  asyncMode: false,
});
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `domain` | string | required | Domain to crawl |
| `maxUrls` | number | `100` | Maximum URLs to crawl |
| `depth` | number | `2` | Crawl depth |
| `urlPattern` | string | — | Regex pattern to filter URLs |
| `scraperConfig` | object | — | Config for scraping each page |
| `asyncMode` | boolean | `false` | Return immediately with task ID |
### mapWebsite(domain, options)

Discover URLs from a website via sitemaps, CommonCrawl, or crawling.

```javascript
const result = await client.mapWebsite('example.com', {
  sources: ['sitemap', 'commoncrawl'],
  maxUrls: 500,
  urlPattern: '/products/.*',
  checkIfLive: false,
  depth: 1,
  asyncMode: false,
});
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `domain` | string | required | Domain to map |
| `sources` | array | `['sitemap', 'commoncrawl']` | Sources: `'sitemap'`, `'commoncrawl'`, `'crawl'` |
| `maxUrls` | number | `500` | Maximum URLs to discover |
| `urlPattern` | string | — | Regex pattern to filter URLs |
| `checkIfLive` | boolean | `false` | Check if URLs are live |
| `depth` | number | `1` | Crawl depth if using crawl source |
| `asyncMode` | boolean | `false` | Return immediately with task ID |
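The `urlPattern` filter above is applied server-side, but the same kind of regex can be reused locally, for example to narrow a list of already-discovered URLs. A small sketch (the helper name is ours, not part of the client):

```javascript
// Filter a list of URLs with the same regex syntax mapWebsite accepts
// for urlPattern. Purely local; no API call involved.
function filterUrls(urls, pattern) {
  const re = new RegExp(pattern);
  return urls.filter((url) => re.test(url));
}
```

For example, `filterUrls(result.urls, '/products/.*')` keeps only product pages (assuming the result exposes a `urls` array; check the actual response shape).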
### searchDomains(query, options)

Find domains by searching the web.

```javascript
// Single query
const result = await client.searchDomains('e-commerce platforms', {
  maxUrls: 20,
  region: 'us-en',
});

// Multiple queries (up to 10)
const results = await client.searchDomains(
  ['web scraping tools', 'data extraction services'],
  { maxUrls: 20, region: 'us-en' }
);
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `query` | string or array | required | Search query or list of up to 10 queries |
| `maxUrls` | number | `20` | Max domains per query (max: 100) |
| `region` | string | `'us-en'` | Region for results |
### agentRequest(message)

Send a natural language request to the AI agent.

```javascript
const result = await client.agentRequest(
  'Scrape example.com and extract all product prices'
);
```

### getTaskStatus(taskId, taskType)

Check the status of an async task.

```javascript
const result = await client.getTaskStatus('abc123', 'scrape');
// taskType: 'scrape' | 'crawl' | 'map' | 'config_generate' | 'schema'
```
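With `asyncMode: true`, an operation returns immediately and you poll `getTaskStatus` until the task finishes. A minimal polling sketch; note that the `status` field and the `'completed'`/`'failed'` values are assumptions here, so check the actual response shape returned by your account:

```javascript
// Poll an async task until it completes, fails, or times out.
async function pollTask(getStatus, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt += 1) {
    const result = await getStatus();
    if (result.status === 'completed') return result;
    if (result.status === 'failed') throw new Error('Task failed');
    // Wait before polling again
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for task to complete');
}

// Hypothetical usage (the task ID field name may differ):
// const task = await client.scrape('https://example.com', { asyncMode: true });
// const result = await pollTask(() => client.getTaskStatus(task.taskId, 'scrape'));
```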
## Config Management

Save and reuse scrape configurations.

### listConfigs(options)

```javascript
const configs = await client.listConfigs({
  page: 1,
  perPage: 20,
  sortBy: 'created_at',
  sortOrder: 'desc',
});
```

### createConfig(name, config)

```javascript
const config = await client.createConfig('Product Scraper', {
  mode: 'browser',
  output: 'markdown',
});
```

### getConfig(configId)

```javascript
const config = await client.getConfig('cfg_abc123');
```

### updateConfig(configId, options)

```javascript
const config = await client.updateConfig('cfg_abc123', {
  name: 'New Name',
  config: { mode: 'request' },
});
```

### deleteConfig(configId)

```javascript
await client.deleteConfig('cfg_abc123');
```

### generateConfig(name, prompt)

Generate a scrape config from natural language using AI.

```javascript
const config = await client.generateConfig(
  'Amazon Scraper',
  'Scrape product title and price from Amazon product pages'
);
```

## Schema Management
Define reusable structured data extraction schemas.

### listSchemas(options)

```javascript
const schemas = await client.listSchemas({
  page: 1,
  perPage: 20,
  sortBy: 'created_at',
  sortOrder: 'desc',
});
```

### createSchema(name, config, options)

```javascript
const schema = await client.createSchema(
  'Product Schema',
  {
    url: 'https://example.com/product',
    extract_scheme: [
      { label: 'title', type: 'content', selector: 'h1' },
      { label: 'price', type: 'content', selector: '.price' },
    ],
  },
  { test: true, fix: false }
);
```

### getSchema(schemeId)

```javascript
const schema = await client.getSchema('sch_abc123');
```

### updateSchema(schemeId, name, config, options)

```javascript
const schema = await client.updateSchema(
  'sch_abc123',
  'Updated Schema',
  { url: '...', extract_scheme: [...] },
  { test: true }
);
```

### deleteSchema(schemeId)

```javascript
await client.deleteSchema('sch_abc123');
```

### getSchemaStatus(schemeId)

```javascript
const status = await client.getSchemaStatus('sch_abc123');
```

## Schedule Management
Run scrape configs on a recurring schedule.

### listSchedules(options)

```javascript
const schedules = await client.listSchedules({
  page: 1,
  perPage: 20,
  activeOnly: false,
});
```

### createSchedule(name, configId, intervalMinutes, options)

```javascript
const schedule = await client.createSchedule(
  'Daily Price Check',
  'cfg_abc123',
  1440, // daily (interval in minutes)
  { startTime: '09:00', stopOnError: true }
);
```

### getSchedule(scheduleId)

```javascript
const schedule = await client.getSchedule('sched_abc123');
```

### updateSchedule(scheduleId, options)

```javascript
const schedule = await client.updateSchedule('sched_abc123', {
  name: 'New Name',
  intervalMinutes: 720,
});
```

### deleteSchedule(scheduleId)

```javascript
await client.deleteSchedule('sched_abc123');
```

### toggleSchedule(scheduleId)

```javascript
await client.toggleSchedule('sched_abc123');
```

### listScheduleRuns(scheduleId, options)

```javascript
const runs = await client.listScheduleRuns('sched_abc123', {
  page: 1,
  perPage: 20,
});
```

## Storage Management
Connect cloud storage to automatically save scrape results.

### listStorageConfigs()

```javascript
const configs = await client.listStorageConfigs();
```

### createStorageConfig(name, storageType, config, options)

```javascript
// S3-compatible storage
const s3Storage = await client.createStorageConfig(
  'My S3',
  's3_compatible',
  {
    bucket: 'my-bucket',
    region: 'us-east-1',
    access_key: '...',
    secret_key: '...',
  },
  { setAsDefault: true }
);

// Google Cloud Storage
const gcsStorage = await client.createStorageConfig(
  'My GCS',
  'gcs',
  { bucket: 'my-bucket', credentials_json: '...' }
);

// Azure Blob Storage
const azureStorage = await client.createStorageConfig(
  'My Azure',
  'azure_blob',
  { container: 'my-container', connection_string: '...' }
);
```

### updateStorageConfig(storageId, options)

```javascript
const storage = await client.updateStorageConfig('stor_abc123', {
  name: 'Renamed Storage',
  setAsDefault: true,
});
```

### deleteStorageConfig(storageId)

```javascript
await client.deleteStorageConfig('stor_abc123');
```

## Public API
Access proxy credentials and related data.

### getProxyData()

Get detailed information about your proxy products.

```javascript
const data = await client.getProxyData();
// Returns: { products: { rp: {...}, sdc: {...}, mp: {...} }, ... }
```

### getTargetingOptions()

Get available targeting parameters for different proxy types.

```javascript
const options = await client.getTargetingOptions();
```

### getScraperData()

Get information about your Scraper API access.

```javascript
const data = await client.getScraperData();
```

### getBrowserData()

Get information about your Browser API access.

```javascript
const data = await client.getBrowserData();
```

### rotateSession(sessionId, product)

Force an IP address change for an existing proxy session.

```javascript
const result = await client.rotateSession('abc12345', 'rp');
// product: 'rpc', 'rp', 'sdc', 'mp'
```

### generateProxies(options)

Generate proxy strings with specific targeting parameters.

```javascript
const proxies = await client.generateProxies({
  product: 'rp',
  countries: 'US,GB,DE',
  city: 'New York',
  session: 'sticky',
  amount: 10,
  protocol: 'http',
  lifetime: 30,
  adblock: true,
});
// Returns plain text, one proxy per line
```
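Since the response is plain text with one proxy per line, you will usually want it as an array. A trivial local helper (our name, not part of the client):

```javascript
// Split the plain-text proxy list into an array, dropping blank lines
// and surrounding whitespace.
function splitProxyList(text) {
  return text
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
}
```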
## Account Info

### getAccountInfo()

```javascript
const info = await client.getAccountInfo();
console.log(info.credits);
```

## Proxy String Builder
Evomi provides a proxy network you can use with any HTTP client. Build proxy strings for fetch, axios, or any other library:

```javascript
import { EvomiClient, ProxyType } from 'evomi-client';

const client = new EvomiClient({ apiKey: 'your-api-key' });

// Build a proxy string for a US residential proxy
const proxyString = await client.buildProxyString({
  proxyType: ProxyType.RESIDENTIAL,
  country: 'US',
  session: 'abc12345',
});

console.log(proxyString);
// Output: http://user:pass@rp.evomi.com:1000
```
## Manual Proxy Configuration

```javascript
import { ProxyConfig, ProxyType, ProxyProtocol } from 'evomi-client';

const config = new ProxyConfig({
  proxyType: ProxyType.RESIDENTIAL,
  protocol: ProxyProtocol.HTTP,
  country: 'US',
  city: 'New York',
  username: 'your-username',
  password: 'your-password',
});

const proxyString = config.buildProxyString();
```

## Proxy Types
| Type | Endpoint | Use Case |
|---|---|---|
| Residential | `rp.evomi.com:1000` | Human-like browsing, anti-bot bypass |
| Datacenter | `dcp.evomi.com:2000` | Fast, high-volume requests |
| Mobile | `mp.evomi.com:3000` | Highest trust, mobile-specific targets |
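Some HTTP clients (axios, for example) want the proxy as separate host/port/auth fields rather than a single URL. This sketch splits a proxy string with Node's built-in `URL` parser; the helper name is ours:

```javascript
// Split a proxy string like http://user:pass@host:port into the pieces
// most HTTP client configs accept.
function parseProxyString(proxyString) {
  const url = new URL(proxyString);
  return {
    protocol: url.protocol.replace(':', ''),
    host: url.hostname,
    port: Number(url.port),
    username: decodeURIComponent(url.username),
    password: decodeURIComponent(url.password),
  };
}
```

With axios, the result maps onto its `proxy` option roughly as `{ protocol, host, port, auth: { username, password } }`.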
## Error Handling

```javascript
try {
  const result = await client.scrape('https://example.com');
} catch (error) {
  console.error('Scraping failed:', error.message);
}
```
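For transient failures it can be worth retrying with backoff. A minimal sketch; which failures are actually retryable (timeouts, rate limits, 5xx) depends on the error details the client exposes, so for illustration this retries everything:

```javascript
// Retry an async operation with exponential backoff.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt += 1) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        // Backoff: 500ms, 1s, 2s, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

// const result = await withRetry(() => client.scrape('https://example.com'));
```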
## Credits & Pricing

All operations consume credits:
| Operation | Cost |
|---|---|
| Base request | 1 credit |
| Browser mode | 5x multiplier |
| Residential proxy | 2x multiplier |
| AI enhancement | +30 credits |
| Screenshot/PDF | +1 credit each |
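The numbers in the table can be turned into a rough cost estimator. Whether the browser and residential multipliers stack multiplicatively is our assumption; treat this as a ballpark sketch, not a billing guarantee:

```javascript
// Rough credit estimate per request, based on the pricing table above.
// ASSUMPTION: multipliers combine multiplicatively before flat add-ons.
function estimateCredits({ browser = false, residential = false, ai = false, screenshot = false, pdf = false } = {}) {
  let credits = 1; // base request
  if (browser) credits *= 5;
  if (residential) credits *= 2;
  if (ai) credits += 30;
  if (screenshot) credits += 1;
  if (pdf) credits += 1;
  return credits;
}
```

The authoritative numbers are always the `_credits_used` field returned with each result.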
Credit usage is returned in the result:

```javascript
console.log(result._credits_used);
console.log(result._credits_remaining);
```

## Requirements
- Node.js >= 18 (for native `fetch`)

## Resources
| Resource | Link |
|---|---|
| npm Package | npmjs.com/package/evomi-client |
| Evomi Website | evomi.com |
| API Documentation | docs.evomi.com |
## Benefits
- Promise-based — Uses modern async/await for clean flow
- Full API Coverage — All Evomi endpoints supported
- Node.js Ready — Built with native fetch support